\begin{document}
\newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\gamma}{\gamma} \newcommand{\delta}{\delta} \newcommand{\epsilon}{\epsilon} \newcommand{\zeta}{\zeta} \newcommand{\theta}{\theta} \newcommand{\iota}{\iota} \newcommand{\kappa}{\kappa} \newcommand{\lambda}{\lambda} \newcommand{\sigma}{\sigma} \newcommand{\upsilon}{\upsilon} \newcommand{\omega}{\omega} \newcommand{\varepsilon}{\varepsilon} \newcommand{\vartheta}{\vartheta} \newcommand{\varpi}{\varpi} \newcommand{\varrho}{\varrho} \newcommand{\varsigma}{\varsigma} \newcommand{\varphi}{\varphi} \newcommand{\Gamma}{\Gamma} \newcommand{\Delta}{\Delta} \newcommand{\Theta}{\Theta} \newcommand{\Lambda}{\Lambda} \newcommand{\Sigma}{\Sigma} \newcommand{\Upsilon}{\Upsilon} \newcommand{\Omega}{\Omega}
\newcommand{\frka}{{\mathfrak a}} \newcommand{\frkA}{{\mathfrak A}} \newcommand{\frkb}{{\mathfrak b}} \newcommand{\frkB}{{\mathfrak B}} \newcommand{\frkc}{{\mathfrak c}} \newcommand{\frkC}{{\mathfrak C}} \newcommand{\frkd}{{\mathfrak d}} \newcommand{\frkD}{{\mathfrak D}} \newcommand{\frke}{{\mathfrak e}} \newcommand{\frkE}{{\mathfrak E}} \newcommand{\frkf}{{\mathfrak f}} \newcommand{\frkF}{{\mathfrak F}} \newcommand{\frkg}{{\mathfrak g}} \newcommand{\frkG}{{\mathfrak G}} \newcommand{\frkh}{{\mathfrak h}} \newcommand{\frkH}{{\mathfrak H}} \newcommand{\frki}{{\mathfrak i}} \newcommand{\frkI}{{\mathfrak I}} \newcommand{\frkj}{{\mathfrak j}} \newcommand{\frkJ}{{\mathfrak J}} \newcommand{\frkk}{{\mathfrak k}} \newcommand{\frkK}{{\mathfrak K}} \newcommand{\frkl}{{\mathfrak l}} \newcommand{\frkL}{{\mathfrak L}} \newcommand{\frkm}{{\mathfrak m}} \newcommand{\frkM}{{\mathfrak M}} \newcommand{\frkn}{{\mathfrak n}} \newcommand{\frkN}{{\mathfrak N}} \newcommand{\frko}{{\mathfrak o}} \newcommand{\frkO}{{\mathfrak O}} \newcommand{\frkp}{{\mathfrak p}} \newcommand{\frkP}{{\mathfrak P}} \newcommand{\frkq}{{\mathfrak q}} \newcommand{\frkQ}{{\mathfrak Q}} \newcommand{\frkr}{{\mathfrak r}} \newcommand{\frkR}{{\mathfrak R}} \newcommand{\frks}{{\mathfrak s}} \newcommand{\frkS}{{\mathfrak S}} \newcommand{\frkt}{{\mathfrak t}} \newcommand{\frkT}{{\mathfrak T}} \newcommand{\frku}{{\mathfrak u}} \newcommand{\frkU}{{\mathfrak U}} \newcommand{\frkv}{{\mathfrak v}} \newcommand{\frkV}{{\mathfrak V}} \newcommand{\frkw}{{\mathfrak w}} \newcommand{\frkW}{{\mathfrak W}} \newcommand{\frkx}{{\mathfrak x}} \newcommand{\frkX}{{\mathfrak X}} \newcommand{\frky}{{\mathfrak y}} \newcommand{\frkY}{{\mathfrak Y}} \newcommand{\frkz}{{\mathfrak z}} \newcommand{\frkZ}{{\mathfrak Z}}
\newcommand{\bfa}{{\mathbf a}} \newcommand{\bfA}{{\mathbf A}} \newcommand{\bfb}{{\mathbf b}} \newcommand{\bfB}{{\mathbf B}} \newcommand{\bfc}{{\mathbf c}} \newcommand{\bfC}{{\mathbf C}} \newcommand{\bfd}{{\mathbf d}} \newcommand{\bfD}{{\mathbf D}} \newcommand{\bfe}{{\mathbf e}} \newcommand{\bfE}{{\mathbf E}} \newcommand{\bff}{{\mathbf f}} \newcommand{\bfF}{{\mathbf F}} \newcommand{\bfg}{{\mathbf g}} \newcommand{\bfG}{{\mathbf G}} \newcommand{\bfh}{{\mathbf h}} \newcommand{\bfH}{{\mathbf H}} \newcommand{\bfi}{{\mathbf i}} \newcommand{\bfI}{{\mathbf I}} \newcommand{\bfj}{{\mathbf j}} \newcommand{\bfJ}{{\mathbf J}} \newcommand{\bfk}{{\mathbf k}} \newcommand{\bfK}{{\mathbf K}} \newcommand{\bfl}{{\mathbf l}} \newcommand{\bfL}{{\mathbf L}} \newcommand{\bfm}{{\mathbf m}} \newcommand{\bfM}{{\mathbf M}} \newcommand{\bfn}{{\mathbf n}} \newcommand{\bfN}{{\mathbf N}} \newcommand{\bfo}{{\mathbf o}} \newcommand{\bfO}{{\mathbf O}} \newcommand{\bfp}{{\mathbf p}} \newcommand{\bfP}{{\mathbf P}} \newcommand{\bfq}{{\mathbf q}} \newcommand{\bfQ}{{\mathbf Q}} \newcommand{\bfr}{{\mathbf r}} \newcommand{\bfR}{{\mathbf R}} \newcommand{\bfs}{{\mathbf s}} \newcommand{\bfS}{{\mathbf S}} \newcommand{\bft}{{\mathbf t}} \newcommand{\bfT}{{\mathbf T}} \newcommand{\bfu}{{\mathbf u}} \newcommand{\bfU}{{\mathbf U}} \newcommand{\bfv}{{\mathbf v}} \newcommand{\bfV}{{\mathbf V}} \newcommand{\bfw}{{\mathbf w}} \newcommand{\bfW}{{\mathbf W}} \newcommand{\bfx}{{\mathbf x}} \newcommand{\bfX}{{\mathbf X}} \newcommand{\bfy}{{\mathbf y}} \newcommand{\bfY}{{\mathbf Y}} \newcommand{\bfz}{{\mathbf z}} \newcommand{\bfZ}{{\mathbf Z}}
\newcommand{{\mathcal A}}{{\mathcal A}} \newcommand{{\mathcal B}}{{\mathcal B}} \newcommand{{\mathcal C}}{{\mathcal C}} \newcommand{{\mathcal D}}{{\mathcal D}} \newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{{\mathcal F}}{{\mathcal F}} \newcommand{{\mathcal G}}{{\mathcal G}} \newcommand{{\mathcal H}}{{\mathcal H}} \newcommand{{\mathcal I}}{{\mathcal I}} \newcommand{{\mathcal J}}{{\mathcal J}} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathcal L}}{{\mathcal L}} \newcommand{{\mathcal M}}{{\mathcal M}} \newcommand{{\mathcal N}}{{\mathcal N}} \newcommand{{\mathcal O}}{{\mathcal O}} \newcommand{{\mathcal P}}{{\mathcal P}} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{{\mathcal R}}{{\mathcal R}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{{\mathcal U}}{{\mathcal U}} \newcommand{{\mathcal V}}{{\mathcal V}} \newcommand{{\mathcal W}}{{\mathcal W}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\mathcal Y}}{{\mathcal Y}} \newcommand{{\mathcal Z}}{{\mathcal Z}}
\newcommand{{\mathscr A}}{{\mathscr A}} \newcommand{{\mathscr B}}{{\mathscr B}} \newcommand{{\mathscr C}}{{\mathscr C}} \newcommand{{\mathscr D}}{{\mathscr D}} \newcommand{{\mathscr E}}{{\mathscr E}} \newcommand{{\mathscr F}}{{\mathscr F}} \newcommand{{\mathscr G}}{{\mathscr G}} \newcommand{{\mathscr H}}{{\mathscr H}} \newcommand{{\mathscr I}}{{\mathscr I}} \newcommand{{\mathscr J}}{{\mathscr J}} \newcommand{{\mathscr K}}{{\mathscr K}} \newcommand{{\mathscr L}}{{\mathscr L}} \newcommand{{\mathscr M}}{{\mathscr M}} \newcommand{{\mathscr N}}{{\mathscr N}} \newcommand{{\mathscr O}}{{\mathscr O}} \newcommand{{\mathscr P}}{{\mathscr P}} \newcommand{{\mathscr Q}}{{\mathscr Q}} \newcommand{{\mathscr R}}{{\mathscr R}} \newcommand{{\mathscr S}}{{\mathscr S}} \newcommand{{\mathscr T}}{{\mathscr T}} \newcommand{{\mathscr U}}{{\mathscr U}} \newcommand{{\mathscr V}}{{\mathscr V}} \newcommand{{\mathscr W}}{{\mathscr W}} \newcommand{{\mathscr X}}{{\mathscr X}} \newcommand{{\mathscr Y}}{{\mathscr Y}} \newcommand{{\mathscr Z}}{{\mathscr Z}}
\newcommand{{\mathbb A}}{{\mathbb A}} \newcommand{{\mathbb B}}{{\mathbb B}} \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand{{\mathbb D}}{{\mathbb D}} \newcommand{{\mathbb E}}{{\mathbb E}} \newcommand{{\mathbb F}}{{\mathbb F}} \newcommand{{\mathbb G}}{{\mathbb G}} \newcommand{{\mathbb H}}{{\mathbb H}} \newcommand{{\mathbb I}}{{\mathbb I}} \newcommand{{\mathbb J}}{{\mathbb J}} \newcommand{{\mathbb K}}{{\mathbb K}} \newcommand{{\mathbb L}}{{\mathbb L}} \newcommand{{\mathbb M}}{{\mathbb M}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb O}}{{\mathbb O}} \newcommand{{\mathbb P}}{{\mathbb P}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb S}}{{\mathbb S}} \newcommand{{\mathbb T}}{{\mathbb T}} \newcommand{{\mathbb U}}{{\mathbb U}} \newcommand{{\mathbb V}}{{\mathbb V}} \newcommand{{\mathbb W}}{{\mathbb W}} \newcommand{{\mathbb X}}{{\mathbb X}} \newcommand{{\mathbb Y}}{{\mathbb Y}} \newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{\tta}{\hbox{\tt a}} \newcommand{\ttA}{\hbox{\tt A}} \newcommand{\ttb}{\hbox{\tt b}} \newcommand{\ttB}{\hbox{\tt B}} \newcommand{\ttc}{\hbox{\tt c}} \newcommand{\ttC}{\hbox{\tt C}} \newcommand{\ttd}{\hbox{\tt d}} \newcommand{\ttD}{\hbox{\tt D}} \newcommand{\tte}{\hbox{\tt e}} \newcommand{\ttE}{\hbox{\tt E}} \newcommand{\ttf}{\hbox{\tt f}} \newcommand{\ttF}{\hbox{\tt F}} \newcommand{\ttg}{\hbox{\tt g}} \newcommand{\ttG}{\hbox{\tt G}} \newcommand{\tth}{\hbox{\tt h}} \newcommand{\ttH}{\hbox{\tt H}} \newcommand{\tti}{\hbox{\tt i}} \newcommand{\ttI}{\hbox{\tt I}} \newcommand{\ttj}{\hbox{\tt j}} \newcommand{\ttJ}{\hbox{\tt J}} \newcommand{\ttk}{\hbox{\tt k}} \newcommand{\ttK}{\hbox{\tt K}} \newcommand{\ttl}{\hbox{\tt l}} \newcommand{\ttL}{\hbox{\tt L}} \newcommand{\ttm}{\hbox{\tt m}} \newcommand{\ttM}{\hbox{\tt M}} \newcommand{\ttn}{\hbox{\tt n}} \newcommand{\ttN}{\hbox{\tt N}} \newcommand{\tto}{\hbox{\tt o}} \newcommand{\ttO}{\hbox{\tt O}} \newcommand{\ttp}{\hbox{\tt p}} \newcommand{\ttP}{\hbox{\tt P}} \newcommand{\ttq}{\hbox{\tt q}} \newcommand{\ttQ}{\hbox{\tt Q}} \newcommand{\ttr}{\hbox{\tt r}} \newcommand{\ttR}{\hbox{\tt R}} \newcommand{\tts}{\hbox{\tt s}} \newcommand{\ttS}{\hbox{\tt S}} \newcommand{\ttt}{\hbox{\tt t}} \newcommand{\ttT}{\hbox{\tt T}} \newcommand{\ttu}{\hbox{\tt u}} \newcommand{\ttU}{\hbox{\tt U}} \newcommand{\ttv}{\hbox{\tt v}} \newcommand{\ttV}{\hbox{\tt V}} \newcommand{\ttw}{\hbox{\tt w}} \newcommand{\ttW}{\hbox{\tt W}} \newcommand{\ttx}{\hbox{\tt x}} \newcommand{\ttX}{\hbox{\tt X}} \newcommand{\tty}{\hbox{\tt y}} \newcommand{\ttY}{\hbox{\tt Y}} \newcommand{\ttz}{\hbox{\tt z}} \newcommand{\ttZ}{\hbox{\tt Z}}
\newcommand{\phantom}{\phantom} \newcommand{\displaystyle }{\displaystyle } \newcommand{\vphantom{\vrule height 3pt }}{\vphantom{\vrule height 3pt }} \def\bdm #1#2#3#4{\left(
\begin{array} {c|c}{\displaystyle {#1}}
& {\displaystyle {#2}} \\ \hline {\displaystyle {#3}\vphantom{\displaystyle {#3}^1}} & {\displaystyle {#4}} \end{array} \right)} \newcommand{\widetilde }{\widetilde } \newcommand{\backslash }{\backslash } \newcommand{{\mathrm{GL}}}{{\mathrm{GL}}} \newcommand{{\mathrm{SL}}}{{\mathrm{SL}}} \newcommand{{\mathrm{GSp}}}{{\mathrm{GSp}}} \newcommand{{\mathrm{PGSp}}}{{\mathrm{PGSp}}} \newcommand{{\mathrm{Sp}}}{{\mathrm{Sp}}} \newcommand{{\mathrm{SO}}}{{\mathrm{SO}}} \newcommand{{\mathrm{SU}}}{{\mathrm{SU}}} \newcommand{\mathrm{Ind}}{\mathrm{Ind}} \newcommand{{\mathrm{Hom}}}{{\mathrm{Hom}}} \newcommand{{\mathrm{Ad}}}{{\mathrm{Ad}}} \newcommand{{\mathrm{Sym}}}{{\mathrm{Sym}}} \newcommand{\mathrm{M}}{\mathrm{M}} \newcommand{\mathrm{sgn}}{\mathrm{sgn}} \newcommand{\,^t\!}{\,^t\!} \newcommand{\sqrt{-1}}{\sqrt{-1}} \newcommand{\hbox{\bf 0}}{\hbox{\bf 0}} \newcommand{\hbox{\bf 1}}{\hbox{\bf 1}} \newcommand{\lower .3em \hbox{\rm\char'27}\!}{\lower .3em \hbox{\rm\char'27}\!} \newcommand{\bA_{\hbox{\eightrm f}}}{\bA_{\hbox{\eightrm f}}} \newcommand{{\textstyle{\frac12}}}{{\textstyle{\frac12}}} \newcommand{\hbox{\rm\char'43}}{\hbox{\rm\char'43}} \newcommand{\operatorname{Gal}}{\operatorname{Gal}}
\newcommand{{\boldsymbol{\delta}}}{{\boldsymbol{\delta}}} \newcommand{{\boldsymbol{\chi}}}{{\boldsymbol{\chi}}} \newcommand{{\boldsymbol{\gamma}}}{{\boldsymbol{\gamma}}} \newcommand{{\boldsymbol{\omega}}}{{\boldsymbol{\omega}}} \newcommand{{\boldsymbol{\psi}}}{{\boldsymbol{\psi}}} \newcommand{\mathrm{GK}}{\mathrm{GK}} \newcommand{\mathrm{ord}}{\mathrm{ord}} \newcommand{\mathrm{diag}}{\mathrm{diag}} \newcommand{{\underline{a}}}{{\underline{a}}}
\newcommand{\ZZ_{\geq 0}^n}{{\mathbb Z}_{\geq 0}^n} \newcommand{{\mathcal H}^\mathrm{nd}}{{\mathcal H}^\mathrm{nd}} \newcommand{\mathrm{EGK}}{\mathrm{EGK}}
\theoremstyle{definition} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{conjecture}{Conjecture}[section] \newtheorem{definition}{Definition}[section] \newtheorem{statement}[theorem]{Statement} \newtheorem{question}[theorem]{Problem} \theoremstyle{remark} \newtheorem{remark}{Remark}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}{Example}[section] \numberwithin{equation}{section}
\title[On the local density formula and the Gross-Keating invariant] {On the local density formula and the Gross-Keating invariant with an Appendix `The local density of a binary quadratic form' by T. Ikeda and H. Katsurada}
\keywords{quadratic forms, Gross-Keating invariant, local density} \thanks{The author is partially supported by JSPS KAKENHI Grant No. 16F16316, Samsung Science and Technology Foundation under Project Number SSTF-BA1802-03, and NRF-2018R1A4A1023590.} \subjclass[2010]{11E08, 11E95, 14L15, 20G25}
\author[Sungmun Cho]{Sungmun Cho} \address{Sungmun Cho \\
Department of Mathematics, POSTECH, 77, Cheongam-ro, Nam-gu, Pohang-si, Gyeongsangbuk-do, 37673, KOREA} \email{[email protected]}
\maketitle
\begin{abstract} T. Ikeda and H. Katsurada have developed the theory of the Gross-Keating invariant of a quadratic form in their recent papers \cite{IK1} and \cite{IK2}. In particular, in \cite{IK2} they prove that the local factors of the Fourier coefficients of the Siegel-Eisenstein series are completely determined by the Gross-Keating invariants together with certain extra data, called the extended GK data.
On the other hand, such a local factor is a special case of the local density for a pair of quadratic forms. Thus we propose the general question of whether the local density can be expressed in terms of a certain series of
extended GK data.
In this paper, we prove that the answer to this question is affirmative for the local density of a single quadratic form defined over an unramified finite extension of $\mathbb{Z}_2$, and over a finite extension of $\mathbb{Z}_p$ with $p$ odd. In the appendix, T.~Ikeda and H.~Katsurada compute the local density formula of a single binary quadratic form defined over any finite extension of $\mathbb{Z}_2$. \end{abstract}
\tableofcontents
\section{Introduction}\label{intro} In 1993, B. Gross and K. Keating defined a certain invariant of a ternary quadratic form over $\mathbb{Z}_p$, in order to formulate an expression for the arithmetic intersection number of three cycles associated to three modular polynomials over the moduli stack of pairs of elliptic curves in \cite{GK}. This invariant has been generalized to quadratic forms of any degree over a local field, and is now called the Gross-Keating invariant.
The Gross-Keating invariant had been almost forgotten\footnote{The Gross-Keating invariant was treated in \cite{Bouw}. The first sentence of loc. cit. says `This note provides details on \cite{GK} Section 4.'} for a while after the work of Gross and Keating. It was T. Ikeda and H. Katsurada who recently developed the theory of the Gross-Keating invariant in \cite{IK1} and discovered its importance for the study of the Fourier coefficients of the Siegel-Eisenstein series\footnote{We follow the notion and the definition of \cite{Katsurada} for the Siegel-Eisenstein series.} of any degree and any weight (see \cite{IK2}). Furthermore, it has been shown in \cite{CY} that the Gross-Keating invariant plays a key role in investigating, within Kudla's program, an analogy between intersection numbers on orthogonal Shimura varieties and the Fourier coefficients of the Siegel-Eisenstein series. A formula for the Gross-Keating invariant over an unramified finite extension of $\mathbb{Z}_2$ is derived in \cite{CIKY1} and \cite{CIKY2}. \\
On the other hand, the local density, denoted by $\alpha(L, L')$, of a pair of quadratic lattices $(L, Q_L)$ and $(L', Q_{L'})$
defined over a finite extension of $\mathbb{Z}_p$ provides vital information towards computing the number of representations of a global quadratic form, which is a central problem in the theory of the Siegel-Weil formula as well as in the arithmetic theory of quadratic forms. Special but important cases of the local density $\alpha(L, L')$ are as follows: \begin{enumerate} \item If $L'$ is a hyperbolic space defined over $\mathbb{Z}_p$, so that $L'$ is equivalent to \[ \begin{pmatrix} 0& 1/2 \\ 1/2 & 0 \end{pmatrix}\perp \cdots \perp \begin{pmatrix} 0& 1/2 \\ 1/2 & 0 \end{pmatrix}, \] then the associated local density $\alpha(L, L')$ is the local factor at $p$ of the Fourier coefficients of the Siegel-Eisenstein series. For a detailed explanation, see \cite{Katsurada}.
\item If $L=L'$, then the local density $\alpha(L, L)$ is the local factor at $p$ of the Smith-Minkowski-Siegel mass formula, which is an essential tool for the classification of integral quadratic lattices (over a finite extension of $\mathbb{Z}$). We refer to the introduction of \cite{Cho} for the history of the local density of a single quadratic form. \end{enumerate}
Recently, in \cite{IK2}, Ikeda and Katsurada showed that the local density in case (1) above is completely determined by the Gross-Keating invariant together with an extra datum, called the extended GK datum. Motivated by their observation, we generalize their philosophy in the following question:
\begin{question}\label{question}
Can we give a rule for associating to each pair $(L, L')$ of quadratic lattices a sequence $\mathcal{L}(L,L')$ of quadratic lattices with the property that if $\mathrm{EGK}(\mathcal{L}(L,L'))=\mathrm{EGK}(\mathcal{L}(M,M'))$ as multisets then \[ \textit{$\alpha(L, L')=\alpha(M, M')$}? \] \end{question}
The purpose of this paper is to answer this question in case (2) above, when $L=L'$ is defined over a finite unramified extension of $\mathbb{Z}_2$. In the author's previous paper \cite{Cho}, the local density formula in this case is given in terms of certain smooth group schemes. The main theorem of our paper is the following:
\begin{theorem}\label{maintheorem}(Theorem \ref{thm-ldgk}) For a quadratic lattice $(L, Q_L)$ defined over a finite unramified extension $\mathfrak{o}$ of $\mathbb{Z}_2$,
the local density $\alpha(L,L)$ is completely determined by the collection consisting of $\mathrm{GK}(L\oplus -L)$ together with the $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ as $i$ runs over all integers such that $L_i$ is nonzero, where $L=\oplus_i L_i$ is a Jordan splitting.
In other words, given any two quadratic lattices $L, M$ satisfying \[ \left\{
\begin{array}{l}
\textit{$\mathrm{GK}(L\oplus -L)=\mathrm{GK}(M\oplus -M)$};\\
\textit{$\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=\mathrm{EGK}(M\cap 2^i M^\sharp)^{\leq 1}$ for all $i$},
\end{array} \right. \] we have that \[\alpha(L,L)=\alpha(M,M).\] Here, $\mathrm{GK}(L)$ is defined in Definition \ref{def:2.1}, $\mathrm{EGK}(L)$ is defined in Definition \ref{def:3.3}, and $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ is defined in Definition \ref{tegk}. The lattice $L^\sharp$ is the dual lattice of $L$ and the lattice $L\cap 2^i L^\sharp$ is characterized as follows: \[ L\cap 2^i L^\sharp=\{x\in L \mid \langle x,L\rangle_{Q_L} \in 2^i\frko\}, \] where $\langle -,-\rangle_{Q_L}$ is the symmetric bilinear form associated to $Q_L$ such that $\langle x,x\rangle_{Q_L}=Q_L(x)$ for $x\in L$.
\end{theorem}
When $p$ is odd, it is easy to see that the local density $\alpha(L, L)$ is completely determined by $\mathrm{GK}(L)$, as explained in Remark \ref{podd}.
This paper is organized as follows. In Section \ref{sec:1}, we explain the notation and the definitions of the Gross-Keating invariant and the extended GK datum, taken from \cite{IK1}, whose conventions we follow. In Section \ref{sec:5.1}, we recall the local density formula given in the author's previous paper \cite{Cho}. In Section \ref{sec:5.2}, we prove the above Theorem \ref{maintheorem} by introducing the `truncated' extended GK datum (cf. Section \ref{sss3}), which is much simpler than the extended GK datum. An appendix to this paper has been written by T.~Ikeda and H.~Katsurada to compute the local density of binary quadratic forms over any finite extension of $\mathbb{Z}_2$.
\begin{remark} In the appendix, Ikeda and Katsurada also compute $\mathrm{GK}(L\perp -L)$ and $\mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}$ for a quadratic lattice $L$ of rank $2$ over an arbitrary finite extension of $\mathbb{Z}_2$. For the notation $\varpi$, we refer to Section \ref{ssnota}. However, they show that the local density is not determined by these invariants (see Example \ref{ex:A1}). Thus, towards Problem \ref{question}, we may need more subtle invariants to determine the local density. \end{remark}
\textbf{Acknowledgments.} The author would like to express deep appreciation to Professors T. Ikeda and H. Katsurada for many fruitful discussions and for providing the appendix. The author would also like to thank the referee for helpful suggestions and comments, which substantially improved the presentation of this paper.
\section{Notation and definition} \label{sec:1} \subsection{Notation}\label{ssnota} \begin{itemize} \item Let $F$ be a finite field extension of $\mathbb{Q}_p$, and $\frko=\frko_F$ its ring of integers. Let $\mathfrak{o}^{\times}$ be the set of units in $\mathfrak{o}$. The maximal ideal and the residue field of $\frko$ are denoted by $\frkp$ and $\frkk$, respectively. We put $q=[\frko:\frkp]$.
\item $F$ is said to be dyadic if $q$ is even.
\item We fix a prime element $\varpi$ of $\frko$ once and for all.
\item The order of $x\in F^\times$ is given by $\mathrm{ord}(x)=n$ for $x\in \varpi^n \frko^\times$. We understand $\mathrm{ord}(0)=+\infty$. The order of $x$ is sometimes referred to as the exponential valuation of $x$.
\item Put $F^{\times 2}=\{x^2\,|\, x\in F^\times\}$. Similarly, we put $\frko^{\times 2}=\{x^2\,|\, x\in\frko^\times\}$.
\item We consider an $\mathfrak{o}$-lattice $L$ with a quadratic form $Q_L:L \rightarrow \mathfrak{o}$. Here, an $\mathfrak{o}$-lattice means a finitely generated free $\mathfrak{o}$-module. Such a quadratic form $Q_L$ is called \textit{an integral quadratic form} and such a pair $(L, Q_L)$ is called \textit{a quadratic lattice}. We sometimes say that $L$ is a quadratic lattice by omitting $Q_L$, if this does not cause confusion or ambiguity. Let $\langle -,-\rangle_{Q_L}$ be the symmetric bilinear form on $L$ such that $$\langle x,y\rangle_{Q_L}=\frac{1}{2}(Q_L(x+y)-Q_L(x)-Q_L(y)).$$ Note that the bilinear form $\langle -,-\rangle_{Q_L}$ takes values in $\frac{1}{2}\mathfrak{o}$ and thus corresponds to an element of $\mathcal{H}_n(\mathfrak{o})$, which will be defined in Section \ref{sec:2}.
We assume that $V=L\otimes_{\mathfrak{o}} F$ is non-degenerate with respect to $\langle -,-\rangle_{Q_L}$.
\item A quadratic lattice $L$ is the \textit{orthogonal sum} of sublattices $L_1$ and $L_2$, written $L=L_1\oplus L_2$, if $L_1\cap L_2=0$, $L_1$ is orthogonal to $L_2$ with respect to the symmetric bilinear form $\langle-,- \rangle_{Q_L}$, and $L_1$ and $L_2$ together span $L$.
\item When $R$ is a ring, the set of $m\times n$ matrices with entries in $R$ is denoted by $\mathrm{M}_{mn}(R)$ or $\mathrm{M}_{m,n}(R)$. As usual, $\mathrm{M}_n(R)=\mathrm{M}_{n,n}(R)$. \item The identity matrix of size $n$ is denoted by $\mathbf{1}_n$. \item For $X_1\in \mathrm{M}_s(R)$ and $X_2\in\mathrm{M}_t(R)$, the matrix $\begin{pmatrix} X_1 & 0 \\ 0 & X_2\end{pmatrix}\in\mathrm{M}_{s+t}(R)$ is denoted by $X_1\perp X_2$. \item The diagonal matrix whose diagonal entries are $b_1$, $\ldots$, $b_n$ is denoted by $\mathrm{diag}(b_1, \dots, b_n)=(b_1)\perp\dots\perp (b_n)$.
\item Let $(a_1, \cdots, a_m)$ and $(b_1, \cdots, b_n)$ be non-decreasing sequences consisting of non-negative integers. Then $(a_1, \cdots, a_m)\cup (b_1, \cdots, b_n)$ is defined as the non-decreasing sequence $(c_1, \cdots, c_{n+m})$ such that $\{c_1, \cdots, c_{n+m}\}=\{a_1, \cdots, a_m\}\cup \{b_1, \cdots, b_n\}$ as multisets. For example, $(0,1,4)\cup (1,3)=(0,1,1,3,4)$.
\item For $\underline{a}=(a_1, \cdots, a_n)$
with each $a_i$ an element of $\mathbb{Z}$, the sum $a_1+\cdots +a_n$ is denoted by $|\underline{a}|$.
\item For $\underline{a}=(a_1, \cdots, a_n)$ with each $a_i$ an element of $\mathbb{Z}$, the $m$-tuple $(a_1, \cdots, a_m)$ with $m\leq n$ is denoted by $\underline{a}^{(m)}$.
\item The set of symmetric matrices $B\in \mathrm{M}_n(F)$ of size $n$ is denoted by $\mathrm{Sym}_n(F)$. Similarly, define $\mathrm{Sym}_n(\mathfrak{o})$. \item For $B\in \mathrm{Sym}_n(F)$ and $X\in\mathrm{M}_{n,m}(F)$, we set $B[X]={}^t\! XBX \in \mathrm{Sym}_m(F)$. \item When $G$ is a subgroup of ${\mathrm{GL}}_n(F)$, we shall say that two elements $B_1, B_2\in\mathrm{Sym}_n(F)$ are $G$-equivalent, if there is an element $X\in G$ such that $B_1[X]=B_2$. \end{itemize}
\subsection{Gross-Keating invariants} \label{sec:2} In this subsection, we explain the definition of the Gross-Keating invariant
and collect some theorems, taken from \cite{IK1}.
\begin{itemize} \item We say that $B=(b_{ij})\in \mathrm{Sym}_n(F)$ is a half-integral symmetric matrix if \begin{align*} b_{ii}\in\frko_F &\qquad (1\leq i\leq n), \\ 2b_{ij}\in\frko_F& \qquad (1\leq i\leq j\leq n). \end{align*} \item The set of all half-integral symmetric matrices of size $n$ is denoted by ${\mathcal H}_n(\frko)$. \item An element $B\in{\mathcal H}_n(\frko)$ is non-degenerate if $\det B\neq 0$. \item The set of all non-degenerate elements of ${\mathcal H}_n(\frko)$ is denoted by ${\mathcal H}^\mathrm{nd}_n(\frko)$. \item For $B=(b_{ij})_{1\leq i,j\leq n}\in{\mathcal H}_n(\frko)$ and $1\leq m\leq n$, we denote the upper left $m\times m$ submatrix $(b_{ij})_{1\leq i, j\leq m}\in{\mathcal H}_m(\frko)$ by $B^{(m)}$.
\item For $B=(b_{ij})\in {\mathcal H}^\mathrm{nd}_n(\frko)$, we say that the quadratic lattice $(L, Q_L)$ is represented by $B$ if $L$ is of rank $n$ and there is an ordered basis $(e_1, \cdots, e_n)$ of $L$ such that $b_{ij}=\langle e_i,e_j\rangle_{Q_L}$.
\item When two elements $B, B'\in{\mathcal H}_n(\frko)$ are ${\mathrm{GL}}_n(\frko)$-equivalent, we just say they are equivalent and write $B\sim B'$. For two quadratic lattices $(L, Q_L)$ and $(L', Q_{L'})$ represented by $B$ and $B'$ respectively, we say that $(L, Q_L)$ and $(L', Q_{L'})$ are equivalent if $B$ and $B'$ are equivalent. We sometimes say that $Q_L$ and $Q_{L'}$ are equivalent, if this does not cause confusion or ambiguity.
\item The equivalence class of $B$ is denoted by $\{B\}_{equiv}$, i.e.,
$\{B\}_{equiv}=\{B[U]\,|\, U\in{\mathrm{GL}}_n(\frko)\}$.
\end{itemize}
\begin{definition}(\cite{IK1}, Definitions 0.1 and 0.2) \label{def:2.1} \begin{enumerate} \item Let $B=(b_{ij})\in{\mathcal H}^\mathrm{nd}_n(\frko)$. Let $S(B)$ be the set of all non-decreasing sequences $(a_1, \ldots, a_n)\in\ZZ_{\geq 0}^n$ such that \begin{align*} &\mathrm{ord}(b_{ii})\geq a_i \qquad\qquad\qquad\quad (1\leq i\leq n), \\ &\mathrm{ord}(2 b_{ij})\geq (a_i+a_j)/2 \qquad\; (1\leq i\leq j\leq n). \end{align*} Put \[ \bfS(\{B\}_{equiv})=\bigcup_{B'\in\{B\}_{equiv}} S(B')=\bigcup_{U\in{\mathrm{GL}}_n(\frko)} S(B[U]). \] The Gross-Keating invariant $\mathrm{GK}(B)$ of $B$ is the greatest element of $\bfS(\{B\}_{equiv})$ with respect to the lexicographic order $\succeq$ on $\ZZ_{\geq 0}^n$.
\item A symmetric matrix $B\ (\in{\mathcal H}^\mathrm{nd}_n(\frko))$ is called \textit{optimal} if $\mathrm{GK}(B)\in S(B)$.
\item If $B$ is a symmetric matrix of a quadratic lattice $(L, Q_L)$, then $\mathrm{GK}(L)$, called the Gross-Keating invariant of $(L, Q_L)$, is defined by $\mathrm{GK}(B)$. $\mathrm{GK}(L)$ is independent of the choice of the matrix $B$. \end{enumerate} \end{definition}
It is known that the set $\mathbf{S}(\{B\}_{equiv})$ is finite (cf. \cite{IK1}), which explains the well-definedness of $\mathrm{GK}(B)$. We can also see that $\mathrm{GK}(B)$ depends only on the equivalence class of $B$.
A sequence of length $0$ is denoted by $\emptyset$. When $B$ is the empty matrix, we understand $\mathrm{GK}(B)=\emptyset$.
Since $\mathrm{GK}(B)$ is the greatest element of the finite set $\bfS(\{B\}_{equiv})$, it lies in $S(B')$ for some $B'$ equivalent to $B$, and such a $B'$ is optimal. In other words, every non-degenerate half-integral symmetric matrix $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$ is equivalent to an optimal form.\\
For $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$, we put $D_B=(-4)^{[n/2]}\det B$. If $n$ is even, we denote the discriminant ideal of $F(\sqrt{D_B})/F$ by $\mathfrak{D} _B$. We put
\[ \xi_B= \begin{cases} 1 & \text{ if $D_B\in F^{\times 2}$,} \\ -1 & \text{ if $F(\sqrt{D_B})/F$ is unramified and $[F(\sqrt{D_B}):F]=2$,} \\ 0 & \text{ if $F(\sqrt{D_B})/F$ is ramified.} \end{cases} \] \begin{definition}(\cite{IK1}, Definition 0.3) \label{def:0.3} For $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$, we put \[ \Delta(B)= \begin{cases} \mathrm{ord}(D_B) & \text{ if $n$ is odd,} \\ \mathrm{ord}(D_B)-\mathrm{ord}(\mathfrak{D}_B)+1-\xi_B^2 & \text{ if $n$ is even.} \end{cases} \] \end{definition} Note that if $n$ is even, then \[ \Delta(B)= \begin{cases} \mathrm{ord}(D_B) & \text{ if $\mathrm{ord}(\mathfrak{D}_B)=0$,} \\ \mathrm{ord}(D_B)-\mathrm{ord}(\mathfrak{D}_B)+1 & \text{ if $\mathrm{ord}(\mathfrak{D}_B)>0$.} \end{cases} \]
One of the main results of \cite{IK1} is the following theorem: \begin{theorem}[\cite{IK1}, Theorem 0.1] \label{thm:2.1} For $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$, we have \[
|\mathrm{GK}(B)|=\Delta(B). \] \end{theorem}
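The following elementary example, with $F=\mathbb{Q}_2$ (so $\varpi=2$), illustrates Definition \ref{def:2.1} and Theorem \ref{thm:2.1}; it is included only for orientation.
\begin{example}\label{ex:gk22} Let $F=\mathbb{Q}_2$ and $B=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\in{\mathcal H}^\mathrm{nd}_2(\mathbb{Z}_2)$. A sequence $(a_1, a_2)\in S(B)$ must satisfy $a_1\leq \mathrm{ord}(2)=1$, $a_2\leq \mathrm{ord}(2)=1$ and $a_1+a_2\leq 2\,\mathrm{ord}(2\cdot 1)=2$, so $(1,1)$ is the greatest element of $S(B)$. On the other hand, $D_B=-4\det B=-12$, and $\mathbb{Q}_2(\sqrt{-12})=\mathbb{Q}_2(\sqrt{-3})$ is the unramified quadratic extension of $\mathbb{Q}_2$, so $\xi_B=-1$, $\mathrm{ord}(\mathfrak{D}_B)=0$ and $\Delta(B)=\mathrm{ord}(D_B)-0+1-1=2$. By Theorem \ref{thm:2.1}, $|\mathrm{GK}(B)|=2$; since $(1,1)$ is the lexicographically greatest non-decreasing sequence in $\mathbb{Z}_{\geq 0}^2$ with sum $2$ and $(1,1)\in S(B)$, we conclude that $\mathrm{GK}(B)=(1,1)$ and that $B$ is optimal.
\end{example}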
\begin{definition}(\cite{IK1}, Definition 0.4) \label{def:2.4} The Clifford invariant of $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$ is the Hasse invariant of the Clifford algebra (resp.~the even Clifford algebra) of $B$ if $n$ is even (resp.~odd). \end{definition} We denote the Clifford invariant of $B$ by $\eta_B$.
If $B$ is ${\mathrm{GL}}_n(F)$-equivalent to $\mathrm{diag}(b'_1, \ldots, b'_n)$, then \begin{align*} \eta_B =& \langle -1, -1 \rangle^{[(n+1)/4]}\langle -1, \det B \rangle^{[(n-1)/2]} \prod_{i < j} \langle b'_i, b'_j \rangle \\ =&\begin{cases} \langle -1, -1 \rangle^{m(m-1)/2}\langle -1, \det B \rangle^{m-1} \displaystyle \prod_{i < j} \langle b'_i, b'_j \rangle & \text{ if $n=2m$, } \\ \noalign{\vskip 6pt} \langle -1, -1 \rangle^{m(m+1)/2}\langle -1, \det B \rangle^{m} \displaystyle \prod_{i < j} \langle b'_i, b'_j \rangle & \text{ if $n=2m+1$. } \end{cases} \end{align*} Here, $\langle -,- \rangle$ is the quadratic Hilbert symbol. If $H\in{\mathcal H}^\mathrm{nd}_2(\frko)$ is ${\mathrm{GL}}_2(F)$-isomorphic to a hyperbolic plane, then $\eta_{B\perp H}=\eta_B$. In particular, if $n$ is odd, then we have \begin{align*} \eta_B =& \begin{cases} 1 & \text{ if $B$ is split over $F$, that is, the associated Witt index is $\frac{n-1}{2}$,} \\ -1 & \text{ otherwise.} \end{cases} \end{align*}
The following theorem is necessary to define the extended GK datum, which will be explained in the next subsection. \begin{theorem}[\cite{IK1}, Theorem 0.4] \label{thm:2.4} Let $B, B_1\in{\mathcal H}^\mathrm{nd}_n(\frko)$. Suppose that $B\sim B_1$ and both $B$ and $B_1$ are optimal. Let ${\underline{a}}=(a_1, a_2, \ldots, a_n)=\mathrm{GK}(B)=\mathrm{GK}(B_1)$. Suppose that $a_k<a_{k+1}$ for $1\leq k < n$. Then the following assertions (1) and (2) hold. \begin{itemize} \item[(1)] If $k$ is even, then $\xi_{B^{(k)}}=\xi_{B_1^{(k)}}$. \item[(2)] If $k$ is odd, then $\eta_{B^{(k)}}=\eta_{B_1^{(k)}}$. \end{itemize} \end{theorem}
\subsection{The extended GK datum} \label{sec:3}
Ikeda and Katsurada augment the notion of the Gross-Keating invariant with additional data, resulting in an invariant that they call the extended GK datum, whose definition we now recall in detail from \cite{IK1}.
\begin{definition}[\cite{IK2}, Definition 3.1]\label{def3.0} Let $\underline{a}=(a_1, \cdots, a_n)$ be a non-decreasing sequence of non-negative integers. Write $\underline{a}$ as \[\underline{a}=(\underbrace{m_1, \cdots, m_1}_{n_1}, \cdots, \underbrace{m_r, \cdots, m_r}_{n_r} )\] with $m_1<\cdots <m_r$ and $n=n_1+\cdots + n_r$. For $s=1, 2, \cdots, r$, put \[n_s^{\ast}=\sum_{u=1}^{s}n_u,\] and \[ I_s=\{ n_{s-1}^{\ast}+1, n_{s-1}^{\ast}+2, \cdots, n_{s}^{\ast}\}. \] Here, we let $n_0^{\ast}=0$. \end{definition}
\begin{definition} \label{def:3.3}(\cite{IK1}, Definition 6.3) We define the extended GK datum as follows. \begin{enumerate} \item Let $B \in {\mathcal H}^\mathrm{nd}_n(\frko )$ be an optimal form such that $\mathrm{GK}(B)={\underline{a}}=(a_1,\ldots,a_n)$.
We define $\zeta_s=\zeta_s(B)$ by \[ \zeta_s=\zeta_s(B)= \begin{cases} \xi_{B^{(n_s^\ast)}} & \text{ if $n_s^\ast$ is even,} \\ \eta_{B^{(n_s^\ast)}} & \text{ if $n_s^\ast$ is odd.} \end{cases} \] Then the extended GK datum of $B$, denoted by $\mathrm{EGK}(B)$, is defined as follows: $$\mathrm{EGK}(B)=(n_1,\ldots,n_r;m_1,\ldots,m_r;\zeta_1,\ldots,\zeta_r).$$ Here, the integers $n_i$'s and $m_j$'s are obtained from $\mathrm{GK}(B)=(a_1, \cdots, a_n)$ as in Definition \ref{def3.0}.
\item For $B \in {\mathcal H}^\mathrm{nd}_n(\frko )$, we define $\mathrm{EGK}(B)=\mathrm{EGK}(B')$, where $B'$ is an optimal form equivalent to $B$. This definition does not depend on the choice of an optimal form $B'$ by Theorem \ref{thm:2.4}.
\item If $B$ is a symmetric matrix with respect to an ordered basis of a quadratic lattice $(L, Q_L)$, then $\mathrm{EGK}(L)$, called the extended GK datum of $(L, Q_L)$, is defined by $\mathrm{EGK}(B)$. \end{enumerate} \end{definition}
Clearly, $\mathrm{EGK}(B)$ (or $\mathrm{EGK}(L)$) depends only on the isomorphism class of $B$ by Theorem \ref{thm:2.4}.
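The following continuation of Example \ref{ex:gk22} (again with $F=\mathbb{Q}_2$, and included only as an illustration) spells out Definitions \ref{def3.0} and \ref{def:3.3} in the simplest case.
\begin{example} Let $F=\mathbb{Q}_2$ and $B=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$, which is optimal with $\mathrm{GK}(B)=(1,1)$ by Example \ref{ex:gk22}. In the notation of Definition \ref{def3.0} we have $r=1$, $n_1=2$, $m_1=1$ and $n_1^\ast=2$. Since $n_1^\ast$ is even, $\zeta_1=\xi_{B^{(2)}}=\xi_B=-1$, as computed in Example \ref{ex:gk22}. Hence \[\mathrm{EGK}(B)=(2;1;-1).\]
\end{example}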
\subsection{Definition of local density} \label{sec:4} In this subsection, we explain a definition of the local density in the general case, taken from Section 5 of \cite{IK2}.
For $m\geq n\geq 1$, $A\in{\mathcal H}_m(\frko)$, $B\in{\mathcal H}_n(\frko)$, we put \[ {\mathcal A}_N(B, A)=
\{X=(x_{ij})\in\mathrm{M}_{mn}(\frko)/\frkp^N \mathrm{M}_{mn}(\frko)\;|\; A[X]-B\in\frkp^N{\mathcal H}_n(\frko)\}. \] Then the local density $\alpha(B, A)$ is defined by \[ \alpha(B,A)=\lim_{N\rightarrow \infty} (q^N)^{-mn+\frac{n(n+1)}2}\, \sharp{\mathcal A}_N(B, A). \] Here, if $N>2\mathrm{ord}(D_B)$, then the value \[ (q^N)^{-mn+\frac{n(n+1)}2}\, \sharp{\mathcal A}_N(B, A) \] does not depend on $N$.
Equivalently,
we have \[ \alpha(B, A)=\int_{y\in\mathrm{Sym}_n(F)} \int_{x\in\mathrm{M}_{mn}(\frko)} \psi\left(\mathrm{tr}( y(A[x]-B))\right)\, dx\, dy \] for an additive character $\psi$ of $F$ with order $0$. Here, an additive character $\psi$ of $F$ with order $0$ means that (cf. Section 2 of \cite{IK2}) \[
\mathfrak{o}=\{a\in F| \psi(ax)=1 \textit{ for any }x\in \mathfrak{o} \}. \] Here the integral $\int_{y\in \mathrm{Sym}_n(F)}$ with respect to $y\in \mathrm{Sym}_n(F)$ should be interpreted as the principal value integral \[ \lim_{N\rightarrow \infty} \int_{y\in \varpi^{-N}{\mathcal L}} \] for some fixed lattice ${\mathcal L}\subset \mathrm{Sym}_n(F)$.
For the quadratic lattice $(L, Q_L)$ represented by $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$ and the quadratic space $(V, Q_V)$ such that $V=L\otimes_{\mathfrak{o}}F$ and $Q_V=Q_L\otimes 1$, the orthogonal group $G=\mathrm{O}_{F}(V)$ is an algebraic group defined over $F$. The local density for the single quadratic lattice $(L, Q_L)$, denoted by $\beta(L)$ or $\beta(B)$, is defined by \[ \beta(L)=\frac{1}{[G:G^\circ]}\alpha(B, B). \] Here, $G^\circ$ is the identity component of $G$.
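The following elementary example, in the non-dyadic case, illustrates the counting definition of the local density; it is included only for orientation.
\begin{example} Suppose that the residue characteristic of $F$ is odd and let $A=B=(1)$, so that $m=n=1$. For $N\geq 1$ we have ${\mathcal A}_N(B,A)=\{x\in \frko/\frkp^N \mid x^2-1\in \frkp^N\}$. Since $2\in\frko^\times$, the elements $x-1$ and $x+1$ cannot both lie in $\frkp$, so $x^2\equiv 1 \bmod \frkp^N$ forces $x\equiv \pm 1 \bmod \frkp^N$ and $\sharp{\mathcal A}_N(B,A)=2$. As $-mn+\frac{n(n+1)}{2}=0$, we obtain $\alpha(B,B)=2$. Since $G=\mathrm{O}_F(V)=\{\pm 1\}$ and $G^\circ=\{1\}$, we get $\beta(L)=\frac{1}{2}\,\alpha(B,B)=1$.
\end{example}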
\begin{remark} \begin{enumerate} \item The local density can also be described in terms of the volume of a certain $p$-adic manifold using scheme theory. In this remark, we explain the main idea of this method, following Section 3.1 of \cite{CY}.
For a more detailed explanation, we refer to Section 3 of \cite{GY} or Section 3.1 of \cite{CY}.
For the quadratic lattice $(L, Q_L)$ represented by $B\in{\mathcal H}^\mathrm{nd}_n(\frko)$ and the quadratic space $(V, Q_V)$ such that $V=L\otimes_{\mathfrak{o}}F$ and $Q_V=Q_L\otimes 1$, regarding $\mathrm{M}_n(F)$ and $\mathrm{Sym}_n(F)$ as varieties over $F$, let $\omega_{M_n}$ and $\omega_{\mathcal{H}_n}$ be nonzero, translation-invariant top-degree forms on $\mathrm{M}_n(F)$ and $\mathrm{Sym}_n(F)$, respectively, with normalizations
$$\int_{\mathrm{M}_n(\mathfrak{o})}|\omega_{M_n}|=1 \mathrm{~and~} \int_{\mathcal{H}_n(\mathfrak{o})}|\omega_{\mathcal{H}_n}|=1.$$
We define a map $\rho : \mathrm{GL}_n(F) \rightarrow \mathrm{Sym}_n(F)$ by $\rho(X)= Q_V\circ X$. Here we identify $\mathrm{GL}_n(F)=\mathrm{Aut}_F(V)$ and $\mathrm{Sym}_n(F)$ to be the set of (possibly degenerate) quadratic forms on $V$. Then the inverse image of $Q_V$, along the map $\rho$, is $\mathrm{O}_F(V)$, which represents the group of $F$-linear self-maps of $V$ preserving the quadratic space $(V, Q_V)$. One can also show that the morphism $\rho$ is representable as a morphism of schemes over $F$ and that this (necessarily unique) morphism is smooth. Then we have
a differential form $\omega_{L}$ on $\mathrm{M}_n(F)$ such that $\omega_{M_n}|_{\mathrm{GL}_n(F)}= \rho^{\ast}\omega_{\mathcal{H}_n}\wedge \omega_{L}$. We denote by $\omega_{L}^{\mathrm{ld}}$ the restriction of $\omega_{L}$ to $\mathrm{O}_F(V)$. We sometimes write $\omega_{L}^{\mathrm{ld}}=\omega_{M_n}/\rho^{\ast}\omega_{\mathcal{H}_n}$.
Then we have \[ \beta(L)=\frac{1}{[G:G^\circ]}
\int_{\mathrm{O}_{\mathfrak{o}}(L)}|\omega_L^{\mathrm{ld}}|. \] Here, $\mathrm{O}_{\mathfrak{o}}(L)$ represents the group, as a subset of $M_n(\mathfrak{o})$, preserving the quadratic lattice $(L, Q_L)$.
This equation is explained in Lemma 3.4 of \cite{GY}. For arbitrary $B \in{\mathcal H}^\mathrm{nd}_n(\frko)$ and $A \in{\mathcal H}^\mathrm{nd}_m(\frko)$,
there is also a similar formulation of $\alpha(B,A)$ as the integral of a volume form on a certain $p$-adic manifold. See Lemma 3.2 of \cite{CY} for more details.
\item The local density given in \cite{Cho}, which we denote in this paper by $\beta^C(L)$ (in \cite{Cho} it is denoted by $\beta(L)$), uses a different normalization. With the above setting,
let $\omega_{Sym_n}$ be a nonzero, translation-invariant top-degree form on $\mathrm{Sym}_n(F)$ with normalization
$$ \int_{\mathrm{Sym}_n(\mathfrak{o})}|\omega_{Sym_n}|=1.$$ Then $\beta^C(L)$ is defined to be \[
\beta^C(L)=\frac{1}{[G:G^\circ]}\int_{\mathrm{O}_{\mathfrak{o}}(L)}|\omega_{L}^{C, \mathrm{ld}}|, \] where
$\omega_{L}^{C, \mathrm{ld}}:=\omega_{M_n}/\rho^{\ast}\omega_{Sym_n}$.
\item In order to compare the two local densities $\beta(L)$ and $\beta^C(L)$, it suffices to compare the two volume forms $\omega_{\mathcal{H}_n}$ and $\omega_{Sym_n}$. Our normalizations imply that \[ \varpi^{n(n-1)/2}\cdot \omega_{Sym_n}= \omega_{\mathcal{H}_n}. \] Thus $\omega_{L}^{C, \mathrm{ld}}=\varpi^{n(n-1)/2}\cdot \omega_{L}^{\mathrm{ld}}$. This yields \[ \beta^\mathrm{C}(L)=q^{-e n(n-1)/2} \beta(L). \] Here, $e$ is the ramification index of $F$ over ${\mathbb Q}_2$. \end{enumerate}
\end{remark}
\section{The local density formula of a single quadratic lattice}\label{sec:5.1} In this section, we recall the local density formula for $\beta^C(L)$
given in \cite{Cho}.
We assume that $F$ is an unramified finite field extension of ${\mathbb Q}_2$. We follow the formulation of \cite{Cho}. Let $(L, Q_L)$ be the quadratic lattice represented by $B\in {\mathcal H}^\mathrm{nd}_n(\frko)$. We first collect necessary terminology below. \begin{enumerate} \item Recall that the bilinear form $\langle x, y \rangle_{Q_L}$ is defined by \[ \langle x, y\rangle_{Q_L}=\frac12 (Q_L(x+y)-Q_L(x)-Q_L(y)). \] \item The scale $\bfs(L)$ and the norm $\bfn(L)$
are defined by \begin{align*}
\bfs(L)=&\{\langle x, y\rangle_{Q_L}\;|\; x, y\in L\} , \\
\bfn(L)=&\textit{the fractional ideal generated by $\{Q_L(x)\;|\; x\in L\}$}. \end{align*} \item The dual lattice $L^\sharp$ is defined by \[
L^\sharp=\{x\in L\otimes F\;|\; \langle x, L\rangle_{Q_L}\subset \frko\}. \] \item $L$ is called a unimodular lattice if $L=L^\sharp$. \item A unimodular lattice $L$ is \textit{of parity type I} if $\bfn(L)=\frko$, and \textit{of parity type II} otherwise (see the example following this list).
\item $(L, Q_L)$ is $i$-\textit{modular} if $(L, a^{-1}Q_L)$ is unimodular for some $a\in \mathfrak{o} \backslash \{0\}$ such that the exponential valuation of $a$ is $i$ (such an $a$ being unique up to a unit). In this case the parity type of $(L, Q_L)$ is defined to be the parity type of $(L, a^{-1}Q_L)$. The zero lattice is considered to be \textit{of parity type II}.
\item Let $B(L)$ be the sublattice of $L$ such that $B(L)/2L$ is the kernel of the linear form $2^{-s(L)}Q_L$ mod 2 on $L/2L$. Here, $s(L)$ is the integer such that $\bfs(L)=(2^{s(L)})$. \item Let $Z(L)$ be the sublattice of $L$ such that $Z(L)/2L$ is the kernel of the quadratic form $2^{-s(L)-1}Q_L$ mod 2 on $B(L)/2L$.
\end{enumerate}
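The following simple example, over an unramified extension $\frko$ of $\mathbb{Z}_2$, illustrates the terminology just introduced; it is included only for orientation.
\begin{example} Let $L=\frko e_1\oplus \frko e_2$ with $Q_L(ae_1+be_2)=2a^2+2ab+2b^2$, so that $L$ is represented by $\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then $\bfs(L)=\frko$ and $L=L^\sharp$, so $L$ is unimodular, while $\bfn(L)=(2)$; hence $L$ is unimodular of parity type II. On the other hand, the rank-one lattice $M=\frko e$ with $Q_M(ae)=a^2$ satisfies $\bfs(M)=\bfn(M)=\frko$, so $M$ is unimodular of parity type I.
\end{example}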
Let \[ L=\bigoplus_i L_i \] be a Jordan splitting. We assume $\bfs(L_i)=(2^i)$, allowing $L_i$ to be the zero lattice. Put $n_i=\mathrm{rank}_\frko L_i$ and
\[ \left\{
\begin{array}{l}
\textit{$A_i=\{x\in L \mid \langle x,L\rangle_{Q_L} \in 2^i\frko\}=L\cap 2^i L^\sharp$};\\
\textit{$B_i=B(A_i)$};\\
\textit{$Z_i=Z(A_i)$}.
\end{array} \right. \]
Then $Z_i$ is the sublattice of $B_i$ such that $Z_i/2 A_i$ is the kernel of the quadratic form $\frac{1}{2^{i+1}}Q_L$ mod 2 on $B_i/2 A_i$. Let $\bar{V_i}=B_i/Z_i$ and $\bar{q}_i$ denote the nonsingular quadratic form $\frac{1}{2^{i+1}}Q_L$ mod 2 on $\bar{V_i}$.\\
We assign a type to each $L_i$ as follows: \[\left \{
\begin{array}{l l}
I & \quad \textit{if $L_i$ is of parity type I,}\\
I^o & \quad \textit{if $L_i$ is of parity type I and the rank of $L_i$ is odd,}\\
I^e & \quad \textit{if $L_i$ is of parity type I and the rank of $L_i$ is even,}\\ II & \quad \textit{if $L_i$ is of parity type II}.\\
\end{array} \right.\]
In addition, we say that $L_i$ is \[\left \{
\begin{array}{l l}
\textit{bound} & \quad \textit{if at least one of $L_{i-1}$ or $L_{i+1}$ is of parity type I,}\\
\textit{free} & \quad \textit{if both $L_{i-1}$ and $L_{i+1}$ are of parity type II}.\\
\end{array} \right.\] Assume that a lattice $L_i$ is \textit{free} \textit{of type} $\textit{I}^e$. We denote by $\bar{V_i}$ the $\frkk$-vector space $B_i/Z_i$. Then we say that $L_i$ is \[
\left\{
\begin{array}{l l}
\textit{of type I}^e_1 & \quad \text{if the dimension of $\bar{V_i}$ is odd, }\\
\textit{of type I}^e_2 & \quad \text{otherwise}.\\
\end{array} \right. \] Notice that for each $i$, the type of $L_i$ is independent of the choice of a Jordan splitting.\\
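To illustrate the above classification, we record a small example, included only for orientation.
\begin{example} Let $L$ be represented by $\mathrm{diag}(1,2)$, with Jordan splitting $L=L_0\oplus L_1$, where $L_0$ is represented by $(1)$ and $L_1$ by $(2)$. Both constituents are of parity type I and of rank $1$, hence \textit{of type} $I^o$. Since $L_1$ is of parity type I, the constituent $L_0$ is \textit{bound}; likewise $L_1$ is \textit{bound} because $L_0$ is of parity type I. (All the other $L_i$ are the zero lattice, which is of parity type II.)
\end{example}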
Let $\underline{G}$ be the smooth integral model of $G=\mathrm{O}_{Q_F}$. The reader is referred to the beginning of Section 3 of \cite{Cho} for a detailed definition of $\underline{G}$. The special fibre of $\underline{G}$ is denoted by $\tilde G$. Then there exists a surjective morphism $\varphi$ (cf. Theorem 4.1 in \cite{Cho}) \[ \varphi=\prod_i \varphi_i : \tilde{G} ~ \longrightarrow ~\prod_i \mathrm{O}(\bar{V_i}, \bar{q_i})^{\mathrm{red}}. \] The image $\mathrm{Im}\,\varphi_i$ is described as follows (cf. Remark 4.3 in \cite{Cho}).
\[
\begin{array}{c|c}
\hline
\mathrm{Type~of~lattice~} L_i & \mathrm{Im~} \varphi_i \\
\hline
\textit{I}^o,\ \ \mathrm{\textit{free}} & \mathrm{O}(n_i-1, \bar{q_i})\\
\textit{I}^e_1,\ \ \mathrm{\textit{free}} &\mathrm{SO}(n_i-1, \bar{q_i})\\
\textit{I}^e_2,\ \ \mathrm{\textit{free}} &\mathrm{O}(n_i-2, \bar{q_i})\\
\textit{II},\ \ \mathrm{\textit{free}} &\mathrm{O}(n_i, \bar{q_i})\\
\textit{I}^o,\ \ \mathrm{\textit{bound}} &\mathrm{SO}(n_i, \bar{q_i})\\
\textit{I}^e,\ \ \mathrm{\textit{bound}} &\mathrm{SO}(n_i-1, \bar{q_i})\\
\textit{II},\ \ \mathrm{\textit{bound}} &\mathrm{SO}(n_i+1, \bar{q_i}) \\
\hline
\end{array}
\]
Let
\begin{itemize}
\item $\alpha$ be the number of all $i$ such that $L_i$ is \textit{free} \textit{of type} $\textit{I}^e_1$.
\item $\beta$ be the number of all $j$ such that $L_j$ is \textit{of type I} and $L_{j+2}$ is \textit{of type II}.
\end{itemize}
\begin{theorem}[\cite{Cho}, Theorem 4.12]\label{thmcho412} We have an isomorphism \[ \tilde{G}/R_u\tilde{G}\simeq \prod_i \mathrm{O}(\bar{V_i}, \bar{q_i})^{\mathrm{red}} \times (\mathbb{Z}/2\mathbb{Z})^{\alpha+\beta}. \] Here, $R_u\tilde{G}$ is the connected unipotent radical of $\tilde{G}$. \end{theorem}
Let \begin{itemize}
\item $b$ be the total number of pairs of adjacent constituents $L_i$ and $L_{i+1}$ that are both \textit{of type I}.
\item $c$ be the sum of ranks of all nonzero Jordan constituents $L_i$ that are \textit{of type} $\textit{II}$.
\end{itemize}
\begin{theorem}[\cite{Cho}, Theorem 5.2]\label{thmcho52}
The local density of $(L,Q_L)$ defined in \cite{Cho}, which we are denoting in this paper by $\beta^\mathrm{C}(L)$, is \[ \beta^\mathrm{C}(L)=\frac{1}{[G:G^{\circ}]}q^N \cdot q^{-\mathrm{dim} G} \sharp\tilde{G}(\frkk), \] where \begin{align*} N =&t+\sum_{i<j} i\cdot n_i\cdot n_j+\sum_i i\cdot n_i\cdot (n_i+1)/2-b+c, \\ t =&\text{ the total number of $L_i$'s that are \textit{of type I}}. \end{align*} \end{theorem}
In the above local density formula, \[ \sharp\tilde{G}(\frkk)=\sharp R_u\tilde{G}(\frkk)\cdot \sharp (\tilde{G}/R_u\tilde{G})(\frkk). \]
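To illustrate how this formula is applied, we record the simplest case; the computation below is only a consistency check and is not taken from \cite{Cho}.
\begin{example} Let $L=\frko e$ with $Q_L(ae)=a^2$, so that $n=1$ and $B=(1)$. Then $L=L_0$ is unimodular, \textit{free} and \textit{of type} $I^o$, and every other Jordan constituent is the zero lattice (hence of parity type II). Thus $t=1$, $b=c=0$ and both sums in the exponent vanish, so $N=1$. Moreover $\alpha=0$ and $\beta=1$ (for $j=0$, $L_0$ is of type I and $L_2$ is the zero lattice), while the table above gives $\mathrm{Im}\,\varphi_0=\mathrm{O}(n_0-1,\bar{q_0})$, the trivial group. Since $\dim G=0$, the group $R_u\tilde{G}$ is trivial, and Theorem \ref{thmcho412} gives $\sharp\tilde{G}(\frkk)=2$. Therefore, since $[G:G^\circ]=2$, \[ \beta^\mathrm{C}(L)=\frac{1}{2}\,q^{1}\cdot q^{0}\cdot 2=q. \] This is consistent with a direct count: for $N\geq 3$ the congruence $x^2\equiv 1 \bmod 2^N$ has exactly $2q$ solutions in $\frko/2^N\frko$ (namely $x\equiv \pm 1 \bmod 2^{N-1}$), so $\alpha(B,B)=2q$, $\beta(L)=q$ and, since $n=1$, $\beta^\mathrm{C}(L)=\beta(L)=q$.
\end{example}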
\section{Reformulation of the local density formula}\label{sec:5.2} We are now ready to explain our main result. In this section, we show that the local density $\beta(L)$ is determined by a series of Gross-Keating invariants and the (truncated) extended GK data (cf. Theorem \ref{thm-ldgk}). We keep assuming that $F$ is unramified over $\mathbb{Q}_2$. Let $(L, Q_L)$ be the quadratic lattice represented by $B\in {\mathcal H}^\mathrm{nd}_n(\frko)$.
\subsection{Reduced form of Ikeda and Katsurada}\label{sss1}
In \cite{IK1}, Ikeda and Katsurada introduced the notion of a `reduced form' associated to $B$ and showed it to be optimal. We use a reduced form several times in this paper and thus
provide its detailed definition through Definitions \ref{def3.1}-\ref{def3.2}.
They are taken from \cite{IK1}, whose notation we follow. The main result of this subsection is Proposition \ref{propi-ii}, which will be used in the next subsection.
Let $\mathfrak{S}_n$ be the symmetric group of degree $n$. Let $\sigma\in \mathfrak{S}_n$ be an involution, i.e. $\sigma^2=\mathrm{id}$. For a non-decreasing sequence of non-negative integers $\underline{a}=(a_1, \cdots, a_n)$, we set
\[\mathcal{P}^0=\mathcal{P}^0(\sigma)=\{i|1\leq i \leq n, i=\sigma(i)\}, \]
\[\mathcal{P}^+=\mathcal{P}^+(\sigma)=\{i|1\leq i \leq n, a_i>a_{\sigma(i)}\}, \]
\[\mathcal{P}^-=\mathcal{P}^-(\sigma)=\{i|1\leq i \leq n, a_i<a_{\sigma(i)}\}. \]
\begin{definition}[\cite{IK1}, Definition 3.1]\label{def3.1} We say that an involution $\sigma\in \mathfrak{S}_n$ is $\underline{a}$-admissible if the following three conditions are satisfied: \begin{itemize} \item[(i)] ${\mathcal P}^0$ has at most two elements. If ${\mathcal P}^0$ has two distinct elements $i$ and $j$, then $a_i\not\equiv a_j \text{ mod $2$}$.
Moreover, if $i \in {\mathcal P}^0$, then \[
a_i=\max\{ a_j \, |\, j\in {\mathcal P}^0\cup{\mathcal P}^+, \, a_j\equiv a_i \text{ mod }2\}. \] \item[(ii)] For $s=1, \ldots, r$, we have \[ \#(\mathcal{P}^+\cap I_s)\leq 1, ~~~~~~\textit{ }~~~~~~~~ \#(\mathcal{P}^-\cap I_s)+\#(\mathcal{P}^0\cap I_s)\leq 1. \] Here, $I_s$ is defined in Definition \ref{def3.0}.
\item[(iii)]
If $i \in {\mathcal P}^-$, then \[
a_{\sigma(i)}=\min\{a_j \,| \, j\in {\mathcal P}^+, a_j>a_i,\, a_j\equiv a_i \text{ mod } 2\}. \] Similarly, if $i \in {\mathcal P}^+$, then \[
a_{\sigma(i)}=\max\{a_j \,| \, j\in {\mathcal P}^-, a_j<a_i,\, a_j\equiv a_i \text{ mod } 2\}. \] \end{itemize} If $\sigma$ is an ${\underline{a}}$-admissible involution, the pair $({\underline{a}}, \sigma)$ is called a GK type. \end{definition}
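A minimal example, included only for illustration, may be helpful here.
\begin{example} Let $\underline{a}=(1,1)$, so that $r=1$, $n_1=2$ and $I_1=\{1,2\}$. For the transposition $\sigma=(1\,2)$ we have $\mathcal{P}^0=\mathcal{P}^+=\mathcal{P}^-=\emptyset$, so conditions (i)--(iii) hold trivially and $\sigma$ is $\underline{a}$-admissible. The identity involution, on the other hand, has $\mathcal{P}^0=\{1,2\}$ with $a_1\equiv a_2 \bmod 2$, so it violates conditions (i) and (ii) and is not $\underline{a}$-admissible.
\end{example}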
\begin{definition}[\cite{IK1}, Definition 3.2]\label{def3.2} Write $B=\begin{pmatrix}b_{ij}\end{pmatrix}\in {\mathcal H}^\mathrm{nd}_n(\frko)$. Let $\underline{a}\in S(B)$ (cf. Definition \ref{def:2.1}.(1)). Let $\sigma\in \mathfrak{S}_n$ be an $\underline{a}$-admissible involution. We say that $B$ is a reduced form of GK-type $(\underline{a}, \sigma)$ if the following conditions are satisfied: \begin{enumerate} \item If $i \notin \mathcal{P}^0$, $j=\sigma(i)$, and $a_i\leq a_j$, then \[\mathrm{GK}\begin{pmatrix}\begin{pmatrix}b_{ii} & b_{ij}\\ b_{ij}&b_{jj}\end{pmatrix}\end{pmatrix}=(a_i, a_j).\] Note that this condition is equivalent to the following condition (by Proposition 2.3 of \cite{IK1}). \[\left\{
\begin{array}{l l}
\mathrm{ord}(2b_{ij})=\frac{a_i+a_{j}}{2} & \quad \text{if $i\notin \mathcal{P}^0$, $j=\sigma(i)$};\\
\mathrm{ord}(b_{ii})=a_i & \quad \text{if $i\in \mathcal{P}^-$}.
\end{array} \right.\]
\item if $i\in \mathcal{P}^0$, then \[\mathrm{ord}(b_{ii})=a_i.\]
\item If $j\neq i, \sigma(i)$, then \[\mathrm{ord}(2b_{ij})>\frac{a_i+a_j}{2}.\] \end{enumerate}
Saying just `reduced form' without an $\underline{a}$ or a $\sigma$ means `reduced form' of GK-type $(\underline{a}, \sigma)$ for some non-decreasing sequence $\underline{a}$ of non-negative integers and an $\underline{a}$-admissible involution $\sigma$. \end{definition}
\begin{theorem}[\cite{IK1}, Corollary 5.1]\label{thm5.1} A reduced form is optimal. More precisely, if $B$ is a reduced form of GK-type $(\underline{a}, \sigma)$, then $$\mathrm{GK}(B)=\underline{a}.$$ \end{theorem}
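For instance, with $F$ unramified over $\mathbb{Q}_2$ (so that $\mathrm{ord}(2)=1$), the following simple example may help; it is included only for illustration.
\begin{example}\label{ex:red22} Let $\underline{a}=(1,1)$ and let $\sigma=(1\,2)$, which is $\underline{a}$-admissible. The matrix $B=\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ satisfies $\underline{a}\in S(B)$, and since $\mathcal{P}^0=\mathcal{P}^-=\emptyset$, the only condition of Definition \ref{def3.2} to be checked is $\mathrm{ord}(2b_{12})=(a_1+a_2)/2=1$, which holds. Hence $B$ is a reduced form of GK-type $((1,1),\sigma)$, and Theorem \ref{thm5.1} gives $\mathrm{GK}(B)=(1,1)$.
\end{example}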
We list a few facts about the above definitions. \begin{remark}\label{rmk1} \begin{enumerate} \item For any given non-decreasing sequence of non-negative integers $\underline{a}=(a_1, \cdots, a_n)$, there always exists an $\underline{a}$-admissible involution (cf. the paragraph following Definition 3.1 of \cite{IK1}).
\item For $B \in {\mathcal H}^\mathrm{nd}_n(\frko)$, there always exist a $\mathrm{GK}(B)$-admissible involution $\sigma$ and a reduced form of GK type $(\mathrm{GK}(B), \sigma)$, which is equivalent to $B$ (cf. Theorem 4.1 of \cite{IK1}). Equivalently, for a quadratic lattice $(L, Q_L)$, there always exists a reduced form which represents the integral quadratic form $Q_L$.
Such an involution $\sigma$ is not unique, but it is unique up to a certain notion of equivalence, which will be recalled in Remark \ref{rmk11}.
\item The first integer of $\mathrm{GK}(L)$ is the exponential valuation of a generator of $\bfn(L)$ (cf. Lemma B.1 of \cite{thyang}). \end{enumerate} \end{remark}
\begin{remark}\label{rmk11} We say that two $\underline{a}$-admissible involutions are equivalent if they are conjugate by an element of $\mathfrak{S}_{n_1}\times \cdots \times \mathfrak{S}_{n_r}$. Here, we follow the notation introduced in Definition \ref{def3.0} to specify the integers $n_1, \cdots, n_r$. If $\sigma$ is an $\underline{a}$-admissible involution, then the equivalence class of $\sigma$ is determined by \begin{equation}\label{eqset} \#(\mathcal{P}^+\cap I_s), ~~\textit{ }~~~ \#(\mathcal{P}^-\cap I_s), ~~\textit{ }~~~ \#(\mathcal{P}^0\cap I_s) \end{equation} for $1\leq s \leq r$ (cf. the paragraph following Remark 4.1 in \cite{IK1}).
Let $\sigma$ and $\tau$ be $\mathrm{GK}(B)$-admissible involutions associated to reduced forms of GK types $(\mathrm{GK}(B), \sigma)$ and $(\mathrm{GK}(B), \tau)$, respectively, which are equivalent to a given symmetric matrix $B$.
Then $\sigma$ and $\tau$ are equivalent (cf. Theorem 4.2 of \cite{IK1}).
Therefore, the above quantities in (\ref{eqset}) for $B$ are independent of the choice of a
$\mathrm{GK}(B)$-admissible involution with a reduced form. \end{remark}
\begin{lemma}\label{lemgk}
If $\mathrm{GK}(B)=(a_1, \cdots, a_n)$, then $$\mathrm{GK}(2^lB)=(a_1+l, \cdots, a_n+l).$$ \end{lemma} \begin{proof} We write $\mathrm{GK}(2^lB)=(b_1, \cdots, b_n)$.
It is obvious by Definition \ref{def:2.1} that $(b_1, \cdots, b_n) \succeq (a_1+l, \cdots, a_n+l)$. Since $\mathrm{GK}(B)=\mathrm{GK}(2^{-l}\cdot 2^lB)$, we also have $(a_1, \cdots, a_n)\succeq (b_1-l, \cdots, b_n-l)$. Thus we have $(b_1, \cdots, b_n) = (a_1+l, \cdots, a_n+l)$.
\end{proof}
Let $\sigma\in \mathfrak{S}_n$ be an $\underline{a}$-admissible involution and let $\tau \in \mathfrak{S}_m$ be a $\underline{b}$-admissible involution.
We choose embeddings of $\underline{a}$ and $\underline{b}$ into $\underline{a} \cup \underline{b}$. Here, the notion of $\underline{a} \cup \underline{b}$ is given at the beginning of Section \ref{ssnota}. The involution $\sigma\cup \tau$ is defined as the element in $\mathfrak{S}_{n+m}$ such that the restriction of $\sigma\cup \tau$ to $\underline{a}$ (resp. $\underline{b}$) along the embedding is well-defined and is the same as $\sigma$ (resp. $\tau$). If we assume that both $\mathcal{P}^0(\sigma)$ and $\mathcal{P}^+(\sigma)$ are empty (thus $\mathcal{P}^-(\sigma)$ is empty as well), i.e. $\sigma(i)\neq i$ and $a_i=a_{\sigma(i)}$ for any $1\leq i \leq n$, then it is easy to see that $\sigma\cup \tau$ is an $\underline{a}\cup \underline{b}$-admissible involution for any pair of embeddings from $\underline{a}$ and $\underline{b}$ into $\underline{a} \cup \underline{b}$.
\begin{lemma}\label{red} Let $B=X\bot Y$ be of size $(n+2)\times (n+2)$, where $X=2^l\begin{pmatrix} 2u& w \\ w & 2v \end{pmatrix}$ with $w \in \mathfrak{o}^{\times}$ and $u, v \in \mathfrak{o}$. Then \[\mathrm{GK}(B)=\mathrm{GK}(X)\cup \mathrm{GK}(Y).\] \end{lemma} \begin{proof} Note that $\mathrm{GK}(X)=(l+1, l+1)$ by Proposition 2.3 of \cite{IK1} and Lemma \ref{lemgk}. Let $\underline{a}=\mathrm{GK}(X)$ and let $\sigma$ be the associated non-trivial $\underline{a}$-admissible involution (i.e. $\sigma(1)=2$). Then $\mathcal{P}^0(\sigma)$ and $\mathcal{P}^+(\sigma)$ are empty and $X$ is a reduced form of GK-type $(\underline{a}, \sigma)$.
Let $Y'$ be a reduced form of GK-type $(\underline{b}, \tau)$ which is equivalent to $Y$, where $\underline{b}=\mathrm{GK}(Y)$.
The existence of a reduced form is guaranteed by Remark \ref{rmk1}.(2). The argument explained just before this lemma yields that $\sigma\cup \tau$ is an $\underline{a}\cup \underline{b}$-admissible involution. Let $(M, Q_M)$ be the quadratic lattice represented by the symmetric matrix $X\bot Y'$. We choose $(e_1, \cdots, e_{n+2})$ to be a basis of $M$ such that with respect to this basis the symmetric matrix of the quadratic lattice $(M, Q_M)$ is $X\bot Y'$.
Recall that $\underline{a}\cup \underline{b}$ is a reordered multiset of $\{a_1, a_2, b_1, \cdots, b_n\}$, where $\underline{a}=\{a_1, a_2\}$ and $\underline{b}=\{b_1, \cdots, b_n\}$. Let $\varphi$ be the permutation such that $\varphi(a_1, a_2, b_1, \cdots, b_n)=\underline{a}\cup \underline{b}$. Here we consider $(a_1, a_2, b_1, \cdots, b_n)$ as an ordered multiset. We define $(e_1', \cdots, e_{n+2}')$ to be the reordered basis of $M$ such that $(e_1', \cdots, e_{n+2}')=\varphi\left(e_1, \cdots, e_{n+2}\right)$.
Then the symmetric matrix of $M$ with respect to the reordered basis $(e_1', \cdots, e_{n+2}')$, which is equivalent to $X\bot Y'$, is a reduced form of GK-type $(\underline{a}\cup \underline{b}, \sigma\cup \tau)$ by Definition \ref{def3.2}.
The lemma then follows from Theorem \ref{thm5.1}.
\end{proof}
In general, let $B=\oplus B_i$ be a Jordan splitting such that $B_i$ is $i$-modular of size $n_i\times n_i$. By this, we mean that $B$ is an orthogonal sum of $B_i$'s and each $B_i$ is of the form $2^iB_i'$, where $B_i'$ is unimodular, i.e.
all entries of $B_i'$ are elements in $\mathfrak{o}$ and the determinant of $B_i'$ is a unit in $\mathfrak{o}$. Each unimodular symmetric matrix $B_i'$ is of one of the following forms (cf. Theorem 2.4 of \cite{Cho}): \[ \left\{
\begin{array}{l l}
(\bigoplus_k \begin{pmatrix} 2a_k& u_k \\ u_k & 2b_k \end{pmatrix}) & \quad \textit{: type II};\\
(\bigoplus_k \begin{pmatrix} 2a_k& u_k \\ u_k & 2b_k \end{pmatrix}) \oplus (\epsilon) & \quad \textit{: type $I^o$};\\
(\bigoplus_k \begin{pmatrix} 2a_k& u_k \\ u_k & 2b_k \end{pmatrix}) \oplus (\epsilon) \oplus (\epsilon') & \quad \textit{: type $I^e$}.
\end{array} \right.\] Here, $a_k, b_k \in \mathfrak{o}$ and $u_k, \epsilon, \epsilon' \in \mathfrak{o}^{\times}$. Then we have the following reduction formula about $\mathrm{GK}(B)$ by using Lemma \ref{red} inductively.
\begin{proposition}\label{propi-ii} Let $B=\oplus B_i$ be a Jordan splitting such that $B_i$ is $i$-modular of size $n_i\times n_i$.
By using the argument explained in the paragraph just before this proposition, we write $B_i=B_i^{\dag}\bot B_i^{\ddag}$ such that $B_i^{\dag}$ is \textit{of type II} and $B_i^{\ddag}$ is
empty (if $B_i$ is \textit{of type} $II$), of rank 1 (if $B_i$ is \textit{of type} $I^o$), or of rank 2 (if $B_i$ is \textit{of type} $I^e$).
Then \[\mathrm{GK}(B)=\mathrm{GK}(\oplus B_i^{\dag})\cup \mathrm{GK}(\oplus B_i^{\ddag})\] and \begin{multline*} \mathrm{GK}(\oplus B_i^{\dag})=(\bigcup_{\textit{$L_i$: of type $II$}}(i+1, i+1)^{n_i/2}) \cup\\ (\bigcup_{\textit{$L_i$: of type $I^o$}}(i+1, i+1)^{(n_i-1)/2})\cup (\bigcup_{\textit{$L_i$: of type $I^e$}}(i+1, i+1)^{(n_i-2)/2}). \end{multline*} Here, $(i+1, i+1)^{n_i/2}=\cup_{n_i/2}(i+1, i+1)$ and so on. \end{proposition}
\begin{proof} Since $B_i^{\dag}$ is $i$-modular \textit{of type II}, it is equivalent to an orthogonal sum of $2\times 2$ matrices of the form $2^i\begin{pmatrix} 2a& u \\ u & 2b \end{pmatrix}$ with $u \in \mathfrak{o}^{\times}$ and $a, b \in \mathfrak{o}$ as explained in the paragraph just before this proposition. Then the proposition follows from Lemma \ref{red} inductively. \end{proof}
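As a small illustration, assume that $F={\mathbb Q}_2$ and take $B=\begin{pmatrix} 2&1\\1&2 \end{pmatrix}\bot (6)$. Then $B_0=\begin{pmatrix} 2&1\\1&2 \end{pmatrix}$ is unimodular \textit{of type II} and $B_1=(6)=2\cdot(3)$ is $1$-modular \textit{of type $I^o$}, so that $\oplus B_i^{\dag}=B_0$ and $\oplus B_i^{\ddag}=(6)$. Since $\mathrm{GK}(B_0)=(1,1)$ (as in the proof of Lemma \ref{red}) and $\mathrm{GK}((6))=(\mathrm{ord}(6))=(1)$, the proposition gives \[\mathrm{GK}(B)=(1,1)\cup (1)=(1,1,1).\]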
\subsection{Description in terms of $\mathrm{GK}(L\oplus -L)$}\label{sss2}
Let $(-L, Q_{-L})$ be the quadratic lattice represented by $-B\in {\mathcal H}^\mathrm{nd}_n(\frko)$. Let \[ L=\bigoplus_i L_i \] be a Jordan splitting such that $\bfs(L_i)=(2^i)$, allowing some of the $L_i$ to possibly be the zero lattice. Put $n_i=\mathrm{rank}_\frko L_i$. Then \[ L\oplus -L=\bigoplus_i (L_i\oplus -L_i) \] is a Jordan splitting of $L\oplus -L$ such that $\bfs(L_i\oplus -L_i)=(2^i)$.
The Gross-Keating invariant of $L\oplus -L$ is computed as follows: \begin{proposition}\label{propgkl-l} We have that \[ \mathrm{GK}(L\oplus -L)=\bigcup_i\mathrm{GK}(L_i\oplus -L_i). \] Here, \[ \mathrm{GK}(L_i\oplus -L_i)=\left\{
\begin{array}{l l}
(\underbrace{i+1, \cdots, i+1}_{2n_i}) & \quad \textit{if $L_i$ is of type II};\\
(\underbrace{i+1, \cdots, i+1}_{2n_i-2})\cup (i, i+2) & \quad \textit{if $L_i$ is of type $I$}.
\end{array} \right. \] \end{proposition} \begin{proof} If $L_i$ is of type $I$ (resp. $II$), then $L_i\oplus -L_i$ is of type $I^e$ (resp. $II$). If $L_i$ is nonzero, then we claim that there is an ordered basis of $L_i\oplus -L_i$ such that with respect to this basis the symmetric matrix of the quadratic lattice $L_i\oplus -L_i$ is
\begin{equation}\label{eq42} \left\{
\begin{array}{l l} 2^i(\bigoplus_k \begin{pmatrix} 2a_k& u_k \\ u_k & 2b_k \end{pmatrix}) & \quad \textit{if $L_i$ is of type II};\\ 2^i(\bigoplus_k \begin{pmatrix} 2a_k& u_k \\ u_k & 2b_k \end{pmatrix})\oplus 2^i \begin{pmatrix} 1& 1 \\ 1 & 4c_i \end{pmatrix} & \quad \textit{if $L_i$ is of type $I$}
\end{array} \right. \end{equation}
with $c_i\in \mathfrak{o}$. Here, $a_k, b_k \in \mathfrak{o}$ and $u_k \in \mathfrak{o}^{\times}$.
To prove the claim, we use Theorem 2.4 of \cite{Cho}. Theorem 2.4 of loc. cit. directly verifies our claim when $L_i$ is of type $II$ and thus we may and do assume that $L_i$ is of type $I$. If $L_i$ is of type $I^o$, then it suffices to prove that $\begin{pmatrix} \epsilon&0\\0&-\epsilon \end{pmatrix}$ with $\epsilon\in \mathfrak{o}^{\times}$ is equivalent to $\begin{pmatrix} 1&1\\1&4c \end{pmatrix}$ for some $c\in \mathfrak{o}$.
We denote by $Q_{\epsilon}$ the quadratic lattice of rank $2$ represented by the symmetric matrix $\begin{pmatrix} \epsilon&0\\0&-\epsilon \end{pmatrix}$. Choose the ordered basis $(a_1, a_2)$ of $Q_{\epsilon}$ such that with respect to this basis the symmetric matrix of $Q_{\epsilon}$ is
$\begin{pmatrix} \epsilon&0\\0&-\epsilon \end{pmatrix}$. Then the matrix of $Q_{\epsilon}$ with respect to the ordered basis $(a_1, a_1+a_2)$ is $\begin{pmatrix} \epsilon&\epsilon\\\epsilon&0 \end{pmatrix}$. Since any unit in $\mathfrak{o}$ is square modulo $2$, we may assume that $\epsilon \equiv 1$ mod $2$. By Theorem 2.4 of \cite{Cho}, $\begin{pmatrix} \epsilon&\epsilon\\\epsilon&0 \end{pmatrix}$ is equivalent to $\begin{pmatrix} 1& 1 \\ 1 & 2c \end{pmatrix}$ for some $c\in \mathfrak{o}$. Thus it suffices to show that $c$ is contained in the ideal $(2)$.
Consider the following two quadratic forms: $f(x,y)=\epsilon x^2 + 2 \epsilon xy$ and $g(x,y)=x^2+2xy+2cy^2$. These two quadratic forms are determined by two matrices $\begin{pmatrix} \epsilon&\epsilon\\\epsilon&0 \end{pmatrix}$ and $\begin{pmatrix} 1& 1 \\ 1 & 2c \end{pmatrix}$, respectively, and thus are equivalent. Note that both $f$ modulo $2$ and $g$ modulo $2$ define linear forms as $\frkk$-valued functions. We consider the kernels of these two linear forms. For example, the kernel of $f$ modulo $2$ is generated by $(2a_1, a_1+a_2)$. It is easy to see that the restriction of $f$ to the kernel is $4\epsilon x^2 + 4 \epsilon xy$, which we denote by $f_{res}(x,y)$, and that the restriction of $g$ to the kernel is $4x^2+4xy+2cy^2$, which we denote by $g_{res}(x,y)$.
Since $f$ and $g$ are equivalent, $f_{res}$ is equivalent to $g_{res}$ as well. Thus, the norm of $f_{res}$, which is the ideal $(4)$, should also be the norm of $g_{res}$. This directly yields that $c$ is contained in the ideal $(2)$.
If $L_i$ is of type $I^e$, then we may and do assume that the rank of $L_i\oplus -L_i$ is $4$. We choose the ordered basis $(a_1, a_2, a_3, a_4)$ of $L_i\oplus -L_i$ such that with respect to this basis the symmetric matrix of $L_i\oplus -L_i$ is as follows:
$$\begin{pmatrix} 1&1&0&0\\1&2\gamma&0&0\\ 0&0&-1&-1\\0&0&-1&-2\gamma \end{pmatrix}$$ with $\gamma \in \mathfrak{o}$.
Then the symmetric matrix of $L_i\oplus -L_i$ with respect to the ordered basis $(a_1+a_3, a_2+2\gamma a_3, a_2+a_3, a_2+a_4)$ is
$\begin{pmatrix} 0&1-2\gamma&0&0\\1-2\gamma&2\gamma-4\gamma^2&0&0\\ 0&0&-1+2\gamma&-1+2\gamma\\0&0&-1+2\gamma&0 \end{pmatrix}$. Thus it suffices to prove that $\begin{pmatrix} -1+2\gamma&-1+2\gamma\\-1+2\gamma&0 \end{pmatrix}$ is equivalent to $\begin{pmatrix} 1&1\\1&4c \end{pmatrix}$ for some $c\in \mathfrak{o}$. The proof of this is the same as in the above case (when $L_i$ is of type $I^o$) and thus we omit it.
Before proceeding with the proof, we explain how an involution acts on an ordered basis and on a reordered basis. For an involution $\sigma$ defined on the set $\{1, \cdots, n\}$,
define the involution $\sigma$ on an ordered basis $(e_1, \cdots, e_n)$ of a lattice $L$ as follows:
\[
\textit{$\sigma(e_i)=e_j$ if $\sigma(i)=j$.}
\] For an involution $\sigma$ defined on the ordered basis $(e_1, \cdots, e_n)$, if $(e_1', \cdots, e_n')$, which we denote by $\mathcal{RE}$, is a reordered basis for $(e_1, \cdots, e_n)$, then the involution $\sigma_{\mathcal{RE}}$ on $(e_1', \cdots, e_n')$, which is induced from $\sigma$, is defined as follows: \[ \textit{$\sigma_{\mathcal{RE}}(e_r')=e_s'$ if $e_r'=e_i, e_s'=e_j, $ and $\sigma(e_i)=e_j$.}\] Then we define the involution $\sigma_{\mathcal{RE}}$ on $\{1, \cdots, n\}$ as follows: \[
\textit{$\sigma_{\mathcal{RE}}(r)=s$ if $\sigma_{\mathcal{RE}}(e_r')=e_s'$.} \]
We claim that the Gross-Keating invariant of $L_i \oplus -L_i$ is as described in the statement of the proposition. Let $(e_1^{(i)}, f_1^{(i)}, \cdots, e^{(i)}_{n_i}, f^{(i)}_{n_i})$ be an ordered basis of $L_i\oplus -L_i$ such that with respect to this basis the symmetric matrix of $L_i\oplus -L_i$ is described as in (\ref{eq42}). We consider the involution $\sigma$ which switches $e^{(i)}_j$ and $f^{(i)}_j$. Then it is easy to see that if $L_i$ is of type $II$, then the symmetric matrix of $L_i\oplus -L_i$ with respect to the ordered basis $(e^{(i)}_1, f^{(i)}_1, \cdots, e^{(i)}_{n_i}, f^{(i)}_{n_i})$ is a reduced form of GK-type \[\left(\left(\underbrace{i+1, \cdots, i+1}_{2n_i} \right), \sigma \right).\] It is also easy to see that if $L_i$ is of type $I$, then the symmetric matrix of $L_i\oplus -L_i$ with respect to the reordered basis $(e^{(i)}_{n_i}, e^{(i)}_1, f_1^{(i)}, \cdots, e^{(i)}_{n_i-1}, f^{(i)}_{n_i-1}, f^{(i)}_{n_i})$, which we denote by $\mathcal{RE}^{(i)}$, is a reduced form
of GK-type
\[
\left(\left(\underbrace{i+1, \cdots, i+1}_{2n_i-2}\right)\cup \left(i, i+2\right), \sigma_{\mathcal{RE}^{(i)}} \right).\]
To prove our claim for $L\oplus -L$,
we may and do assume that each $(L_i\oplus -L_i)$ is of type $I$ with rank $2$ or the zero lattice by Proposition \ref{propi-ii}. We consider the ordered basis of $L\oplus -L$ as follows: \[ \left(\cdots, \left(e_1^{(i-1)}, f_1^{(i-1)}\right), \left(e_1^{(i)}, f_1^{(i)}\right), \left(e_1^{(i+1)}, f_1^{(i+1)}\right), \cdots\right). \] Here, $\left(e_1^{(i)}, f_1^{(i)}\right)$ is an ordered basis of $(L_i\oplus -L_i)$ such that the symmetric matrix of $(L_i\oplus -L_i)$ is described as in (\ref{eq42}), if $(L_i\oplus -L_i)$ is of type $I$ with rank $2$. If $(L_i\oplus -L_i)$ is the zero lattice, then we understand that $\left(e_1^{(i)}, f_1^{(i)}\right)$ is empty.
Let $\sigma$ be the involution on the above ordered basis which switches $e^{(i)}_{1}$ and $f^{(i)}_{1}$.
We choose the reordered basis of $L\oplus -L$ in the following manner: Let $j, k$ be integers such that both $L_{j-1}\oplus -L_{j-1}$ and $L_{j+k+1}\oplus -L_{j+k+1}$ are the zero lattices, and all of $L_{j}\oplus -L_{j}, \cdots, L_{j+k}\oplus -L_{j+k}$ are non-zero lattices, where $k\geq 0$. Recall that we consider the ordered basis of the lattice $(L_{j}\oplus -L_{j})\oplus \cdots \oplus (L_{j+k}\oplus -L_{j+k})$ as \[ \left(\left(e_1^{(j)}, f_1^{(j)}\right), \cdots, \left(e_1^{(j+k)}, f_1^{(j+k)}\right)\right). \]
Then we choose the reordered basis, which we denote by $\mathcal{RE}_{j,k}$, of $(L_{j}\oplus -L_{j})\oplus \cdots \oplus (L_{j+k}\oplus -L_{j+k})$ as follows: \[ \left(\left(e_1^{(j)}, e_1^{(j+1)}\right), \left( f_1^{(j)}, e_1^{(j+2)}\right), \left( f_1^{(j+1)}, e_1^{(j+3)}\right), \cdots, \left( f_1^{(j+k-2)}, e_1^{(j+k)}\right), \left( f_1^{(j+k-1)}, f_1^{(j+k)}\right)\right). \] Here, if $k=0$, then we understand the above reordered basis as $\left( e_1^{(j)}, f_1^{(j)}\right)$. If $k=1$, then we understand the above reordered basis as $\left(e_1^{(j)}, e_1^{(j+1)}, f_1^{(j)}, f_1^{(j+1)}\right)$. The reordered basis of $L\oplus-L$ is then defined to be the ordered union $\bigcup\limits_{j,k}\mathcal{RE}_{j,k}$, taken in increasing order of $j$, which we denote by $\mathcal{RE}$.
Then it is easy to see that the symmetric matrix of the quadratic lattice $L\oplus -L$ with respect to the above reordered basis $\mathcal{RE}$ is a reduced form of GK-type \[ \left(\bigcup_{i} (i, i+2), \sigma_{\mathcal{RE}}\right). \]
This completes the proof. \end{proof}
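For instance, if $L$ is the quadratic lattice represented by $(1)$, so that $L=L_0$ is of type $I$ with $n_0=1$, then the proof shows that $L\oplus -L$, represented by $\mathrm{diag}(1,-1)$, is equivalent to $\begin{pmatrix} 1&1\\1&0 \end{pmatrix}$ (the case $c_0=0$ of (\ref{eq42})), and hence $\mathrm{GK}(L\oplus -L)=(0,2)$.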
We write $\mathrm{GK}(L\oplus -L)$ as $(a_1, \cdots, a_{2n})$. If $L_i$ is of type $I$, then the involution $\sigma_{\mathcal{RE}}$, which is described in the proof of the above proposition, satisfies the property that $\sigma_{\mathcal{RE}}(i')=j'$ for some $1\leq i', j'\leq 2n$ such that $a_{i'}=i$ and $a_{j'}=i+2$. Thus, by Remark \ref{rmk11},
any $\mathrm{GK}(L\oplus -L)$-admissible involution associated to any reduced form of $L\oplus -L$ satisfies the same property. Using this, we can recover the parity type and the rank of $L_i$ from $\mathrm{GK}(L\oplus -L)$, as stated in the following corollary.
\begin{corollary} \label{cortypeli} Let $\mathrm{GK}(L\oplus -L)=(a_1, \cdots, a_{2n})$
and let $\sigma$ be a $\mathrm{GK}(L\oplus -L)$-admissible involution associated to a reduced form of $L\oplus -L$. For each $i\in \mathbb{Z}$, we define two numbers $\mathcal{A}_i$ and $\mathcal{B}_i$ as follows: \[ \left\{
\begin{array}{l}
\mathcal{A}_i=\#\{t | a_t=a_{\sigma(t)}=i+1\};\\
\mathcal{B}_i=\#\{t| a_t=i \textit{ and }
a_{\sigma(t)}=i+2\}.
\end{array} \right. \] Here $\mathcal{A}_i$ is even (possibly zero) and $\mathcal{B}_i$ is either $0$ or $1$. Then $\mathcal{A}_i$ and $\mathcal{B}_i$ determine the following information about $L_i$:
\[ \left\{
\begin{array}{l l} L_i : \textit{type II}, n_i=\mathcal{A}_i/2 & \quad \textit{if $\mathcal{B}_i=0$};\\
L_i : \textit{type $I^o$}, n_i=\mathcal{A}_i/2+1 & \quad \textit{if $\mathcal{B}_i=1$ and $\mathcal{A}_i\equiv 0$ mod 4};\\
L_i : \textit{type $I^e$}, n_i=\mathcal{A}_i/2+1 & \quad \textit{if $\mathcal{B}_i=1$ and $\mathcal{A}_i\equiv 2$ mod 4}.
\end{array} \right. \] \end{corollary}
Note that the two numbers $\mathcal{A}_i$ and $\mathcal{B}_i$ are independent of the choice of an admissible involution $\sigma$ since any two admissible involutions are equivalent, as explained in Remark \ref{rmk11}.
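To illustrate Corollary \ref{cortypeli}, assume that $F={\mathbb Q}_2$ and let $L$ be the quadratic lattice represented by $(1)\bot (6)$, so that $L_0=(1)$ and $L_1=(6)$ are of type $I^o$ with $n_0=n_1=1$. By Proposition \ref{propgkl-l}, $\mathrm{GK}(L\oplus -L)=(0,2)\cup (1,3)$; its entries are $0, 1, 2, 3$, each with multiplicity one, and, by the discussion preceding Corollary \ref{cortypeli}, the associated admissible involution pairs the entry $0$ with the entry $2$ and the entry $1$ with the entry $3$. Hence $\mathcal{A}_0=\mathcal{A}_1=0$ and $\mathcal{B}_0=\mathcal{B}_1=1$, and Corollary \ref{cortypeli} recovers that $L_0$ and $L_1$ are of type $I^o$ with $n_0=n_1=1$. In the notation of Corollary \ref{cortypelii} below, $\mathcal{C}_0=\mathcal{C}_1=1$, which is consistent with the formula $\mathcal{A}_i=\mathcal{C}_i-1$ there, provided that the zero lattices $L_{-1}$ and $L_2$ are regarded as being of parity type $II$.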
\begin{corollary}\label{cortypelii} Let $\mathrm{GK}(L\oplus -L)=(a_1, \cdots, a_{2n})$ and let
$\mathcal{C}_i=\#\{t | a_t=i+1\}$.
Then $\mathcal{B}_i$ is determined by the parity type of $L_i$ as follows: \[ \mathcal{B}_i= \left\{
\begin{array}{l l} 0 & \quad \textit{if $L_{i}$ is of parity type II};\\ 1 & \quad \textit{if $L_{i}$ is of parity type I}. \end{array} \right. \] In addition, we have the following description of $\mathcal{A}_i$ in terms of $\mathcal{C}_i$. \[ \mathcal{A}_i= \left\{
\begin{array}{l l} \mathcal{C}_i & \quad \textit{if both $L_{i-1}$ and $L_{i+1}$ are of parity type II};\\ \mathcal{C}_i-1 & \quad \textit{if exactly one of $L_{i-1}$ and $L_{i+1}$ is of parity type I};\\ \mathcal{C}_i-2 & \quad \textit{if both $L_{i-1}$ and $L_{i+1}$ are of parity type I}.
\end{array} \right. \]
\end{corollary}
\begin{remark}\label{rmk2} We note that $\mathrm{EGK}(L\oplus -L)$ does not determine
the local density $\beta(L)$ (and hence, neither does $\mathrm{GK}(L\oplus -L)$). As an example, let $L$ be the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} 1&0\\0&1 \end{pmatrix}$ and let $M$ be the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} 1&0\\0&3 \end{pmatrix}$. Then $L$ is unimodular of type $I^e_1$ and $M$ is unimodular of type $I^e_2$, so that they have different local densities. But we can easily see that \[\mathrm{EGK}(L\oplus -L)=\mathrm{EGK}(M\oplus -M)=(1,2,1;0,1,2;1,1,1).\] \end{remark}
\subsection{Truncated EGK}\label{sss3}
Remark \ref{rmk2} implies that we need the extended Gross-Keating datums of more quadratic lattices, and not just $\mathrm{EGK}(L\oplus -L)$,
to completely determine the local density $\beta(L)$. To do that, we
consider the normalized quadratic lattice $(L\cap 2^i L^\sharp, \frac{1}{2^i}Q_L)$ for each integer $i$ such that $L_i$ is nonzero, where $L=\bigoplus L_i$ is a Jordan splitting with $\bfs(L_i)=(2^i)$. Note that $L\cap 2^i L^\sharp$ is denoted by $A_i$ in Section \ref{sec:5.1}. We can choose a Jordan splitting $L\cap 2^i L^\sharp=\bigoplus_{j \geq 0} M_j$ with $\bfs(M_j)=(2^j)$ such that $M_0=L_i$ (cf. Remark 2.8 of \cite{Cho}). In this subsection and the next subsection, the quadratic lattice $L\cap 2^i L^\sharp$, for each $i$ such that $L_i$ is nonzero, is meant to be the normalized quadratic lattice as described above. In this subsection, we fix $\mathrm{GK}(L\cap 2^i L^\sharp)=(a_1, \cdots, a_n)$. \begin{lemma}\label{lem48} A reduced form $B^i$, which represents the restriction of $\frac{1}{2^i}Q_L$ to $L\cap 2^i L^\sharp$, is expressed as the block matrix:
$$B^i=\begin{pmatrix} B_{00}^i&B_{01}^i&B_{02}^i\\ B_{10}^i&B_{11}^i&B_{12}^i\\ B_{20}^i&B_{21}^i&B_{22}^i \end{pmatrix}$$
satisfying the following conditions:
\begin{enumerate} \item the size of $B_{00}^i$ is the same as the number of $0$'s in $\mathrm{GK}(L\cap 2^i L^\sharp)$;
\item $B_{00}^i$ is a reduced form such that $\mathrm{GK}(B_{00}^i)$ consists of $0$'s; \item $B_{11}^i$ is a reduced form such that $\mathrm{GK}(B_{11}^i)$ consists of $1$'s; \item $\begin{pmatrix} B_{00}^i \ \ B_{01}^i\\ B_{10}^i \ \ B_{11}^i \end{pmatrix}$ is a reduced form whose Gross-Keating invariant consists of $0$'s or $1$'s; \item the first integer of $\mathrm{GK}(B_{22}^i)$ is at least $2$. \end{enumerate} Here each block $B^i_{ij}$ can be the empty matrix.
\end{lemma}
\begin{proof}
We write a reduced form $B^i$ as $(b_{st})$, $1 \leq s,t \leq n$. Let $l$ be the number of $0$'s in $\mathrm{GK}(L\cap 2^i L^\sharp)$ and let $m$ be the number of $1$'s in $\mathrm{GK}(L\cap 2^i L^\sharp)$.
Consider a $\mathrm{GK}(L\cap 2^i L^\sharp)$-admissible involution $\sigma$ associated to the reduced form $B^i$. Then using the definition of a reduced form given in Definition \ref{def3.2} and the definition of an admissible involution given in Definition \ref{def3.1}, it is easy to see that $\sigma$ and $b_{st}$ satisfy the following conditions: \begin{enumerate} \item[(i)] If $l$ ($m$, respectively) is even, then $\sigma (s)\neq s$ for $1\leq s \leq l$ ($l+1\leq s \leq l+m$, respectively); \item[(ii)] If $l$ is odd, then there is exactly one $s_0$ with $1\leq s_0 \leq l$ such that either $\sigma(s_0)=s_0$ or $\sigma(s_0)> l+m$. In this case, $\mathrm{ord}(b_{s_0s_0})=0$; \item[(iii)] If $m$ is odd, then there is exactly one $s_1$ with $l+1\leq s_1 \leq l+m$ such that either $\sigma(s_1)=s_1$ or $\sigma(s_1)> l+m$. In this case, $\mathrm{ord}(b_{s_1s_1})=1$; \item[(iv)] $\mathrm{ord}(2b_{st})\geq 1$ for any $1 \leq s\leq l$ and $l+1\leq t\leq l+m$; \item[(v)] $\mathrm{ord}(b_{ss})\geq 2$ for any $s\geq l+m+1$ and $\mathrm{ord}(2b_{st})\geq 2$ for any $s,t\geq l+m+1$ with $s\neq t$. \end{enumerate}
Let $B^i_{00}=(b_{st})$ with $1 \leq s,t \leq l$ and let $B^i_{11}=(b_{st})$ with $l+1 \leq s,t \leq l+m$. If $l=0$ ($m=0$, respectively), then we understand $B^i_{00}$ ($B^i_{11}$, respectively) as the empty matrix. Note that the choice of $B^i_{00}$ and $B^i_{11}$ determines the remaining blocks of $B^i$. Using the definition of a reduced form given in Definition \ref{def3.2}, it is easy to see the following:
The statement (1) is obvious from the construction of $B^i_{00}$. The statement (2) follows from (i) and (ii). The statement (3) follows from (i) and (iii). The statement (4) follows from (i)-(iv). The statement (5) follows from (v).
\end{proof}
We define the nonnegative integer $m_i$ for each $i$ to be
$$m_i=\#\{t| a_t=1\}.$$ The reduced form $B^i$ has the following properties, which follow from Proposition \ref{propi-ii} and Remark \ref{rmk1}.(3) and are independent of the choices involved in its definition.
\begin{equation}\label{eqqq} \left\{
\begin{array}{l}
\textit{If $L_i$ is of type $I$, then the rank of $B_{00}^i$ is $1$};\\% (cf. Proposition \ref{propi-ii} and Remark \ref{rmk1}.(3))};\\
\textit{If $L_i$ is of type $II$, then $B_{00}^i$ is empty};\\% (cf. Proposition \ref{propi-ii})};\\
\textit{$B_{11}^i$ is an $m_i\times m_i$-matrix.}
\end{array} \right. \end{equation}
Let $q_{11}^i$ be the integral quadratic form represented by the symmetric matrix $B_{11}^i$ and let $\bar{q}_{11}^i$ be the quadratic form $\frac{1}{2}\cdot q_{11}^i$ modulo $2$, which is represented by the symmetric matrix $\frac{1}{2}\cdot B^i_{11}$ modulo $2$.
We claim the following lemma: \begin{lemma} The quadratic form $\bar{q}_{11}^i$ is the same as the nonsingular quadratic form $\bar{q}_i$ lying over the quadratic space $\bar{V}_i$, given in Section \ref{sec:5.1}. \end{lemma} \begin{proof} We first claim that
the symmetric matrix $\tilde{B}^i=\begin{pmatrix} 4B_{00}^i&2B_{01}^i&2B_{02}^i\\ 2B_{10}^i&B_{11}^i&B_{12}^i\\ 2B_{20}^i&B_{21}^i&B_{22}^i \end{pmatrix}$ represents the restriction of $\frac{1}{2^{i}}Q_L$ to $B(L\cap 2^i L^\sharp)$,
where $B(L\cap 2^i L^\sharp)$ is defined as in (7) of the list of terminology introduced shortly following the beginning of Section \ref{sec:5.1}. To prove this, we express $B^i=(b_{st})$, $1 \leq s,t \leq n$, where $B^i$ is as explained in Lemma \ref{lem48}. Since the symmetric matrix $B^i$ represents the integral quadratic form $\frac{1}{2^i}Q_L$ restricted to $L\cap 2^i L^\sharp$, we have the following:
\begin{equation}\label{eqn43}
\frac{1}{2^i}Q_L|_{L\cap 2^i L^\sharp}(x_1, \cdots, x_n)= \sum_{s=1}^{n}b_{ss}x_s^2+2\sum_{1\leq s<t\leq n}b_{st}x_sx_t. \end{equation}
If $L_i$ is of type $II$ then $B^i_{00}$ is empty and the entry $b_{ss}$ for $1\leq s \leq n$ is contained in the ideal $(2)$ of $\mathfrak{o}$ by (\ref{eqqq}) and so $B(L\cap 2^i L^\sharp)=L\cap 2^i L^\sharp$. This verifies our claim.
If $L_i$ is of type $I$ then $b_{11}$ is a unit in $\mathfrak{o}$ and $b_{ss}$ for $2\leq s \leq n$ is contained in the ideal $(2)$ of $\mathfrak{o}$ by (\ref{eqqq}). Therefore, we have the following description of the integral quadratic form $\frac{1}{2^{i}}Q_L$ restricted to $B(L\cap 2^i L^\sharp)$: \[
\frac{1}{2^{i}}Q_L|_{B(L\cap 2^i L^\sharp)}(x_1, \cdots, x_n)= 4b_{11}x_1^2+\sum_{s=2}^{n}b_{ss}x_s^2+4\sum_{1< t\leq n}b_{1t}x_1x_t +2\sum_{2\leq s<t\leq n}b_{st}x_sx_t.
\] Here we have replaced $x_1$ on the right hand side of Equation (\ref{eqn43}) by $2x_1$. Now it is clear that this quadratic form is represented by the symmetric matrix $\tilde{B}^i$.
Note that any non-diagonal entry of $\tilde{B}^i$ multiplied by $2$ as well as any diagonal entry of $\tilde{B}^i$, except entries of $B_{11}^i$, is divisible by $4$. Since $\bar{q}_i$ is defined to be the quadratic form represented by the symmetric matrix $\frac{1}{2}\cdot \tilde{B}^i$ modulo $2$, we have that $\bar{q}_i=\bar{q}^i_{11}$. \end{proof}
The above lemma implies that the quadratic form $\bar{q}_{11}^i$ is independent of the choice of a reduced form $B^i$. We obtain the following result: \begin{lemma} \label{lem:5.4}
Recall that $\mathrm{GK}(L\cap 2^i L^\sharp)=(a_1, \cdots, a_n)$ and that $m_i=\#\{t | a_t=1\}$. Then \[m_i=\dim \bar{V}_i.\] \end{lemma}
\begin{definition}\label{tegk} We define $\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}$ (respectively $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ ) to be the Gross-Keating invariant (respectively the extended Gross-Keating datum) of the reduced form
$\begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix}$, so that the only integers appearing in $\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}$ are $0$ or $1$.
\end{definition} We remark that the truncated invariant $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ is much simpler than $\mathrm{EGK}(L\cap 2^i L^\sharp)$.
\begin{proposition}\label{prop:5.5} We have the following formula for $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ (for the notion of $\mathrm{EGK}$, see Definition \ref{def:3.3}): \begin{enumerate} \item Assume that $m_i$ is even. Then we have
\[ \mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=\left\{
\begin{array}{l l}
(1, m_i;0, 1;1, \zeta_2) & \quad \textit{if $L_i$ is of type $I$};\\ (m_i; 1;\zeta_1) & \quad \textit{if $L_i$ is of type $II$}.
\end{array} \right. \] Here, \[\textit{$\bar{q}_{11}^i=\bar{q}_{i}$ is split if and only if $\zeta_2=1$ (resp. $\zeta_1=1$) in the first case (resp. the second case).}\] If $m_i=0$, then we understand the right hand side to be $(1;0;1)$ if $L_i$ is of type $I$ and zero if $L_i$ is of type $II$.
\item Assume that $m_i$ is odd. Then we have
\[ \mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=\left\{
\begin{array}{l l}
(1, m_i;0, 1;1, 0) & \quad \textit{if $L_i$ is of type $I$};\\ (m_i;1;1) & \quad \textit{if $L_i$ is of type $II$}.
\end{array} \right. \] \end{enumerate} \end{proposition}
Before proving the proposition, we state the following corollary, which is a direct consequence of the proposition. \begin{corollary} $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ is independent of the choice of a reduced form $B^i$.
In other words, for any two reduced forms $B^i$ and $C^i$ for $L\cap 2^i L^\sharp$, $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ associated to $B^i$ is the same as $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ associated to $C^i$. In addition,
$\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ determines whether $\bar{q}_{11}^i$ is split or not. \end{corollary}
\begin{proof}[Proof of Proposition \ref{prop:5.5}] For (1), we first assume that $m_i$ is even and that $L_i$ is of type $I$. The case with $m_i=0$ is easy and so we leave it as an exercise. Thus we also assume that $m_i$ is nonzero.
By Proposition 3.1 of \cite{IK1}, the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix}$ is unimodular of type $I^o$. Thus, using Theorem 2.4 of \cite{Cho}, we have that \[ \begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix} \cong (\epsilon)\perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\perp \cdots \perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \perp \begin{pmatrix} 2&1\\ 1&2a \end{pmatrix} \]
for $\epsilon\equiv 1\textit{ mod 2}$ and $a\in \mathfrak{o}$.
Note that the right hand side is a reduced form with the involution $\sigma$ such that $\sigma(1)=1$ and $\sigma(t)=t+1$ for every even integer $t$. Thus we can see that $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=(1, m_i;0, 1;1, \zeta_2)$. To prove the assertion about $\zeta_2$, we observe that $\bar{q}_{11}^i$ is split if and only if the equation $x^2+xy+\bar{a}y^2=0$ has a solution over $\frkk$ other than $(x,y)=(0,0)$. Here $\bar{a}$ is the image of $a$ in $\frkk$.
On the other hand, $\zeta_2=1$ if and only if $\begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix} $ is split over $F$. This is equivalent to the condition that $(\epsilon)\perp \begin{pmatrix} 2&1\\ 1&2a \end{pmatrix}$ is isotropic over $F$, which holds if and only if $\begin{pmatrix} 2&1\\ 1&2a \end{pmatrix} \perp (4\epsilon)$ is isotropic over $F$. The last condition is equivalent to the condition that $\begin{pmatrix} 2&1\\ 1&2a \end{pmatrix} \perp (4\epsilon)$ is isotropic over $\mathfrak{o}$,
if and only if there is a solution $(x,y,z)$ (with $(x,y,z)\neq (0,0,0)$) over $\mathfrak{o}$ of $x^2+xy+ay^2+2\epsilon z^2=0$.
We claim that this is equivalent to the condition that the equation $x^2+xy+\bar{a}y^2=0$ has a nontrivial solution over $\frkk$, i.e. a solution other than $(x,y)=(0,0)$. Assume that $x^2+xy+ay^2+2\epsilon z^2=0$ has a nontrivial solution $(x_0, y_0, z_0)\neq (0,0,0)$ over $\mathfrak{o}$.
Since this polynomial is homogeneous, we may assume that at least one of $x_0, y_0, z_0$ is a unit in $\mathfrak{o}$. If either $x_0$ or $y_0$ is a unit, then the equation $x^2+xy+\bar{a}y^2=0$ has a nontrivial solution over $\frkk$, by simply taking reduction modulo $\varpi$. If both $x_0$ and $y_0$ are non-units and $z_0$ is a unit, we would have $x_0^2+x_0y_0+ay_0^2=-2\epsilon z_0^2$. The exponential valuation of the left hand side is at least $2$, whereas that of the right hand side is $1$. This is a contradiction.
Conversely, if the equation $x^2+xy+\bar{a}y^2=0$ has a nontrivial solution over $\frkk$, then the equation $x^2+xy+ay^2+2\epsilon z^2=0$ also has a nontrivial solution over $\mathfrak{o}$, by the multivariable version of Hensel's lemma (cf. Theorems 2.1 and 3.8 of \cite{Con} or Proposition 5 in Section 2.3 of \cite{BLR}).
This verifies our claim about $\zeta_2$.
Secondly, we assume that $m_i$ is even and that $L_i$ is of type $II$ so that $B_{00}^i$ is empty. Then as in the above case,
Proposition 3.1 of \cite{IK1} yields that the quadratic lattice represented by the symmetric matrix $B_{11}^i$ is unimodular of type $II$. Thus using Theorem 2.4 of \cite{Cho}, we have that \[ B_{11}^i \cong
\begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\perp \cdots \perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \perp \begin{pmatrix} 2a&1\\ 1&2b \end{pmatrix} \]
for $a, b\in \mathfrak{o}$, so that $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=(m_i; 1;\zeta_1)$ and
$F(\sqrt{D_{B_{11}^i}})=F(\sqrt{1-4ab})$.
Here, $D_{B_{11}^i}$ is defined in the paragraph before Definition \ref{def:0.3}.
The field extension $F(\sqrt{1-4ab})/F$ is the splitting field of the equation $x^2-x+ab=0$. Thus $F(\sqrt{1-4ab})/F$ is either trivial or nontrivial unramified. In addition, $F(\sqrt{1-4ab})/F$ is trivial if and only if $\zeta_1=1$ if and only if the equation $x^2-x+ab=0$ has a solution over $\mathfrak{o}$. Hensel's lemma yields that this is equivalent to the existence of a solution of the equation $x^2+x+\bar{a}\bar{b}=0$ over $\frkk$.
We claim that this is equivalent to the condition that $\bar{q}_{11}^i$ is split, which holds if and only if the equation $\bar{a}x^2+xy+\bar{b}y^2=0$ has a nontrivial solution over $\frkk$. If $\bar{a}=0$, then both equations have nontrivial solutions. If $\bar{a}\neq 0$, then the equation $\bar{a}x^2+xy+\bar{b}y^2=0$ is equivalent to $(\bar{a}x)^2+(\bar{a}x)y+\bar{a}\bar{b}y^2=0$. Then the existence of a nontrivial solution of $(\bar{a}x)^2+(\bar{a}x)y+\bar{a}\bar{b}y^2=0$ is equivalent to the existence of a solution of $x^2+x+\bar{a}\bar{b}=0$ over $\frkk$. This verifies our claim about $\zeta_1$.
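(For instance, when $\frkk={\mathbb F}_2$: if $\bar{a}=\bar{b}=1$, then $x^2+x+1$ has no root in ${\mathbb F}_2$ and $x^2+xy+y^2$ is anisotropic over ${\mathbb F}_2$, while if $\bar{a}=1$ and $\bar{b}=0$, then $x^2+x$ has the roots $0, 1$ and $x^2+xy=x(x+y)$ is isotropic.)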
For (2), assume that $m_i$ is odd. As in the above case,
Proposition 3.1 of \cite{IK1} yields that the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix}$ has two Jordan components, say $M_0\oplus M_1$, where $\bfs(M_j)=\frkp^j$ with $j=0 \textit{ or }1$. Note that $M_0$ could be empty depending on the type and the rank of $L_i$, which will be explained below.
If $L_i$ is of type $I$, then the rank of $M_0$ (resp. $M_1$) is $m_i$ (resp. $1$). Thus using Theorem 2.4 of \cite{Cho}, we have that \[ \begin{pmatrix} B_{00}^i&B_{01}^i\\ B_{10}^i&B_{11}^i \end{pmatrix} \cong (\epsilon)\perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\perp \cdots \perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \perp \begin{pmatrix} 2&1\\ 1&2a \end{pmatrix}\perp (2\epsilon') \]
for $\epsilon, \epsilon' \in \mathfrak{o}^{\times}$ and $a\in \mathfrak{o}$.
If $m_i=1$, then we understand the right hand side to be $(\epsilon)\perp (2\epsilon')$.
The right hand side is a reduced form of GK-type $(\underline{a}, \sigma)$, where
$\underline{a}=(0, 1, \cdots, 1)$ and $\sigma$ exchanges $t$ and $t+1$ for every even integer $t$ with $2\leq t\leq m_i-1$,
and fixes $1$ and $m_i+1$.
Using this, we can easily see that $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=(1, m_i;0, 1;1, 0)$.
If $L_i$ is of type $II$, then $B_{00}^i$ is empty and thus
the rank of $M_0$ (resp. $M_1$) is $m_i-1$ (resp. $1$).
The case with $m_i=1$ is easy and so we leave it as an exercise.
Thus we assume that $m_i>1$.
Using Theorem 2.4 of \cite{Cho}, we have that \[ B_{11}^i \cong \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\perp \cdots \perp \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} \perp \begin{pmatrix} 2a&1\\ 1&2b \end{pmatrix}\perp (2\epsilon') \]
for $\epsilon' \in \mathfrak{o}^{\times}$ and $a, b\in \mathfrak{o}$. On the other hand, the quadratic lattice represented by $\begin{pmatrix} 2a&1\\ 1&2b \end{pmatrix}\perp (2\epsilon')$ is always isotropic, as can be proved using
Hensel's lemma. Thus we have that $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=(m_i;1;1)$. \end{proof}
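As an illustration of the two cases, assume that $F={\mathbb Q}_2$ and let $L$ and $M$ be the unimodular quadratic lattices represented by $\begin{pmatrix} 1&0\\0&1 \end{pmatrix}$ and $\begin{pmatrix} 1&0\\0&3 \end{pmatrix}$, respectively (the lattices of Remark \ref{rmk2}); since they are unimodular, $L\cap L^\sharp=L$ and $M\cap M^\sharp=M$. One checks, for instance by Propositions 2.1 and 2.2 of \cite{IK1} (restated as Propositions \ref{prop:6.1} and \ref{prop:6.2} in the Appendix), that $\mathrm{GK}(L)=(0,1)$ and $\mathrm{GK}(M)=(0,2)$. Hence $m_0=1$ is odd for $L$ and case (2) gives $\mathrm{EGK}(L\cap L^\sharp)^{\leq 1}=(1,1;0,1;1,0)$, whereas $m_0=0$ for $M$ and case (1) gives $\mathrm{EGK}(M\cap M^\sharp)^{\leq 1}=(1;0;1)$.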
\subsection{Final result}\label{secfr} We now state our main theorem of this section.
\begin{theorem}\label{thm-ldgk} The local density $\beta(L)$ is completely determined by the collection consisting of $\mathrm{GK}(L\oplus -L)$ together with the $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ as $i$ runs over all integers such that $L_i$ is nonzero, where $L=\oplus_i L_i$ is a Jordan splitting.
In other words, given any two quadratic lattices $L, M$ satisfying \[ \left\{
\begin{array}{l}
\textit{$\mathrm{GK}(L\oplus -L)=\mathrm{GK}(M\oplus -M)$};\\
\textit{$\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}=\mathrm{EGK}(M\cap 2^i M^\sharp)^{\leq 1}$ for all $i$},
\end{array} \right. \] we have that \[\beta(L)=\beta(M).\] \end{theorem}
\begin{proof} Firstly,
by Proposition \ref{prop:5.5}, the parity type of $L_i$ is $I$ if and only if $0\in \mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}$. Thus the collection of $\{\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}|i\in \mathbb{Z}, L_i\neq 0\}$ determines the integer $\beta$ in Theorem \ref{thmcho412} and the integers $t$ and $b$ in Theorem \ref{thmcho52}.
Corollary \ref{cortypelii} implies that $\mathrm{GK}(L\oplus -L)$ and the collection of $\{\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}|i\in \mathbb{Z}, L_i\neq 0\}$ determine $\mathcal{A}_i$'s and $\mathcal{B}_i$'s. Thus by Corollary \ref{cortypeli}, the rank of each $L_i$ (denoted by $n_i$) is determined by $\mathrm{GK}(L\oplus -L)$ together with the collection of $\{\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}|i\in \mathbb{Z}, L_i\neq 0\}$. Therefore, these two invariants determine $N$ as well as $c$ in Theorem \ref{thmcho52}.
Secondly, the quadratic space $\bar{V_i}$ (and thus $\#\mathrm{O}(\bar{V_i}, \bar{q_i})^{\mathrm{red}}$) is determined by $\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}$ as we see using Lemma \ref{lem:5.4} (which determines the dimension of $\bar{V_i}$) and Proposition \ref{prop:5.5} (which determines whether or not $\bar{V_i}$ is split).
Thus it suffices to show that the integer $\alpha$ in Theorem \ref{thmcho412} is determined by $\mathrm{GK}(L\oplus -L)$ and
the collection of $\{\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}|i\in \mathbb{Z}, L_i\neq 0\}$.
These two invariants determine each parity type and the rank of $L_i$ as explained above.
Assume that $L_i$ is free of type $I^e$. Then $L_i$ is of type $I^e_1$ if and only if the dimension of $\bar{V}_i$ is odd. The latter is determined by $\mathrm{GK}(L\cap 2^i L^\sharp)^{\leq 1}$ (cf. Lemma \ref{lem:5.4}). This completes the proof. \end{proof}
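For instance, the lattices $L$ and $M$ of Remark \ref{rmk2} satisfy $\mathrm{GK}(L\oplus -L)=\mathrm{GK}(M\oplus -M)$ but have different local densities; in accordance with the theorem, their truncated data differ, since $\mathrm{EGK}(L\cap L^\sharp)^{\leq 1}=(1,1;0,1;1,0)$ while $\mathrm{EGK}(M\cap M^\sharp)^{\leq 1}=(1;0;1)$, as computed in the example following the proof of Proposition \ref{prop:5.5}.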
\begin{remark}\label{rmk3} As in Remark \ref{rmk2}, the sequence $\{\mathrm{EGK}(L\cap 2^i L^\sharp)^{\leq 1}\}_{L_i\neq 0}$ is not enough to describe the local density $\beta(L)$. As an example, let $L$ be the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} \begin{pmatrix} 0&1\\1&0 \end{pmatrix}&0\\0&\begin{pmatrix} 1&0\\0&3 \end{pmatrix} \end{pmatrix}$ and let $M$ be the quadratic lattice represented by the symmetric matrix $\begin{pmatrix} \begin{pmatrix} 0&1\\1&0 \end{pmatrix}&0\\0&1 \end{pmatrix}$. Then $L$ is unimodular of type $I^e$ and $M$ is unimodular of type $I^o$
so that they have different local densities. But we can easily show that \[\mathrm{EGK}(L)^{\leq 1}=\mathrm{EGK}(M)^{\leq 1} (=\mathrm{EGK}(M))=(1,2;0,1;1,1). \]
\end{remark}
\begin{remark}\label{podd} In this remark, we assume that $F$ is a finite field extension of $\mathbb{Q}_p$ with $p>2$. Then a quadratic lattice $(L, Q_L)$ is always diagonalizable. That is, there is a basis of $L$ such that with respect to this basis the symmetric matrix of the quadratic lattice $(L, Q_L)$ is $(b_1)\perp \cdots \perp (b_n)$, where $\mathrm{ord}(b_i)\leq \mathrm{ord}(b_j)$ if $i \leq j$. If we define $a_i$ to be $\mathrm{ord}(b_i)$, then Remark 1.1 of \cite{IK1} says that $\mathrm{GK}(L)=(a_1, \cdots, a_n)$.
Thus $\mathrm{GK}(L)$ determines $n_i$ by the relation $n_i=\#\{a_j|a_j=i\}$. Here, $n_i$ is
the rank of $L_i$, where $L=\oplus L_i$ is a Jordan splitting such that $\bfs(L_i)=\frkp^i$.
On the other hand, one of the principal results of \cite{GY} implies that the local density $\beta(L)$ is completely determined by all the $n_i$. More precisely, by Theorem 7.3 of loc. cit., $\beta(L)$ is determined by all the $n_i$ and the $\#G_i(\frkk)$. For the definition of $G_i$, see Section 6.2 of loc. cit. The algebraic group $G_i$ defined over $\frkk$ is described in Proposition 6.2.3 of loc. cit. It is basically an orthogonal group associated to a nondegenerate quadratic space of dimension $n_i$ over $\frkk$. If $n_i$ is odd, then $G_i$ is split. If $n_i$ is even, then it could be either split or non-split. Nonetheless, $\#G_i(\frkk)$ only depends on the field $\frkk$ and the dimension of the quadratic space, which is $n_i$, as described on page 818 of \cite{Cho}.
In conclusion, the local density $\beta(L)$ is completely determined by $\mathrm{GK}(L)$.
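For instance, if $L$ is represented by $(b_1)\perp (b_2)\perp (b_3)$ with $\mathrm{ord}(b_1)=0$, $\mathrm{ord}(b_2)=1$ and $\mathrm{ord}(b_3)=2$, then $\mathrm{GK}(L)=(0,1,2)$, from which one reads off $n_0=n_1=n_2=1$.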
\end{remark}
\appendix \section{The local density of a binary quadratic form} \label{App:Appendix}
\centerline{Tamotsu IKEDA} \centerline{Graduate school of mathematics, Kyoto University,} \centerline{Kitashirakawa, Kyoto, 606-8502, Japan} \centerline{[email protected]}
\centerline{Hidenori KATSURADA} \centerline{Muroran Institute of Technology} \centerline{27-1 Mizumoto, Muroran, 050-8585, Japan} \centerline{[email protected]}
In this appendix, we calculate the local density of a binary form over a dyadic field $F$ which may not be an unramified extension of ${\mathbb Q}_2$. We also calculate the Gross-Keating invariant $\mathrm{GK}(L\perp -L)$ and the truncated EGK invariant $\mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}$ for a binary quadratic lattice $L$. The local density formula is given in Proposition \ref{prop:A6}. We also show that the local density is not determined by $\mathrm{GK}(L\perp -L)$ and $\mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}$, if we drop the assumption that $F/{\mathbb Q}_2$ is unramified (See Example \ref{ex:A1}).
Let $\frko$ be the ring of integers of $F$, $\frkp$ the maximal ideal of $\frko$, $\varpi$ a prime element of $F$, and $q$ the order of the residue field. The ramification index of $F/{\mathbb Q}_2$ is denoted by $e$.
Let $(L, Q)$ and $(L_1, Q_1)$ be quadratic lattices of rank $n$ over $\frko$. We say that $(L, Q)$ and $(L_1, Q_1)$ are weakly equivalent if there exist an isomorphism $\iota:L\rightarrow L_1$ and a unit $u\in \frko^\times$ such that $u Q_1(\iota(x))=Q(x)$ for any $x\in L$. Similarly, we say that $B, B_1\in {\mathcal H}_n(\frko)$ are weakly equivalent if there exist a unimodular matrix $U\in{\mathrm{GL}}_n(\frko)$ and a unit $u \in\frko^\times$ such that $u B_1=B[U]$. If $B$ and $B_1$ are weakly equivalent, then $\mathrm{GK}(B)=\mathrm{GK}(B_1)$. Recall that a half-integral symmetric matrix $B\in{\mathcal H}_n(\frko)$ is primitive if $\varpi^{-1}B\notin {\mathcal H}_n(\frko)$. Put $\mathrm{GK}(B)=(a_1, a_2, \ldots, a_n)$. Then $B$ is primitive if and only if $a_1=0$.
Let $E/F$ be a semi-simple quadratic algebra. This means that $E$ is a quadratic extension of $F$ or $E=F\times F$. The non-trivial automorphism of $E/F$ is denoted by $x\mapsto \bar x$. Note that if $E=F\times F$, we have $\overline{(x_1, x_2)}=(x_2, x_1)$. Let $\frko_E$ be the maximal order of $E$. In the case $E=F\times F$, $\frko_E=\frko\times \frko$. The discriminant ideal of $E/F$ is denoted by $\frkD_E$. When $E=F\times F$, we understand $\frkD_E=\frko$. Put $d=\mathrm{ord}(\frkD_E)$ and \[ \xi= \begin{cases} 1 & \text{ if $E=F\times F$, } \\ -1 & \text{ if $E/F$ is unramified quadratic extension, } \\ 0 & \text{ if $E/F$ is ramified quadratic extension. } \end{cases} \]
We say that $E/F$ is unramified if $d=0$. Note that $d\in\{2g\,|\, g\in{\mathbb Z}, 0\leq g \leq e\}\cup\{2e+1\}$. The order $\frko_{E, f}$ of conductor $f$ for $E/F$ is defined by $\frko_{E, f}=\frko+\frkp^f\frko_E$. Any open $\frko$-subring of $\frko_E$ is of the form $\frko_{E, f}$ for some non-negative integer $f$.
\begin{proposition}[\cite{IK1}, Proposition 2.1] \label{prop:6.1} Let $B\in{\mathcal H}^\mathrm{nd}_2(\frko)$ be a primitive half-integral symmetric matrix of size $2$ and $(L, Q)$ its associated quadratic lattice. Put $E=F(\sqrt{D_B})/F$. When $D_B\in F^{\times 2}$, we understand $E=F\times F$. Put $f=(\mathrm{ord}(D_B)-\mathrm{ord}(\mathfrak{D}_E))/2$. Then $f$ is a non-negative integer and $(L, Q)$ is weakly equivalent to $(\frko_{E, f}, {\mathcal N})$, where ${\mathcal N}$ is the norm form for $E/F$. \end{proposition}
\begin{proposition}[\cite{IK1}, Proposition 2.2] \label{prop:6.2} The Gross-Keating invariant of the binary quadratic form $(L, Q)=(\frko_{E,f}, {\mathcal N})$ is given by \[ \begin{cases} (0, 2f) & \text{ if $E/F$ is unramified,} \\ (0, 2f+1) & \text{ if $E/F$ is ramified.} \end{cases} \] \end{proposition}
The following lemma is well-known. \begin{lemma} \label{lem:6.3} We have \[ [\frko^\times:\frko^{\times 2}(1+\frkp^f)]= \begin{cases} q^{\left[\frac f2\right]} & \text{ if $0<f\leq 2e$,} \\ 2q^e & \text{ if $f> 2e$.} \end{cases} \] \end{lemma}
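For example, if $F={\mathbb Q}_2$ (so that $q=2$ and $e=1$), the lemma gives the indices $1$, $2$ and $4$ for $f=1$, $f=2$ and $f\geq 3$, respectively; indeed $\frko^{\times 2}(1+\frkp)=\frko^\times$, $\frko^{\times 2}(1+\frkp^2)=1+4{\mathbb Z}_2$ and $\frko^{\times 2}(1+\frkp^f)=1+8{\mathbb Z}_2$ for $f\geq 3$.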
Choose $\omega\in \frko_E$ such that $\frko_E=\frko+\frko\omega$. If $E/F$ is unramified, then $\mathrm{ord}_E(\omega)=0$. If $E/F$ is ramified, then we may assume $\omega$ is a prime element of $E$. Put $h=\mathrm{ord}_E(\omega)$.
We fix an $\frko$-module isomorphism $\frko^2\simeq \frko_{E, f}$ by $(x, y)\mapsto x+\varpi^f \omega y$. By this isomorphism, we identify $\frko^2$ and $\frko_{E,f}$. We define a quadratic form $Q(x, y)$ by \[ Q(x, y)={\mathcal N}(x+\varpi^f \omega y)=x^2+\varpi^f \mathrm{tr}(\omega) xy +\varpi^{2f}{\mathcal N}(\omega)y^2. \] An $\frko$-endomorphism of $\frko_{E,f}$ is expressed as \[ U_{\alpha, \beta}(x+ \varpi^f \omega y) = \alpha x + \beta \varpi^f \omega y \] for some $\alpha\in \frko_{E, f}$ and $\beta\in (\varpi^f\omega)^{-1}\frko_{E, f}$. Note that $U_{\alpha, \alpha}\circ U_{\beta, \gamma}=U_{\alpha\beta, \alpha\gamma}$. Note also that \[ Q(U_{\alpha, \beta}(x,y))= {\mathcal N}(\alpha)x^2+\varpi^f \mathrm{tr}(\bar\alpha\omega\beta) xy +\varpi^{2f}{\mathcal N}(\omega\beta)y^2. \] We shall determine when $Q\circ U_{\alpha, \beta}\equiv Q$ mod $\frkp^N$, where $N$ is a sufficiently large integer. Put \[
V_N=\{\alpha\in \frko_{E,f}\,|\, {\mathcal N}(\alpha)\equiv 1 \text{ mod } \frkp^N\}. \] Then $V_N\subset \frko_{E,f}^\times$. Clearly, if $Q\circ U_{\alpha, \beta}\equiv Q$ mod $\frkp^N$, then $\alpha\in V_N$. Replacing $U_{\alpha, \beta}$ by $U_{\alpha, \alpha}^{-1}\circ U_{\alpha, \beta}$, we may assume $\alpha=1$. Then $\beta$ belongs to the set \[ W_N=\left\{\beta\in (\varpi^f\omega)^{-1}\frko_{E, f}\ \vrule\ \begin{matrix} \mathrm{tr}(\omega\beta)\equiv \mathrm{tr}(\omega) \text{ mod } \frkp^{N-f} \\ {\mathcal N}(\beta)\equiv 1 \text{ mod } \frkp^{N-2f-h}\ \ \end{matrix}\right\}. \] Thus we have \begin{align*}
\{U_{\alpha, \beta}\,|\,\text{$Q\circ U_{\alpha, \beta}\equiv Q$ mod $\frkp^N$}\}
=\{U_{\alpha, \alpha}\circ U_{1, \beta}\,|\, \alpha\in V_N, \ \beta\in W_N\}, \end{align*} As we have assumed that $N$ is sufficiently large, we have $W_N\subset \frko_{E,f}^\times$. Then the local density for $(\frko_{E,f}, Q)$ is \[ \frac 12 q^{3N} \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} \frac{\mathrm{Vol}(W_N)} {\mathrm{Vol}((\varpi^f \omega)^{-1}\frko_{E,f})}. \] \begin{lemma} \label{lem:6.4} \[ \frac{\mathrm{Vol}(W_N)} {\mathrm{Vol}((\varpi^f \omega)^{-1}\frko_{E,f})} =2 q^{-2N+2f+d}. \] \end{lemma} \begin{proof} For $\beta\in W_N$, we have \begin{align*} (\bar \beta-1)(\omega \beta-\bar\omega)&\equiv \omega{\mathcal N}(\beta)-\mathrm{tr}(\omega \beta )+\bar \omega \\ &\equiv \omega-\mathrm{tr}(\omega)+\bar \omega \\ &\equiv 0 \qquad \qquad\qquad\text{ mod } \varpi^{N-2f}\frko_E. \end{align*} It follows that \[ W_N\subset (1+\varpi^{N-2f-d}\frko_E)\cup \left(\frac{\bar\omega}{\omega}+\varpi^{N-2f-d}\frko_E\right). \] Put $W'_N=W_N\cap (1+\varpi^{N-2f-d}\frko_E)$. Then we have \[ W_N=W'_N\cup \frac{\bar\omega}{\omega}\overline{W'_N}. \] Note that $\mathrm{ord}_E(1-\frac{\bar\omega}{\omega})=\mathrm{ord}_E(\omega^{-1}(\omega-\bar\omega))=d-h$, and so we have $W'_N\cap \frac{\bar\omega}{\omega}\overline{W'_N}=\emptyset$, since $N$ is sufficiently large. Hence we have $\mathrm{Vol}(W_N)=2\mathrm{Vol}(W'_N)$. Note that \[ W'_N= \left\{ 1+\varpi^{N-2f-d}\gamma\in 1+\varpi^{N-2f-d}\frko_E \ \vrule\ \begin{matrix} \mathrm{tr}(\omega\gamma)\equiv 0 \text{ mod } \frkp^{f+d} \\ \mathrm{tr}(\gamma)\equiv 0 \text{ mod } \frkp^{d-h}\ \ \end{matrix}\right\}. \] Observe that if $\gamma=x+\bar\omega y$, \ $x, y\in\frko$, then \[ \begin{pmatrix} \mathrm{tr}(\gamma) \\ \mathrm{tr}(\omega\gamma) \end{pmatrix} = \begin{pmatrix} 2 & \mathrm{tr}(\omega) \\ \mathrm{tr}(\omega) & 2{\mathcal N}(\omega) \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}. \] Since $\mathrm{ord}\left(\det \begin{pmatrix} 2 & \mathrm{tr}(\omega) \\ \mathrm{tr}(\omega) & 2{\mathcal N}(\omega) \end{pmatrix}\right)=\mathrm{ord}(\omega-\bar\omega)^2=d$, we have \[ \frac{\mathrm{Vol}(W'_N)} {\mathrm{Vol}(1+\varpi^{N-2f-d}\frko_E)} =
q^{-f-d+h}. \] Note also that \[ \mathrm{Vol}((\varpi^f \omega)^{-1}\frko_{E,f})=q^{2f+h}\mathrm{Vol}(\frko_{E,f})= q^{f+h}\mathrm{Vol}(\frko_{E}). \] Hence we have \begin{align*} \frac{\mathrm{Vol}(W_N)} {\mathrm{Vol}((\varpi^f \omega)^{-1}\frko_{E,f})} &=2 \frac{\mathrm{Vol}(W'_N)} {\mathrm{Vol}(1+\varpi^{N-2f-d}\frko_E)} \frac {\mathrm{Vol}(1+\varpi^{N-2f-d}\frko_E)} {\mathrm{Vol}((\varpi^f \omega)^{-1}\frko_{E,f})} \\ &= 2 q^{-f-d+h}\cdot q^{-2N+3f+2d-h} \\ &=2 q^{-2N+2f+d}. \end{align*} \end{proof}
\begin{lemma} \label{lem:6.5} \begin{itemize} \item[(1)] If $E/F$ is unramified, then \[ \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} = \begin{cases} q^{-N}(1-\xi q^{-1}) & \text{ if $f=0$,} \\ \noalign{\vskip 3pt} q^{-N+[f/2]} & \text{ if $0 < f \leq 2e$,} \\ \noalign{\vskip 2pt} 2q^{-N+e} & \text{ if $f>2e$.} \end{cases} \] \item[(2)] If $E/F$ is ramified, then \[ \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} = \begin{cases} 2q^{-N+f} & \text{ if $0\leq f < \left[\frac{d+1}2\right]$,} \\ \noalign{\vskip 5pt} q^{-N+[\frac f2+\frac d4]} & \text{ if $\left[\frac {d+1}2\right] \leq f \leq 2e-\left[ \frac d2\right]$,} \\ \noalign{\vskip 5pt} 2q^{-N+e} & \text{ if $f> 2e-\left[\frac d2\right]$.} \end{cases} \] \end{itemize} \end{lemma} \begin{proof} We normalize the Haar measure of $E$ and $F$ by $\mathrm{Vol}(\frko_E)=\mathrm{Vol}(\frko)=1$. Since $N$ is sufficiently large, ${\mathcal N}(\frko_{E,f}^\times)\supset 1+\frkp^N$. Then we have \[ [\frko_{E,f}^\times :V_N]=[{\mathcal N}(\frko_{E,f}^\times): 1+\frkp^N]. \] We have \begin{align*} \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} =& \frac{\mathrm{Vol}(\frko_{E,f}^\times)} {\mathrm{Vol}(\frko_{E,f})} \frac{\mathrm{Vol}(1+\frkp^N)} {\mathrm{Vol}({\mathcal N}(\frko^\times_{E,f}))} \\ =& q^{-N+f} \frac{\mathrm{Vol}(\frko_{E,f}^\times)}{\mathrm{Vol}({\mathcal N}(\frko^\times_{E,f}))}. \end{align*} It is easily seen that \[ \mathrm{Vol}(\frko_{E,f}^\times) = \begin{cases} (1-q^{-1})(1-\xi q^{-1}) & \text{ if $f=0$, } \\
q^{-f} (1-q^{-1}) & \text{ if $f>0$. } \end{cases} \] Thus it is enough to calculate $\mathrm{Vol}({\mathcal N}(\frko_{E,f}^\times))$. If $f=0$, then \[ [\frko^\times :{\mathcal N}(\frko_E^\times)]= \begin{cases} 1 & \text{ if $E/F$ is unramified, } \\ 2 & \text{ if $E/F$ is ramified.} \end{cases} \] This settles the case $f=0$. Suppose that $f>0$. Then $\frko_{E,f}^\times=\frko^\times (1+\varpi^f \frko_E)$ and so \[ {\mathcal N}(\frko_{E,f}^\times)=\frko^{\times 2} \cdot{\mathcal N}(1+\varpi^f \frko_E). \] If $E/F$ is unramified, then ${\mathcal N}(1+\varpi^f\frko_E)=1+\frkp^f$. By Lemma \ref{lem:6.3}, we have \[ \mathrm{Vol} ({\mathcal N}(\frko_{E,f}^\times)) = \begin{cases} (1-q^{-1}) q^{-\left[\frac f2\right]} &\text{ if $0< f\leq 2e$,} \\ \frac 12 (1-q^{-1}) q^{-e} &\text{ if $f> 2e$,} \end{cases} \] and so \[ \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} = \begin{cases} q^{-N+[f/2]} & \text{ if $0 < f \leq 2e$,} \\ \noalign{\vskip 2pt} 2q^{-N+e} & \text{ if $f>2e$.} \end{cases} \] Now suppose $E/F$ is ramified. By Serre \cite{serre}, p.85, Corollary 3, we have \[ {\mathcal N}(1+\varpi^f \frko_E)=1+\frkp^{f+\left[\frac d2\right]} \] for $f\geq \left[ \frac{d+1} 2\right]$. It follows that \[ {\mathcal N}(\frko_{E,f}^\times)=\frko^{\times 2}(1+\frkp^{f+\left[\frac d2\right]}) \] for $f\geq \left[ \frac{d+1} 2\right]$. By Lemma \ref{lem:6.3}, we have \[ \mathrm{Vol}({\mathcal N}(\frko_{E,f}^\times)) = \begin{cases} (1-q^{-1})q^{-[\frac f2+\frac d4]} & \text{ if $\left[\frac {d+1}2\right] \leq f \leq 2e-\left[ \frac d2\right]$,} \\ \noalign{\vskip 5pt} \frac 12 (1-q^{-1})q^{-e} & \text{ if $f> 2e-\left[\frac d2\right]$,}\end{cases} \] and so \[ \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} = \begin{cases} q^{-N+[\frac f2+\frac d4]} & \text{ if $\left[\frac {d+1}2\right] \leq f \leq 2e-\left[ \frac d2\right]$,} \\ \noalign{\vskip 5pt} 2q^{-N+e} & \text{ if $f> 2e-\left[\frac d2\right]$.} \end{cases} \] Finally, suppose that $0<f<\left[ \frac{d+1} 2\right]$. In this case, by Shimura \cite{shimura}, Lemma 21.13 (v), we have \[ {\mathcal N}(1+\varpi^f \frko_E)=(1+\frkp^{2f})\cap {\mathcal N}(\frko_E^\times). \] Since $1+\frkp^{d-1}\not\subset{\mathcal N}(\frko_E^\times)$, we have ${\mathcal N}(1+\varpi^f \frko_E)\subsetneqq 1+\frkp^{2f}$. Hence \[ \mathrm{Vol}({\mathcal N}(1+\varpi^f\frko_E))=\frac 12 \mathrm{Vol}(1+\frkp^{2f}) =\frac 12 q^{-2f}. \] On the other hand, we have \[ {\mathcal N}(1+\varpi^f \frko_E)\cap \frko^{\times 2}=(1+\frkp^{2f})\cap {\mathcal N}(\frko_E^\times)\cap \frko^{\times 2}=(1+\frkp^{2f})\cap \frko^{\times 2}. \] Hence \begin{align*} [\frko^{\times 2}:{\mathcal N}(1+\varpi^f \frko_E)\cap \frko^{\times 2}] =& [\frko^{\times 2}:(1+\frkp^{2f})\cap \frko^{\times 2}] \\ =& [\frko^{\times 2}(1+\frkp^{2f}): 1+\frkp^{2f}] \\ =& \frac{[\frko^\times: 1+\frkp^{2f}]} {[\frko^\times:\frko^{\times 2}(1+\frkp^{2f})]} \\ =& q^f(1-q^{-1}). \end{align*} It follows that \begin{align*} \mathrm{Vol}({\mathcal N}(\frko_{E,f}^\times))=& \mathrm{Vol}({\mathcal N}(1+\varpi^f\frko_E)) [\frko^{\times 2}\cdot{\mathcal N}(1+\varpi^f \frko_E):{\mathcal N}(1+\varpi^f \frko_E)] \\ =&\frac 12 q^{-2f}[\frko^{\times 2}:{\mathcal N}(1+\varpi^f \frko_E)\cap \frko^{\times 2}] \\ =&\frac 12 q^{-f}(1-q^{-1}). \end{align*} Hence we have \[ \frac{\mathrm{Vol}(V_N)} {\mathrm{Vol}(\frko_{E,f})} = 2q^{-N+f} \] in this case. This proves the lemma. \end{proof} By Lemma \ref{lem:6.4} and Lemma \ref{lem:6.5}, we obtain the following formula.
\begin{proposition} \label{prop:A6} \begin{itemize} \item[(1)] Assume that $E$ is unramified. Then the local density of $(L, Q)=(\frko_{E,f}, {\mathcal N})$ is given by \[ \beta(L)= \begin{cases} 1-\xi q^{-1} & \text{ if $f=0$,} \\ \noalign{\vskip 3pt} q^{[f/2]+2f} & \text{ if $0 < f \leq 2e$,} \\ \noalign{\vskip 2pt} 2q^{e+2f} & \text{ if $f>2e$.} \end{cases} \] \item[(2)] Assume that $E$ is ramified and that $d=2g\leq 2e$. Then the local density of $(L, Q)=(\frko_{E,f}, {\mathcal N})$ is given by \[ \beta(L)= \begin{cases} 2q^{3f+2g} & \text{ if $0\leq f < g$,} \\ \noalign{\vskip 5pt} q^{[\frac f2+\frac g2]+2f+2g} & \text{ if $g \leq f \leq 2e-g$,} \\ \noalign{\vskip 5pt} 2q^{2f+e+2g} & \text{ if $f> 2e-g$.} \end{cases} \] \item[(3)] Assume that $E$ is ramified and that $d=2e+1$. Then the local density of $(L, Q)=(\frko_{E,f}, {\mathcal N})$ is given by \[ \beta(L)= \begin{cases} 2q^{3f+2e+1} & \text{ if $0\leq f < e+1$,} \\ \noalign{\vskip 5pt} 2q^{2f+3e+1} & \text{ if $f\geq e+1$.} \end{cases} \] \end{itemize} \end{proposition}
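For instance, take $F={\mathbb Q}_2$ (so that $q=2$ and $e=1$) and $E={\mathbb Q}_2(\sqrt{-1})$, which is ramified with $d=2$, i.e. $g=1$. For $f=0$ the lattice $(L, Q)=(\frko_E, {\mathcal N})$ is represented by $\mathrm{diag}(1,1)$, and part (2) gives $\beta(L)=2q^{3f+2g}=2q^2=8$.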
Next, we calculate $\mathrm{GK}(L\oplus -L)$. \begin{proposition} \label{prop:A7} Suppose that $(L, Q)=(\frko_{E,f}, {\mathcal N})$. \begin{itemize} \item[(1)] Assume that $E$ is unramified. Then $L\oplus -L$ is equivalent to a reduced form of GK type $({\underline{a}}, \sigma)$, where \[ ({\underline{a}}, \sigma)= \begin{cases} ((0,0,0,0), \, (12)(34)) & \text{ if $f=0$,} \\ \noalign{\vskip 3pt} ((0,f,f,2f),\, (14)(23)) & \text{ if $0 < f < 2e$,} \\ \noalign{\vskip 2pt} ((0, 2e, 2f-2e, 2f),\, (12)(34)) & \text{ if $f\geq 2e$.} \end{cases} \] \item[(2)] Assume that $E$ is ramified and that $d\leq 2e$. Put $g=d/2$. Then $L\oplus -L$ is equivalent to a reduced form of GK type $({\underline{a}}, \sigma)$, where \[ ({\underline{a}}, \sigma)= \begin{cases} ((0, 2f+1, 2g-1, 2g+2f),\, (14)(23)) & \text{ if $0\leq f \leq g-1$,} \\ \noalign{\vskip 5pt}
((0, g+f, g+f, 2g+2f),\, (14)(23)) & \text{ if $g\leq f < 2e-g$,}
\\ \noalign{\vskip 5pt} ((0, 2e, 2g+2f-2e, 2g+2f),\, (12)(34)) & \text{ if $ f \geq 2e-g$.} \end{cases} \] \item[(3)] Assume that $E$ is ramified and that $d=2e+1$. Then $L\oplus -L$ is equivalent to a reduced form of GK type $({\underline{a}}, \sigma)$, where \[ ({\underline{a}}, \sigma)= \begin{cases} ((0, 2f+1, 2e, 2e+2f+1),\, (13)(24)) & \text{ if $0\leq f < e$,} \\ \noalign{\vskip 5pt} ((0, 2e, 2f+1, 2e+2f+1),\, (12)(34)) & \text{ if $ f \geq e$.} \end{cases} \]
\end{itemize} \end{proposition}
\begin{proof} Let $B\in{\mathcal H}_2(\frko)$ be a half-integral symmetric matrix associated to $(L, Q)$. First we consider the case $E/F$ is unramified. If $f=0$, then we have $B\perp -B\sim H\oplus H$, where $H$ is the hyperbolic plane $\begin{pmatrix} 0 & 1/2 \\ 1/2 & 0 \end{pmatrix}$. In fact, it is easy to see that $B\perp -B$ expresses $H$, and so $B\perp -B\sim H\perp K$ for some $K\in{\mathcal H}_2(\frko)$. Since $-\det (2K)\in \frko^{\times 2}$, we have $K\sim H$. This settles the case $f=0$ of (1). Next, we consider the case $0<f$. Let $\{1, \omega\}$ be a basis for $\frko_E$ as an $\frko$-module. Then, since $F$ is dyadic, we have $\mathrm{tr}(\omega)\in\frko^\times$. By multiplying $\omega$ by some unit, we may assume $\mathrm{tr}(\omega)=1$. By using this basis, the half-integral symmetric matrix associated to $(\frko_E, {\mathcal N})$ is of the form $\begin{pmatrix} 1 & 1/2 \\ 1/2 & u\end{pmatrix}$ for some $u\in \frko$. Since $\{1, \varpi^f\omega\}$ is a basis of $\frko_{E, f}$ over $\frko$, we may assume $B=\begin{pmatrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{pmatrix}$. If $0<f<2e$, we have \[ B\perp -B= \bdm {\begin{matrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{matrix}} {\begin{matrix} 0 \phantom{aaaa}& 0 \\ 0\phantom{aaaa} & 0 \end{matrix}} {\begin{matrix} 0 \phantom{aaa}& 0 \\ 0\phantom{aaa} & 0 \end{matrix}} {\begin{matrix} -1 & -\varpi^f/2 \\ -\varpi^f/2 & -u \varpi^{2f} \end{matrix}} \xrightarrow{\left(\begin{smallmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{smallmatrix}\right)} \bdm {\begin{matrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{matrix}} {\begin{matrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{matrix}} {\begin{matrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{matrix}} {\begin{matrix} 0 \phantom{aaa}& 0 \\ 0\phantom{aaa} & 0 \end{matrix}}. \] Here, $X\xrightarrow{A} Y$ means $Y=X[A]$. Since the last matrix is a reduced form of GK type $((0,f,f,2f),\, (14)(23))$, we have proved the case $0<f<2e$ of (1). Next, suppose $f\geq 2e$. Then we have \[ B=\begin{pmatrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{pmatrix} \xrightarrow{\left(\begin{smallmatrix} 1 & -\varpi^f/2 \\ 0 & 1\end{smallmatrix}\right)} \begin{pmatrix} 1 & 0 \\ 0 & v\varpi^{2f-2e} \end{pmatrix}. \] Here, $v=\varpi^{2e}(4u-1)/4\in\frko^\times$. Then we have \[ B\perp -B \sim \bdm {\begin{matrix} 1 & \phantom{-}0 \\ 0 & -1 \end{matrix}} {\begin{matrix} 0\phantom{aaaaaa} & 0 \\ 0\phantom{aaaaaa} & 0 \end{matrix}} {\begin{matrix} 0 & \phantom{-}0 \\ 0 & \phantom{-}0 \end{matrix}} {\begin{matrix} v\varpi^{2f-2e} & 0 \\ 0 & -v\varpi^{2f-2e} \end{matrix}} \xrightarrow{\left(\begin{smallmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{smallmatrix}\right)} \bdm {\begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix}} {\begin{matrix} 0\phantom{aaaaaa} & 0 \\ 0\phantom{aaaaaa} & 0 \end{matrix}} {\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}} {\begin{matrix} v\varpi^{2f-2e} & v\varpi^{2f-2e} \\ v\varpi^{2f-2e} & 0 \end{matrix}}. \] It is easy to check that the last matrix is a reduced form of GK type $((0, 2e, 2f-2e, 2f),\, (12)(34))$. Thus we have proved the last case of (1).
Suppose that $E/F$ is ramified and $d=2g\leq 2e$. In this case, $E$ is generated by an element $\varpi_E=\varpi^g (-1+\sqrt{\varepsilon})/2$, such that $\mathrm{ord}(\varepsilon-1)=2e-2g+1$. Then $\{1, \varpi^f\varpi_E\}$ is a basis of $\frko_{E, f}$. By using this basis, $B=\begin{pmatrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u\varpi^{2f+1} \end{pmatrix}$. Here, $u=\varpi^{2g-1}(1-\varepsilon)/4\in\frko^\times$. If $f<2e-g$, we have \begin{align*} B\perp -B \sim& \bdm {\begin{matrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 0 \phantom{aaaa}& 0 \\ 0\phantom{aaaa} & 0 \end{matrix}} {\begin{matrix} 0 \phantom{aaa}& 0 \\ 0\phantom{aaa} & 0 \end{matrix}} {\begin{matrix} -1 & -\varpi^{g+f}/2 \\ -\varpi^{g+f}/2 & -u \varpi^{2f+1} \end{matrix}} \\ \xrightarrow{\left(\begin{smallmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{smallmatrix}\right)} & \bdm {\begin{matrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 0 \phantom{aaaaa}& 0 \\ 0\phantom{aaaaa} & 0 \end{matrix}} \\ \xrightarrow{\left(\begin{smallmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \end{smallmatrix}\right)} & \bdm {\begin{matrix} \phantom{aaa}1 & \phantom{a}0 \\ \phantom{aaa}0 & \phantom{aa}-u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1\phantom{a} & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 0 \phantom{aaaaa}& 0 \\ 0\phantom{aaaaa} & 0 \end{matrix}}. \end{align*} The last matrix is a reduced form of GK type $((0, 2f+1, 2g-1, 2g+2f),\, (14)(23))$ if $0\leq f \leq g-1$, and a reduced form of GK type $((0, g+f, g+f, 2g+2f),\, (14)(23))$, if $g\leq f < 2e-g$. This proves the first and the second case of (2). Suppose that $f\geq 2e-g$. Then we have \[ B=\begin{pmatrix} 1 & \varpi^{g+f}/2 \\ \varpi^{g+f}/2 & u\varpi^{2f+1} \end{pmatrix} \xrightarrow{\left(\begin{smallmatrix} 1 & -\varpi^{g+f}/2 \\ 0 & 1\end{smallmatrix}\right)} \begin{pmatrix} 1 & 0 \\ 0 & v \varpi^{2g+2f-2e} \end{pmatrix}. \] Here, $v=-\varpi^{2e}\varepsilon/4\in\frko^\times$. In this case, by a similar calculation as before, we have \[ B\perp -B \sim \bdm {\begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix}} {\begin{matrix} 0\phantom{aaaamaaa} & 0 \\ 0\phantom{aaaamaaa} & 0 \end{matrix}} {\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}} {\begin{matrix} v\varpi^{2g+2f-2e} & v\varpi^{2g+2f-2e} \\ v\varpi^{2g+2f-2e} & 0 \end{matrix}}. \] This matrix is a reduced form of GK type $((0, 2e, 2g+2f-2e, 2g+2f),\, (12)(34))$, and this settles the last case of (2).
Finally, suppose that $E/F$ is ramified and $d=2e+1$. In this case, the quadratic extension $E/F$ is generated by $\varpi_E=\sqrt{-\varpi u}$ for some unit $u\in\frko^\times$. Then $\{1, \varpi^f\varpi_E\}$ is a $\frko$-basis of $\frko_{E, f}$. By using this basis, we may assume $B=\begin{pmatrix} 1 & 0 \\ 0 & u \varpi^{2f+1} \end{pmatrix}$. Then, by a similar calculation as before, we have \[ B\perp -B \sim \begin{cases} \bdm {\begin{matrix} 1 & 0 \\ 0 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1 & 0 \\ 0 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 1 & 0 \\ 0 & u \varpi^{2f+1} \end{matrix}} {\begin{matrix} 0 \phantom{am} & 0\phantom{am} \\ 0\phantom{am} & 0\phantom{am} \end{matrix}} & \text{ if $f<e$, } \\ \noalign{\vskip 5pt} \bdm {\begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix}} {\begin{matrix} 0\phantom{aamaa} & 0 \\ 0\phantom{aamaa} & 0 \end{matrix}} {\begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix}} {\begin{matrix} u\varpi^{2f+1} & u\varpi^{2f+1} \\ u\varpi^{2f+1} & 0 \end{matrix}} & \text{ if $f\geq e$. } \end{cases} \] If $0\leq f < e$, then the first matrix is a reduced form of GK type $((0, 2f+1, 2e, 2e+2f+1),\, (13)(24))$. If $ f \geq e$, then the second matrix is a reduced form of GK type $((0, 2e, 2f+1, 2e+2f+1),\, (12)(34))$. Hence we have proved the proposition. \end{proof}
By Theorem \ref{thm5.1} (Corollary 5.1 of \cite{IK1}), we obtain the following proposition. \begin{proposition} \label{prop:A.8} Suppose that $(L, Q)=(\frko_{E,f}, {\mathcal N})$. \begin{itemize} \item[(1)] Assume that $E$ is unramified. Then we have \[ \mathrm{GK}(L\oplus -L)= \begin{cases} (0,0,0,0) & \text{ if $f=0$,} \\ \noalign{\vskip 3pt} (0,f,f,2f) & \text{ if $0 < f < 2e$,} \\ \noalign{\vskip 2pt} (0, 2e, 2f-2e, 2f) & \text{ if $f\geq 2e$.} \end{cases} \] \item[(2)] Assume that $E$ is ramified and that $d=2g\leq 2e$. Then we have \[ \mathrm{GK}(L\oplus -L)= \begin{cases} (0, 2f+1, 2g-1, 2g+2f) & \text{ if $0\leq f \leq g-1$,} \\ \noalign{\vskip 5pt} (0, g+f, g+f, 2g+2f) & \text{ if $g\leq f < 2e-g$,} \\ \noalign{\vskip 5pt} (0, 2e, 2g+2f-2e, 2g+2f) & \text{ if $ f \geq 2e-g$.} \end{cases} \] \item[(3)] Assume that $E$ is ramified and that $d=2e+1$. Then we have \[ \mathrm{GK}(L\oplus -L)= \begin{cases} (0, 2f+1, 2e, 2e+2f+1) & \text{ if $0\leq f < e$,} \\ \noalign{\vskip 5pt} (0, 2e, 2f+1, 2e+2f+1) & \text{ if $ f \geq e$.} \end{cases} \]
\end{itemize} \end{proposition}
We shall give a Jordan splitting for $L=(\frko_{E,f}, {\mathcal N})$. Let $L=\bigoplus L_i$ be a Jordan splitting such that $L_i$ is $i$-modular.
Put $\mathrm{Jor}(L)=\{i\in{\mathbb Z}\,|\, L_i \text{ is nonzero.}\}$. \begin{lemma} \label{lem:A10} \begin{itemize} \item[(1)] Suppose that $E/F$ is unramified. If $f<e$, then $\mathrm{Jor}(L)=\{f-e\}$ and $L$ is an indecomposable $(f-e)$-modular lattice. If $f\geq e$, then $\mathrm{Jor}(L)=\{0, 2f-2e\}$ and $L\sim (1)\perp (u\varpi^{2f-2e})$, with $u\in\frko^\times$. \item[(2)] Suppose that $E/F$ is ramified and $d=2g\leq 2e$. If $f<e-g$, then $\mathrm{Jor}(L)=\{f+g-e\}$ and $L$ is an indecomposable $(f+g-e)$-modular lattice. If $f\geq e-g$, then $\mathrm{Jor}(L)=\{0, 2f+2g-2e\}$ and $L\sim (1)\perp (u\varpi^{2f+2g-2e})$, with $u\in\frko^\times$. \item[(3)] Suppose that $E/F$ is ramified and $d=2e+1$. In this case, $\mathrm{Jor}(L)=\{0, 2f+1\}$ and $L\sim (1)\perp (u\varpi^{2f+1})$, with $u\in\frko^\times$. \end{itemize} \end{lemma} \begin{proof} Suppose that $E/F$ is unramified. As we have seen in the proof of Proposition \ref{prop:A7}, $L$ is expressed by $B=\begin{pmatrix} 1 & \varpi^f/2 \\ \varpi^f/2 & u \varpi^{2f} \end{pmatrix}$ for some $u\in\frko^\times$. If $f<e$, then $B$ is indecomposable by Lemma 2.1 of \cite{IK1}. In this case, it is easy to see $\varpi^{e-f}B$ is unimodular. If $f\geq e$, then we have $B\sim (1) \perp ((-1+u\varpi^{2e}) \varpi^{2f-2e})$. This proves (1). The other cases can be proved similarly. \end{proof}
We shall calculate the Gross-Keating invariant $\mathrm{GK}(L\cap\varpi^i L^\sharp)$ for $(L\cap\varpi^i L^\sharp, \varpi^{-i}Q)$ for each $i\in \mathrm{Jor}(L)$. Recall that the Gross-Keating invariant $(a_1, a_2)$ of a binary form $(L', Q')$ is determined by \begin{align*} a_1&=\mathrm{ord}(\bfn(L')), \\ a_1+a_2&= \begin{cases} \mathrm{ord}(4\det Q') & \text{ if $\mathrm{ord}(\frkD_{Q'})=0$,} \\ \mathrm{ord}(4\det Q')-\mathrm{ord}(\frkD_{Q'})+1 & \text{ if $\mathrm{ord}(\frkD_{Q'})>0$.} \end{cases} \end{align*} Here, $\frkD_{Q'}$ is the discriminant of $F(\sqrt{-\det Q'})/F$. These formulas follow from Proposition \ref{prop:6.2}, since a binary quadratic form is isomorphic to some $(\frko_{E,f}, {\mathcal N})$ up to multiplication by a unit (\cite{IK1}, Proposition 2.1). In terms of $B=\begin{pmatrix} b_{11} & b_{12} \\ b_{12} & b_{22}\end{pmatrix}\in {\mathcal H}_2(\frko)$, the Gross-Keating invariant $(a_1, a_2)$ of $B$ is given by \begin{align*} a_1&=\min\{\mathrm{ord}(b_{11}), \mathrm{ord}(2b_{12}), \mathrm{ord}(b_{22})\}, \\ a_1+a_2&= \begin{cases} \mathrm{ord}(4\det B) & \text{ if $\mathrm{ord}(\frkD_{B})=0$,} \\ \mathrm{ord}(4\det B)-\mathrm{ord}(\frkD_{B})+1 & \text{ if $\mathrm{ord}(\frkD_{B})>0$.} \end{cases} \end{align*} Note also that $\mathrm{GK}(\varpi^i B)=(a_1+i, a_2+i)$.
\begin{proposition} \label{prop:A.9} Suppose that $(L, Q)=(\frko_{E,f}, {\mathcal N})$ and $i\in \mathrm{Jor}(L)$. \begin{itemize} \item[(1)] Assume that $E$ is unramified. Then we have \[ \mathrm{GK}(L\cap \varpi^i L^\sharp)= \begin{cases} (e-f, e+f) & \text{ if $f< e$,} \\ (0, 2f) & \text{ if $f\geq e$.} \end{cases} \] \item[(2)] Assume that $E$ is ramified and that $d=2g\leq 2e$. Then, we have \[ \mathrm{GK}(L\cap \varpi^i L^\sharp)= \begin{cases} (e-g-f, e-g+f+1)
& \text{ if $f<e-g$,} \\ (0, 2f+1) & \text{ if $f\geq e-g$,} \end{cases} \] \item[(3)] Assume that $E$ is ramified and that $d=2e+1$. Then we have \[ \mathrm{GK}(L\cap \varpi^i L^\sharp)= (0, 2f+1). \] \end{itemize} \end{proposition}
\begin{proof} Suppose that $L$ is $i$-modular. In this case, $L\cap \varpi^i L^\sharp=L$. Then $\mathrm{GK}(L\cap \varpi^i L^\sharp)=(a_1-i, a_2-i)$, where $(a_1, a_2)=\mathrm{GK}(L)$. (Remember that the quadratic form for $L\cap\varpi^i L^\sharp$ is multiplied by $\varpi^{-i}$.)
Suppose that $L\sim (1)\perp (u\varpi^k)$. In this case, $\mathrm{Jor}(L)=\{0, k\}$ and $(L\cap \varpi^i L^\sharp, \varpi^{-i} Q)$ is expressed by $(1)\perp (u\varpi^k)$ or $(u)\perp (\varpi^k)$, according as $i=0$ or $i=k$. In either case, $(L\cap \varpi^i L^\sharp, \varpi^{-i} Q)$ is weakly equivalent to $(L, Q)$. Hence the proposition. \end{proof}
For $B\in {\mathcal H}_n(\frko)$, we define $\mathrm{EGK}(B)^{\leq 1}$ as in the main part of this paper. This is defined as follows. Let $\mathrm{GK}(B)=(\underbrace{0, \ldots, 0}_{m_0}, \underbrace{1, \ldots, 1}_{m_1}, a_{m_0+m_1+1}, \ldots, a_n)$, where $a_{m_0+m_1+1}>1$. If $B$ is equivalent to a reduced form \[ \begin{array}{ccccc} &\hskip -32pt \overbrace{\hphantom{B_{11}}}^{m_0} \; \overbrace{\hphantom{ B_{12}}}^{m_1} \; \overbrace{\hphantom{ B_{15}}}^{n-m_0-m_1} \\ & B'=\left( \begin{array}{cccl} B_{00} & B_{01} & B_{02} \\ {}^t\! B_{01} & B_{11} & B_{12} \\ {}^t\! B_{02} & {}^t\! B_{12} & B_{22} \\ \end{array} \hskip -0pt \right) \hskip -5pt \begin{array}{l}
\left.\vphantom{B_{11}} \right\} \text{\footnotesize${m_0}$} \\
\left.\vphantom{B_{11}} \right\} \text{\footnotesize${m_1}$} \\
\left.\vphantom{B_{11}} \right\} \text{\footnotesize${n-m_0-m_1}$,} \end{array} \end{array} \] then $\mathrm{EGK}(B)^{\leq 1}=\mathrm{EGK}\left(\begin{pmatrix} B_{00} & B_{01} \\ {}^t\! B_{01} & B_{11}\end{pmatrix}\right)$. This definition does not depend on the choice of the reduced form $B'$. If $B$ is associated to a quadratic lattice $M$, we write $\mathrm{EGK}(M)^{\leq 1}$ for $\mathrm{EGK}(B)^{\leq 1}$.
The next proposition follows from Proposition \ref{prop:A.9}. \begin{proposition} \label{prop:A.11} Suppose that $(L, Q)=(\frko_{E,f}, {\mathcal N})$ and $i\in \mathrm{Jor}(L)$. \begin{itemize} \item[(1)] Assume that $E$ is unramified. Then we have \[ \mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}= \begin{cases} \emptyset & \text{ if $f<e-1$,} \\ (2;1;\xi) & \text{ if $f=0$, $e=1$, } \\ (1;1;1) & \text{ if $f=e-1$, $e>1$, } \\ (1;0;1) & \text{ if $f\geq e$.} \end{cases} \] \item[(2)] Assume that $E$ is ramified and that $d=2g\leq 2e$. Then, we have \[ \mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}= \begin{cases} \emptyset & \text{ if $f<e-g-1$,} \\ (1;1;1)
& \text{ if $f=e-g-1$,} \\ (1;0;1) & \text{ if $f\geq e-g$, \, $g<e$,} \\ (1;0;1) & \text{ if $f>0$, \, $g=e$,} \\ (1,1;0,1;1,0) & \text{ if $f=0$, \, $g=e$.} \end{cases} \] \item[(3)] Assume that $E$ is ramified and that $d=2e+1$. Then we have \[ \mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}= \begin{cases} (1,1;0,1;1,0) & \text{ if $f=0$,} \\ (1;0;1) & \text{ if $f>0$.} \end{cases} \] \end{itemize} \end{proposition}
We shall show that there exist two binary quadratic lattices $L$ and $L'$, which satisfy the following conditions (1), (2), and (3). \begin{itemize} \item[(1)] $\mathrm{GK}(L\perp -L)=\mathrm{GK}(L'\perp -L')$. \item[(2)] $\mathrm{Jor}(L)=\mathrm{Jor}(L')$ and $\mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}=\mathrm{EGK}(L'\cap \varpi^i L'^\sharp)^{\leq 1}$ for each $i\in \mathrm{Jor}(L)$. \item[(3)] $\beta(L)\neq \beta(L')$. \end{itemize} \begin{example} \label{ex:A1} Suppose that $e=5$. Suppose also that $E/F$ is a ramified quadratic extension with $d=2$ and $E'/F$ is a ramified quadratic extension with $d=4$. Put $L=\frko_{E, 2}$ and $L'=\frko_{E', 1}$. Then we have \[ \mathrm{GK}(L\perp -L)=\mathrm{GK}(L'\perp -L')=(0,3,3,6) \] by Proposition \ref{prop:A.8}. Note that $\mathrm{Jor}(L)=\mathrm{Jor}(L')=\{-2 \}$ and \[ \mathrm{EGK}(L\cap \varpi^{i} L^\sharp)^{\leq 1}=\mathrm{EGK}(L'\cap \varpi^{i} L'^\sharp)^{\leq 1}=\emptyset \qquad (i\in \mathrm{Jor}(L)) \] by Proposition \ref{prop:A.11}. But we have \[ \beta(L)=q^7, \qquad \beta(L')=2q^7 \] by Proposition \ref{prop:A6}. Thus $\mathrm{GK}(L\perp -L)$ and $\mathrm{EGK}(L\cap \varpi^i L^\sharp)^{\leq 1}$ are not enough to determine $\beta(L)$ in the case $e>1$. \end{example}
\end{document} | arXiv |
This question is about estimating cut-off scores on a multi-dimensional screening questionnaire to predict a binary endpoint, in the presence of correlated scales.
I was asked about the value of controlling for associated subscores when devising cut-off scores on each dimension of a measurement scale (personality traits) which might be used for alcoholism screening. That is, in this particular case, the person was not interested in adjusting for external covariates (predictors) -- which leads to the (partial) area under the covariate-adjusted ROC curve, e.g. (1-2) -- but essentially for other scores from the same questionnaire, because they correlate with one another (e.g. "impulsivity" with "sensation seeking"). It amounts to building a GLM with the outcome (say, drinking status) on the left-hand side and, on the right-hand side, the score of interest (for which we seek a cut-off) together with another score computed from the same questionnaire.
To clarify (per @robin's request), suppose we have $j=1,\dots,4$ scores $x_j$ (e.g., anxiety, impulsivity, neuroticism, sensation seeking), and we want to find a cut-off value $t_j$ (i.e. "positive case" if $x_j>t_j$, "negative case" otherwise) for each of them. We usually adjust for other risk factors like gender or age when devising such a cut-off (using ROC curve analysis). Now, what about adjusting impulsivity (IMP) for gender, age, and sensation seeking (SS), since SS is known to correlate with IMP? In other words, we would have a cut-off value for IMP from which the effects of age, gender and SS have been removed.
Do you have a more thorough understanding of this particular situation, with link to relevant papers when possible?
Janes, H and Pepe, MS (2008). Adjusting for Covariates in Studies of Diagnostic, Screening, or Prognostic Markers: An Old Concept in a New Setting. American Journal of Epidemiology, 168(1): 89-97.
Janes, H and Pepe, MS (2008). Accommodating Covariates in ROC Analysis. UW Biostatistics Working Paper Series, Paper 322.
The way that you've envisioned the analysis is really not the way I would suggest you start out thinking about it. First of all, it is easy to show that if cutoffs must be used, they are applied not to individual features but to the overall predicted probability. The optimal cutoff for a single covariate depends on the levels of all the other covariates; it cannot be constant. Secondly, ROC curves play no role in meeting the goal of making optimum decisions for an individual subject.
To handle correlated scales there are many data reduction techniques that can help. One of them is a formal redundancy analysis where each predictor is nonlinearly predicted from all the other predictors, in turn. This is implemented in the redun function in the R Hmisc package. Variable clustering, principal component analysis, and factor analysis are other possibilities. But the main part of the analysis, in my view, should be building a good probability model (e.g., binary logistic model).
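To make the first point concrete, here is a minimal sketch (Python with scikit-learn; the data are simulated and every variable name is made up): the cutoff, if one is truly needed, is applied to the fitted probability from a model that uses the correlated scores jointly, not to each subscale separately.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
imp = rng.normal(size=n)                      # impulsivity score
ss = 0.6 * imp + 0.8 * rng.normal(size=n)     # sensation seeking, correlated with imp
age = rng.uniform(18, 60, size=n)
lin = -1 + 0.8 * imp + 0.5 * ss - 0.02 * age
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))   # simulated drinking status (0/1)

X = np.column_stack([imp, ss, age])
risk = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

# If a decision rule is unavoidable, it is a single cutoff on 'risk',
# chosen from the relative costs of the two kinds of error -- not
# separate cutoffs on 'imp' and 'ss'.
flag = risk >= 0.30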
The point of the Janes, Pepe article on covariate adjusted ROC curves is to allow a more flexible interpretation of the estimated ROC curve values. This is a method of stratifying ROC curves among specific groups in the population of interest. The estimated true positive fraction (TPF; equivalently, sensitivity) and true negative fraction (TNF; equivalently, specificity) are interpreted as "the probability of a correct screening outcome given the disease status is Y/N among individuals of the same [adjusted variable list]". At a glance, it sounds like what you're trying to do is improve your diagnostic test by incorporating more markers into your panel.
A good background to understand these methods a little better would be to read about the Cox proportional hazards model and to look at Pepe's book on "The Statistical Evaluation of Medical Tests for Classification and ...". You'll notice screening reliability measures share many similar properties with a survival curve, thinking of the fitted score as a survival time. Just as the Cox model allows for stratification of the survival curve, they propose giving stratified reliability measures.
The reason this matters to us might be justified in the context of a binary mixed effects model: suppose you're interested in predicting the risk of becoming a meth addict. SES has such an obvious dominating effect on this that it seems foolish to evaluate a diagnostic test, which might be based on personal behaviors, without somehow stratifying. This is because [just roll with this], even if a rich person showed manic and depressive symptoms, they'll probably never try meth. However, a poor person would show a much larger increased risk having such psychological symptoms (and higher risk score). The crude analysis of risk would show very poor performance of your predictive model because the same differences in two groups were not reliable. However, if you stratified (rich versus poor), you could have 100% sensitivity and specificity for the same diagnostic marker.
The point of covariate adjustment is to evaluate the test within groups that can be considered homogeneous, accounting for differences in prevalence and for interactions in the risk model between distinct strata.
Based only on these sensitivity and specificity values, what is the best decision method? | CommonCrawl |
Journal of Computational Dynamics
June 2014, Volume 1, Issue 2
Necessary and sufficient condition for the global stability of a delayed discrete-time single neuron model
Ferenc A. Bartha and Ábel Garab
2014, 1(2): 213-232 doi: 10.3934/jcd.2014.1.213
We consider the global asymptotic stability of the trivial fixed point of the difference equation $x_{n+1}=m x_n-\alpha \varphi(x_{n-1})$, where $(\alpha,m) \in \mathbb{R}^2$ and $\varphi$ is a real function satisfying the discrete Yorke condition: $\min\{0,x\} \leq \varphi(x) \leq \max\{0,x\}$ for all $x\in \mathbb{R}$. If $\varphi$ is bounded then $(\alpha,m) \in [|m|-1,1] \times [-1,1]$, $(\alpha,m) \neq (0,-1), (0,1)$ is necessary for the global stability of $0$. We prove that if $\varphi(x) \equiv \tanh(x)$, then this condition is sufficient as well.
Ferenc A. Bartha, Ábel Garab. Necessary and sufficient condition for the global stability of a delayed discrete-time single neuron model. Journal of Computational Dynamics, 2014, 1(2): 213-232. doi: 10.3934/jcd.2014.1.213.
Reconstructing functions from random samples
Steve Ferry, Konstantin Mischaikow and Vidit Nanda
From a sufficiently large point sample lying on a compact Riemannian submanifold of Euclidean space, one can construct a simplicial complex which is homotopy-equivalent to that manifold with high confidence. We describe a corresponding result for a Lipschitz-continuous function between two such manifolds. That is, we outline the construction of a simplicial map which recovers the induced maps on homotopy and homology groups with high confidence using only finite sampled data from the domain and range, as well as knowledge of the image of every point sampled from the domain. We provide explicit bounds on the size of the point samples required for such reconstruction in terms of intrinsic properties of the domain, the co-domain and the function. This reconstruction is robust to certain types of bounded sampling and evaluation noise.
Steve Ferry, Konstantin Mischaikow, Vidit Nanda. Reconstructing functions from random samples. Journal of Computational Dynamics, 2014, 1(2): 233-248. doi: 10.3934/jcd.2014.1.233.
Detecting isolated spectrum of transfer and Koopman operators with Fourier analytic tools
Gary Froyland, Cecilia González-Tokman and Anthony Quas
2014, 1(2): 249-278 doi: 10.3934/jcd.2014.1.249
The isolated spectrum of transfer operators is known to play a critical role in determining mixing properties of piecewise smooth dynamical systems. The so-called Dellnitz-Froyland ansatz places isolated eigenvalues in correspondence with structures in phase space that decay at rates slower than local expansion can account for. Numerical approximations of transfer operator spectrum are often insufficient to distinguish isolated spectral points, so it is an open problem to decide to which eigenvectors the ansatz applies. We propose a new numerical technique to identify the isolated spectrum and large-scale structures alluded to in the ansatz. This harmonic analytic approach relies on new stability properties of the Ulam scheme for both transfer and Koopman operators, which are also established here. We demonstrate the efficacy of this scheme in metastable one- and two-dimensional dynamical systems, including those with both expanding and contracting dynamics, and explain how the leading eigenfunctions govern the dynamics for both real and complex isolated eigenvalues.
Gary Froyland, Cecilia González-Tokman, Anthony Quas. Detecting isolated spectrum of transfer and Koopman operators with Fourier analytic tools. Journal of Computational Dynamics, 2014, 1(2): 249-278. doi: 10.3934/jcd.2014.1.249.
Optimal control of multiscale systems using reduced-order models
Carsten Hartmann, Juan C. Latorre, Wei Zhang and Grigorios A. Pavliotis
We study optimal control of diffusions with slow and fast variables and address a question raised by practitioners: is it possible to first eliminate the fast variables before solving the optimal control problem and then use the optimal control computed from the reduced-order model to control the original, high-dimensional system? The strategy ``first reduce, then optimize''---rather than ``first optimize, then reduce''---is motivated by the fact that solving optimal control problems for high-dimensional multiscale systems is numerically challenging and often computationally prohibitive. We state sufficient and necessary conditions, under which the ``first reduce, then control'' strategy can be employed and discuss when it should be avoided. We further give numerical examples that illustrate the ``first reduce, then optimize'' approach and discuss possible pitfalls.
Carsten Hartmann, Juan C. Latorre, Wei Zhang, Grigorios A. Pavliotis. Optimal control of multiscale systems using reduced-order models. Journal of Computational Dynamics, 2014, 1(2): 279-306. doi: 10.3934/jcd.2014.1.279.
Lattice structures for attractors I
William D. Kalies, Konstantin Mischaikow and Robert C.A.M. Vandervorst
We describe the basic lattice structures of attractors and repellers in dynamical systems. The structure of distributive lattices allows for an algebraic treatment of gradient-like dynamics in general dynamical systems, both invertible and noninvertible. We separate those properties which rely solely on algebraic structures from those that require some topological arguments, in order to lay a foundation for the development of algorithms to manipulate these structures computationally.
William D. Kalies, Konstantin Mischaikow, Robert C.A.M. Vandervorst. Lattice structures for attractors I. Journal of Computational Dynamics, 2014, 1(2): 307-338. doi: 10.3934/jcd.2014.1.307.
Optimizing the stable behavior of parameter-dependent dynamical systems --- maximal domains of attraction, minimal absorption times
Péter Koltai and Alexander Volf
We propose a method for approximating solutions to optimization problems involving the global stability properties of parameter-dependent continuous-time autonomous dynamical systems. The method relies on an approximation of the infinite-state deterministic system by a finite-state non-deterministic one --- a Markov jump process. The key properties of the method are that it does not use any trajectory simulation, and that the parameters and objective function are in a simple (and except for a system of linear equations) explicit relationship.
Péter Koltai, Alexander Volf. Optimizing the stable behavior of parameter-dependent dynamical systems --- maximal domains of attraction, minimal absorption times. Journal of Computational Dynamics, 2014, 1(2): 339-356. doi: 10.3934/jcd.2014.1.339.
Polynomial chaos based uncertainty quantification in Hamiltonian, multi-time scale, and chaotic systems
José Miguel Pasini and Tuhin Sahai
Polynomial chaos is a powerful technique for propagating uncertainty through ordinary and partial differential equations. Random variables are expanded in terms of orthogonal polynomials and differential equations are derived for the expansion coefficients. Here we study the structure and dynamics of these differential equations when the original system has Hamiltonian structure, multiple time scales, or chaotic dynamics. In particular, we prove that the differential equations for the coefficients in generalized polynomial chaos expansions of Hamiltonian systems retain the Hamiltonian structure relative to the ensemble average Hamiltonian. We connect this with the volume-preserving property of Hamiltonian flows to show that, for an oscillator with uncertain frequency, a finite expansion must fail at long times, regardless of truncation order. Also, using a two-time scale forced nonlinear oscillator, we show that a polynomial chaos expansion of the time-averaged equations captures uncertainty in the slow evolution of the Poincaré section of the system and that, as the scale separation increases, the computational advantage of this procedure increases. Finally, using the forced Duffing oscillator as an example, we demonstrate that when the original dynamical system displays chaotic dynamics, the resulting dynamical system from polynomial chaos also displays chaotic dynamics, limiting its applicability.
José Miguel Pasini, Tuhin Sahai. Polynomial chaos based uncertainty quantification in Hamiltonian, multi-time scale, and chaotic systems. Journal of Computational Dynamics, 2014, 1(2): 357-375. doi: 10.3934/jcd.2014.1.357.
Equation-free computation of coarse-grained center manifolds of microscopic simulators
Constantinos Siettos
An algorithm, based on the Equation-free concept, for the approximation of coarse-grained center manifolds of microscopic simulators is addressed. It is assumed that the macroscopic equations describing the emergent dynamics are not available in a closed form. Appropriately initialized short runs of the microscopic simulators, which are treated as black box input-output maps provide a polynomial estimate of a local coarse-grained center manifold; the coefficients of the polynomial are obtained by wrapping around the microscopic simulator an optimization algorithm. The proposed method is demonstrated through kinetic Monte Carlo simulations, of simple reactions taking place on catalytic surfaces, exhibiting coarse-grained turning points and Andronov-Hopf bifurcations.
Constantinos Siettos. Equation-free computation of coarse-grained center manifolds of microscopic simulators. Journal of Computational Dynamics, 2014, 1(2): 377-389. doi: 10.3934/jcd.2014.1.377.
On dynamic mode decomposition: Theory and applications
Jonathan H. Tu, Clarence W. Rowley, Dirk M. Luchtenburg, Steven L. Brunton and J. Nathan Kutz
2014, 1(2): 391-421 doi: 10.3934/jcd.2014.1.391
Originally introduced in the fluid mechanics community, dynamic mode decomposition (DMD) has emerged as a powerful tool for analyzing the dynamics of nonlinear systems. However, existing DMD theory deals primarily with sequential time series for which the measurement dimension is much larger than the number of measurements taken. We present a theoretical framework in which we define DMD as the eigendecomposition of an approximating linear operator. This generalizes DMD to a larger class of datasets, including nonsequential time series. We demonstrate the utility of this approach by presenting novel sampling strategies that increase computational efficiency and mitigate the effects of noise, respectively. We also introduce the concept of linear consistency, which helps explain the potential pitfalls of applying DMD to rank-deficient datasets, illustrating with examples. Such computations are not considered in the existing literature but can be understood using our more general framework. In addition, we show that our theory strengthens the connections between DMD and Koopman operator theory. It also establishes connections between DMD and other techniques, including the eigensystem realization algorithm (ERA), a system identification method, and linear inverse modeling (LIM), a method from climate science. We show that under certain conditions, DMD is equivalent to LIM.
Jonathan H. Tu, Clarence W. Rowley, Dirk M. Luchtenburg, Steven L. Brunton, J. Nathan Kutz. On dynamic mode decomposition: Theory and applications. Journal of Computational Dynamics, 2014, 1(2): 391-421. doi: 10.3934/jcd.2014.1.391.
2020 CiteScore: 1 | CommonCrawl |
\begin{document}
\title{Irrational Exuberance: Correcting Bias in Probability Estimates} \author{Gareth M. James$^1$, Peter Radchenko$^2$ and Bradley Rava$^{1,3}$} \date{}
\footnotetext[1]{Department of Data Sciences and Operations, University of Southern California.} \footnotetext[2]{University of Sydney.} \footnotetext[3]{Research is generously supported by the NSF GRFP in Mathematical Statistics.}
\maketitle
\begin{abstract}
We consider the common setting where one observes probability estimates for a large number of events, such as default risks for numerous bonds. Unfortunately, even with unbiased estimates, selecting events corresponding to the most extreme probabilities can result in systematically underestimating the true level of uncertainty. We develop an empirical Bayes approach, ``Excess Certainty Adjusted Probabilities'' (ECAP), using a variant of Tweedie's formula, which updates probability estimates to correct for selection bias. ECAP is a flexible non-parametric method, which directly estimates the score function associated with the probability estimates, so it does not need to make any restrictive assumptions about the prior on the true probabilities. ECAP also works well in settings where the probability estimates are biased. We demonstrate through theoretical results, simulations, and an analysis of two real world data sets, that ECAP can provide significant improvements over the original probability estimates. \end{abstract}
\textbf{Keywords:\/} Empirical Bayes; selection bias; excess certainty; Tweedie's formula.
\section{Introduction}
We are increasingly facing a world where automated algorithms are used to generate probabilities, often in real time, for thousands of different events. Just a small handful of examples include finance where rating agencies provide default probabilities on thousands of different risky assets \citep{Keal03, Hull05}; sporting events where each season ESPN and other sites estimate win probabilities for all the games occurring in a given sport \citep{LEUNG2014710}; politics where pundits estimate the probabilities of candidates winning in congressional and state races during a given election season \citep{Silver18, Soumbatiants2006}; or medicine where researchers estimate the survival probabilities of patients undergoing a given medical procedure \citep{Poses97, Smeenk07}. Moreover, with the increasing availability of enormous quantities of data, there are more and more automated probability estimates being generated and consumed by the general public.
Many of these probabilities have significant real world implications. For example, the rating given to a company's bonds will impact their cost of borrowing, or the estimated risk of a medical procedure will affect the patient's likelihood of undertaking the operation. This leads us to question the accuracy of these probability estimates. Let $p_i$ and $\tilde p_i$ respectively represent the true and estimated probability of $A_i$ occurring for a series of events $A_1,\ldots, A_n$. Then, we often seek an unbiased estimator such that $E(\tilde p_i|p_i)=p_i,$
so $\tilde p_i$ is neither systematically too high nor too low. Of course, there are many recent examples where this unbiasedness assumption has not held. For example, prior to the financial crisis of 2008 rating agencies systematically underestimated the risk of default for mortgage-backed securities, so $E(\tilde p_i|p_i)<p_i$. Similarly, in the lead-up to the 2016 US presidential election political pundits significantly underestimated the uncertainty in which candidate would win.
However, even when unbiasedness does hold, using $\tilde p_i$ as an estimate for $p_i$ can cause significant problems. Consider, for example, a conservative investor who only purchases bonds with extremely low default risk. When presented with $n$ estimated bond default probabilities $\tilde p_1,\ldots, \tilde p_n$ from a rating agency, she only invests when $\tilde p_i=0.001$. Let us suppose that the rating agency has done a careful risk assessment, so their probability estimates are unbiased for all $n$ bonds. What then is the fraction of the investor's bonds which will actually default? Given that the estimates are unbiased, one might imagine (and the investor is certainly hoping) that the rate would be close to $0.001$. Unfortunately, the true default rate may be much higher.
\begin{figure}
\caption{\small Left: Simulated $p_i$ and associated $\tilde p_i$. The probability estimates are unbiased. Center: The average value of $p_i$, as a function of $\tilde p_i$ i.e. $E(p_i|\tilde p_i)$ (orange line) is systematically higher than $\tilde p_i$ (dashed line). Right: The ratio of $E(p_i|\tilde p_i)$ relative to $\tilde p_i$, as a function of $\tilde p_i$. An ideal ratio would be one (dashed line).}
\label{intro.plot}
\end{figure}
Figure~\ref{intro.plot} provides an illustration. We first generated a large number of probabilities $p_i$ from a uniform distribution and then produced corresponding $\tilde p_i$ in such a way that $E(\tilde p_i|p_i)=p_i$ for $i=1,\ldots, n$. In the left panel of Figure~\ref{intro.plot} we plotted a random sample of $100$ of these probabilities, concentrating on values less than 10\%. While there is some variability in the estimates, there is no evidence of bias in $\tilde p_i$. In the middle panel we used the simulated data to compute the average value of $p_i$ for any given value of $\tilde p_i$, i.e. $E(p_i|\tilde p_i)$. A curious effect is observed. At every point the average value of $p_i$ (orange line) is systematically higher than $\tilde p_i$ (dashed line), i.e. $E(p_i|\tilde p_i)>\tilde p_i$. Finally, in the right panel we have plotted the ratio of $E(p_i|\tilde p_i)$ to $\tilde p_i$. Ideally this ratio should be approximately one, which would, for example, correspond to the true risk of a set of bonds equalling the estimated risk. However, for small values of $\tilde p_i$ we observe ratios far higher than one. So, for example, our investor who only purchases bonds with an estimated default risk of $\tilde p_i=0.001$ will in fact find that $0.004$ of her bonds end up defaulting, four times the risk level she intended to take!
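The phenomenon in Figure~\ref{intro.plot} is easy to reproduce. The short Python sketch below is purely illustrative and is not necessarily the code used to generate the figure; for concreteness it draws the unbiased estimates from the beta sampling model introduced later in Section~\ref{method.sec}, with a hypothetical value of the noise parameter.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.005                                   # hypothetical noise level
p = rng.uniform(0, 1, 1_000_000)                # true probabilities
p_tilde = rng.beta(p / gamma, (1 - p) / gamma)  # unbiased: E[p_tilde | p] = p

# events whose *estimated* probability is roughly 0.001
sel = (p_tilde > 0.0005) & (p_tilde < 0.0015)
print(p_tilde[sel].mean())   # close to 0.001 by construction
print(p[sel].mean())         # noticeably larger: selection bias
\end{verbatim}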
These somewhat surprising results are not a consequence of this particular simulation setting. It is in fact an instance of selection bias, a well known issue which occurs when the selection of observations is made in such a way, e.g. selecting the most extreme observations, that they can no longer be considered random samples from the underlying population. If this bias is not taken into account then any future analyses will provide a distorted estimate of the population. Consider the setting where we observe $X_1,\ldots, X_n$ with $E(X_i)=\mu_i$ and wish to estimate $\mu_i$ based on an observed $X_i$. Then it is well known that the conditional expectation $E(\mu_i|X_i)$ corrects for any selection bias associated with choosing $X_i$ in a non-random fashion \citep{efron2011}. Numerous approaches have been suggested to address selection bias, with most methods imposing some form of shrinkage to either explicitly, or implicitly, estimate $E(\mu_i|X_i)$. Among linear shrinkage methods, the James-Stein estimator \citep{james1961} is the most well known, although many others exist \citep{efron1975, ikeda2016}. There are also other popular classes of methods, including: non-linear approaches utilizing sparse priors \citep{donoho1994, abramovich2006, bickel2008, ledoit2012}, Bayesian estimators \citep{gelman2012} and empirical Bayes methods \citep{jiang2009, brown2009nonparametric, petrone2014}.
For Gaussian data, Tweedie's formula \citep{Rob1956} provides an elegant empirical Bayes estimate for $E(\mu_i|X_i)$, using only the marginal distribution of $X_i$. While less well known than the James-Stein estimator, it has been shown to be an effective non-parametric approach for addressing selection bias \citep{efron2011}. The approach can be automatically adjusted to lean more heavily on parametric assumptions when little data is available, but in settings such as ours, where large quantities of data have been observed, it provides a highly flexible non-parametric shrinkage method \citep{benjamini2005, henderson2015}.
However, the standard implementation of Tweedie's formula assumes that, conditional on $\mu_i$, the observed data follow a Gaussian distribution. Most shrinkage methods make similar distributional assumptions or else model the data as unbounded, which makes little sense for probabilities. What then would be a better estimator for low probability events? In this paper we propose an empirical Bayes approach, called ``Excess Certainty Adjusted Probability" (ECAP), specifically designed for probability estimation in settings with a large number of observations. ECAP uses a variant of Tweedie's formula which models $\tilde p_i$ as coming from a beta distribution, automatically ensuring the estimate is bounded between $0$ and $1$. We provide theoretical and empirical evidence demonstrating that the ECAP estimate is generally significantly more accurate than $\tilde p_i$.
This paper makes three key contributions. First, we convincingly demonstrate that even an unbiased estimator $\tilde p_i$ can provide a systematically sub-optimal estimate for $p_i$, especially in situations where large numbers of probability estimates have been generated. This leads us to develop the oracle estimator for $p_i$, which results in a substantial improvement in expected loss. Second, we introduce the ECAP method which estimates the oracle. ECAP does not need to make any assumptions about the distribution of $p_i$. Instead, it relies on estimating the marginal distribution of $\tilde p_i$, a relatively easy problem in the increasingly common situation where we observe a large number of probability estimates. Finally, we extend ECAP to the biased data setting where $\tilde p_i$ represents a biased observation of $p_i$ and show that even in this setting we are able to recover systematically superior estimates of $p_i$.
The paper is structured as follows. In Section~\ref{method.sec} we first formulate a model for $\tilde p_i$ and a loss function for estimating $p_i$. We then provide a closed form expression for the corresponding oracle estimator and its associated reduction in expected loss.
We conclude Section~\ref{method.sec} by proposing the ECAP estimator for the oracle and deriving its theoretical properties. Section~\ref{extension.sec} provides two extensions. First, we propose a bias corrected version of ECAP, which can detect situations where $\tilde p_i$ is a biased estimator for $p_i$ and automatically adjust for the bias. Second, we generalize the ECAP model from Section~\ref{method.sec}.
Next, Section~\ref{sim.sec} contains results from an extensive simulation study that examines how well ECAP works to estimate $p_i$, in both the unbiased and biased settings. Section~\ref{emp.sec} illustrates ECAP on two interesting real world data sets. The first is a unique set of probabilities from ESPN predicting, in real time, the winner of various NCAA football games, and the second contains the win probabilities of all candidates in the 2018 US midterm elections. We conclude with a discussion and possible future extensions in Section~\ref{discussion.sec}. Proofs of all theorems are provided in the appendix.
\section{Methodology} \label{method.sec}
Let $\tilde p_1,\ldots, \tilde p_n$ represent initial estimates of events $A_1,\ldots, A_n$ occurring. In practice, we assume that $\tilde p_1,\ldots, \tilde p_n$ have already been generated, by previous analysis or externally, say, by an outside rating agency in the case of the investment example.
Our goal is to construct estimators $\hat p_1(\tilde p_1), \ldots, \hat p_n(\tilde p_n)$ which provide more accurate estimates for $p_1,\ldots, p_n$.
In order to derive the estimator we first choose a model for $\tilde p_i$ and select a loss function for $\hat p_i$, which allows us to compute the corresponding oracle estimator $p_{i0}$. Finally, we provide an approach for generating an estimator for the oracle $\hat p_i$. In this section we only consider the setting where $\tilde p_i$ is assumed to be an unbiased estimator for $p_i$. We extend our approach to the more general setting where $\tilde p_i$ may be a biased estimator in Section~\ref{biased.sec}.
\subsection{Modeling $\tilde p_i$ and Selecting a Loss Function}
Given that $\tilde p_i$ is a probability, we model its conditional distribution using the beta distribution\footnote{We consider a more general class of distributions for $\tilde p_i$ in Section~\ref{sec.mixture}}. In particular, we model \begin{equation} \label{beta.model}
\tilde p_i|p_i \sim Beta(\alpha_i, \beta_i),\quad\text{where} \quad \alpha_i=\frac{p_i}{\gamma^*}, \quad\beta_i=\frac{1-p_i}{\gamma^*}, \end{equation} and $\gamma^*$ is a fixed parameter which influences the variance of $\tilde p_i$. Under \eqref{beta.model}, \begin{equation} \label{pt.mean.var}
E(\tilde p_i|p_i)=p_i \quad\text{and} \quad Var(\tilde p_i|p_i)=\frac{\gamma^*}{1+\gamma^*}p_i(1-p_i), \end{equation} so $\tilde p_i$ is an unbiased estimate for $p_i$,
which becomes more accurate as $\gamma^*$ tends to zero. Figure~\ref{beta.plot} provides an illustration of the density function of $\tilde p_i$ for three different values of $p_i$. In principle, this model could be extended to incorporate observation specific variance terms $\gamma_i^*$. Unfortunately, in practice $\gamma^*$ needs to be estimated, which would be challenging if we assumed a separate term for each observation. However, in some settings it may be reasonable to model $\gamma_i^*=w_i \gamma^*$, where $w_i$ is a known weighting term, in which case only one parameter needs to be estimated.
\begin{figure}
\caption{\small Density functions for $\tilde p_i$ given $p_i=0.002$ (blue / solid), $p_i=0.01$ (orange / dot-dashed), and $p_i=0.03$ (green / dashed). In all three cases $\gamma^*=0.001$.}
\label{beta.plot}
\end{figure}
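As a quick numerical sanity check of \eqref{pt.mean.var}, one can simulate from \eqref{beta.model} for fixed values of $p_i$ and $\gamma^*$ and compare the empirical moments with the stated formulas; the values used below are purely hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, gamma = 0.01, 0.001                      # hypothetical values
draws = rng.beta(p / gamma, (1 - p) / gamma, size=500_000)
print(draws.mean(), p)                                  # both approximately 0.01
print(draws.var(), gamma / (1 + gamma) * p * (1 - p))   # both approximately 9.9e-06
\end{verbatim}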
Next, we select a loss function for our estimator to minimize. One potential option would be to use a standard squared error loss, $L(\hat p_i) = E\left(p_i-\hat p_i\right)^2$.
However, this loss function is not the most reasonable approach in this setting. Consider for example the event corresponding to a bond defaulting, or a patient dying during surgery. If the bond has junk status, or the surgery is highly risky, the true probability of default or death might be $p_i=0.26$, in which case an estimate of $\hat p_i=0.25$ would be considered very accurate. It is unlikely that an investor or patient would have made a different decision if they had instead been provided with the true probability of $0.26$.
However, if the bond, or surgery, are considered very safe we might provide an estimated probability of $\hat p_i=0.0001$, when the true probability is somewhat higher at $p_i=0.01$. The absolute error in the estimate is actually slightly lower in this case, but the patient or investor might well make a very different decision when given a $1\%$ probability of a negative outcome vs a one in ten thousand chance.
In this sense, the error between $p_i$ and $\hat p_i$ {\it as a percentage of $\hat p_i$}
is a far more meaningful measure of precision. In the first example we have a percentage error of only 4\%, while in the second instance the percentage error is almost 10,000\%,
indicating a far more risky proposition. To capture this concept of relative error we introduce as our measure of accuracy a quantity we call the ``Excess Certainty", which is defined as \begin{equation} \label{EC}
\text{EC}(\hat p_i)= \frac{p_i-\hat p_i}{\min(\hat p_i,1-\hat p_i)}. \end{equation} In the first example $\text{EC}=0.04$, while in the second example $\text{EC}=99$. Note, we include $\hat p_i$ in the denominator rather than $p_i$ because we wish to more heavily penalize settings where the estimated risk is far lower than the true risk (irrational exuberance) compared to the alternative where true risk is much lower.
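The two numerical examples above can be checked directly; as a small illustration, a hypothetical Python helper (not part of the ECAP methodology itself):
\begin{verbatim}
def excess_certainty(p_true, p_hat):
    # excess certainty as defined above: error relative to min(p_hat, 1 - p_hat)
    return (p_true - p_hat) / min(p_hat, 1 - p_hat)

print(excess_certainty(0.26, 0.25))      # approximately 0.04
print(excess_certainty(0.01, 0.0001))    # approximately 99
\end{verbatim}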
Ideally, the excess certainty of any probability estimate should be very close to zero. Thus, we adopt the following expected loss function, \begin{equation} \label{loss.fn}
L(\hat p_i, \tilde p_i)=E_{p_i}\left(\text{EC}(\hat p_i)^2|\tilde p_i\right), \end{equation}
where the expectation is taken over $p_i$, conditional on $\tilde p_i$. Our aim is to produce an estimator $\hat p_i$ that minimizes \eqref{loss.fn} conditional on the observed value of~$\tilde p_i$. It is worth noting that if our goal was solely to remove selection bias then we could simply compute $E(p_i|\tilde p_i)$, which would be equivalent to minimizing $E\left[\left(p_i-\hat p_i\right)^2|\tilde p_i\right]$. Minimizing \eqref{loss.fn} generates a similar shrinkage estimator, which also removes the selection bias, but, as we discuss in the next section, it actually provides additional shrinkage to account for the fact that we wish to minimize the relative, or percentage, error.
\subsection{The Oracle Estimator}
We now derive the oracle estimator, $p_{i0}$, which minimizes the loss function given by \eqref{loss.fn}, \begin{equation} \label{argmin}
p_{i0}=\arg\min_a E_{p_i}\left[\text{EC}(a)^2|\tilde p_i\right]. \end{equation} Our ECAP estimate aims to approximate the oracle. Theorem~\ref{oracle.thm} below provides a relatively simple closed form expression for $p_{i0}$ and a bound on the minimum reduction in loss from using $p_{i0}$ relative to any other estimator. \begin{theorem} \label{oracle.thm} For any distribution of $\tilde p_i$, \begin{equation} \label{oracle.general}
p_{i0}=\begin{cases}
\min\left(E(p_i|\tilde p_i)+ \frac{Var(p_i|\tilde p_i)}{E(p_i|\tilde p_i)}\,,\,0.5\right), & E(p_i|\tilde p_i)\le 0.5\\
\max\left(0.5\,,\,E(p_i|\tilde p_i)- \frac{Var(p_i|\tilde p_i)}{1-E(p_i|\tilde p_i)}\right), & E(p_i|\tilde p_i)>0.5.
\end{cases} \end{equation}
Furthermore, for any $p'_i\ne p_{i0}$,
\begin{equation} \label{L.diff}
L(p'_i,\tilde p_i) - L(p_{i0},\tilde p_i) \ge \begin{cases} E\left(p_i^2|\tilde p_i\right)\left[\frac{1}{p'_i}-\frac{1}{p_{i0}}\right]^2, &p_{i0}\le 0.5\\
E\left([1-p_i]^2|\tilde p_i\right)\left[\frac{1}{1-p'_i}-\frac{1}{1-p_{i0}}\right]^2,&p_{i0}\ge0.5.
\end{cases} \end{equation} \end{theorem} \begin{remark} Note that both bounds in \eqref{L.diff} are valid when $p_{i0}=0.5$. \end{remark}
We observe from this result that the oracle estimator starts with the conditional expectation $E(p_i|\tilde p_i)$ and then shifts the estimate towards $0.5$ by an amount $\frac{Var(p_i|\tilde p_i)}{\min(E(p_i|\tilde p_i),1-E(p_i|\tilde p_i))}$. However, if this would move the estimate past $0.5$ then the estimator simply becomes $0.5$.
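In practice, the mapping in \eqref{oracle.general} from the conditional mean and variance to the oracle value is elementary to evaluate. The following hypothetical Python helper makes the shrinkage toward $0.5$ explicit; the numerical values in the illustration are made up.
\begin{verbatim}
def oracle_probability(mu, var):
    # mu = E(p | p_tilde), var = Var(p | p_tilde); oracle formula above
    if mu <= 0.5:
        return min(mu + var / mu, 0.5)
    return max(0.5, mu - var / (1 - mu))

# e.g. mu = 0.004 and var = 1.2e-05 give 0.004 + 0.003 = 0.007,
# so a small estimated risk is inflated towards 0.5.
print(oracle_probability(0.004, 1.2e-05))   # 0.007 (up to rounding)
\end{verbatim}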
Figure~\ref{EC.plot} plots the average excess certainty \eqref{EC} from using $\tilde p_i$ to estimate $p_i$ (orange lines) and from using $p_{i0}$ to estimate $p_i$ (green lines), for three different values of $\gamma^*$. Recall that an ideal EC should be zero, but the observed values for $\tilde p_i$ are far larger, especially for higher values of $\gamma^*$ and lower values of $\tilde p_i$. Note that, as a consequence of the minimization of the expected squared loss function~\eqref{loss.fn}, the oracle is slightly conservative with a negative EC, which is due to the variance term in~\eqref{oracle.general}.
\begin{figure}
\caption{\small Average excess certainty as a function of $\tilde p_i$ for three different values of $\gamma^*$ (orange / dashed line). All plots exhibit excess certainty far above zero but the issue grows worse as $\gamma^*$ gets larger, corresponding to more variance in $\tilde p_i$. The green (solid) line in each plot corresponds to the average excess certainty for the oracle estimator $p_{i0}$.}
\label{EC.plot}
\end{figure}
It is worth noting that Theorem~\ref{oracle.thm} applies for any distribution of $\tilde p_i|p_i$ and does not rely on our model \eqref{beta.model}. If we further assume that \eqref{beta.model} holds, then Theorem~\ref{EandVar} provides explicit forms for $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$.
\begin{theorem} \label{EandVar} Under~\eqref{beta.model}, \begin{eqnarray}
\label{mui}E(p_i|\tilde p_i) &=&\mu_i \ \equiv \ \tilde p_i + \gamma^*\left[g^*(\tilde p_i)+1-2\tilde p_i\right]\\ \label{sigmai}
Var(p_i|\tilde p_i)&=& \sigma^2_i\ \equiv \ \gamma^*\tilde p_i(1-\tilde p_i) + {\gamma^*}^2\tilde p_i(1-\tilde p_i)\left[{g^*}'(\tilde p_i)-2\right], \end{eqnarray} where $g^*(\tilde p_i) = \tilde p_i(1-\tilde p_i) v^*(\tilde p_i)$, $v^*(\tilde p_i)=\frac{\partial}{\partial \tilde p_i} \log f^*(\tilde p_i)$ is the score function of $\tilde p_i$ and $f^*(\tilde p_i)$ is the marginal density of $\tilde p_i$.
\end{theorem} If we also assume that the distribution of $p_i$ is symmetric then further simplifications are possible. \begin{corollary} \label{cor.EandVar} If the prior distribution of $p_i$ is symmetric about $0.5$, then \begin{equation} \label{oracle}
p_{i0}=\begin{cases}
\min\left(E(p_i|\tilde p_i)+ \frac{Var(p_i|\tilde p_i)}{E(p_i|\tilde p_i)}\,,\,0.5\right), & \tilde p_i\le 0.5\\
\max\left(0.5\,,\,E(p_i|\tilde p_i)- \frac{Var(p_i|\tilde p_i)}{1-E(p_i|\tilde p_i)}\right), & \tilde p_i>0.5,
\end{cases} \end{equation} \begin{equation} \label{g}
g^*(0.5)=0,\quad \text{and} \quad g^*(\tilde p_i)=-g^*(1-\tilde p_i). \end{equation} \end{corollary}
A particularly appealing aspect of Theorem~\ref{EandVar} and its corollary is that $g^*(\tilde p_i)$ is only a function of the marginal distribution of $\tilde p_i$, so that it can be estimated directly using the observed probabilities~$\tilde p_i$. In particular, we do not need to make any assumptions about the distribution of~$p_i$ in order to compute $g^*(\tilde p_i)$.
\subsection{Estimation} \label{score.sec}
In order to estimate $p_{i0}$ we must form estimates for $g^*(\tilde p_i)$, its derivative ${g^*}'(t)$, and $\gamma^*$.
\subsubsection{Estimation of $g$} \label{sec.estimation} Let $\hat g(\tilde p)$ represent our estimator of $g^*(\tilde p)$. Given that $g^*(\tilde p)$ is a function of the marginal distribution of $\tilde p_i$, i.e. $f^*(\tilde p_i)$, one could estimate $g^*(\tilde p_i)$ by $\tilde p_i(1-\tilde p_i)\hat f'(\tilde p_i)/\hat f(\tilde p_i)$, where $\hat f(\tilde p_i)$ and $\hat f'(\tilde p_i)$ are respectively estimates for the marginal distribution of $\tilde p_i$ and its derivative. However, this approach requires dividing by the estimated density function, which can produce highly unstable estimates near the boundary, precisely the region we are most interested in.
Instead we directly estimate $g^*(\tilde p)$ by choosing $\hat g(\tilde p)$ so as to minimize the risk function, which is defined as $R(g)=E[g(\tilde p)-g^*(\tilde p)]^2$ for every candidate function~$g$. The following result provides an explicit form for the risk.
\begin{theorem} \label{risk.lemma} Suppose that model~\eqref{beta.model} holds, and the prior for~$p$ has a bounded density. Then, \begin{equation}
R(g) = E g(\tilde p)^2+2E\left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right] +C\label{full.risk} \end{equation} for all bounded and differentiable functions~$g$, where~$C$ is a constant that does not depend on~$g$. \end{theorem} \begin{remark} We show in the proof of Theorem~\ref{risk.lemma} that $g^*$ is bounded and differentiable so \eqref{full.risk} holds for $g=g^*$. \end{remark}
Theorem~\ref{risk.lemma} suggests that we can approximate the risk, up to an irrelevant constant, by \begin{equation} \label{emp.risk}
\hat R(g) = \frac1n \sum_{i=1}^n g(\tilde p_i)^2 + 2 \frac1n \sum_{i=1}^n \left[ g(\tilde p_i)(1-2\tilde p_i)+\tilde p_i(1-\tilde p_i) g'(\tilde p_i)\right]. \end{equation} However, simply minimizing \eqref{emp.risk} would provide a poor estimate for $g^*(\tilde p)$ because, without any smoothness constraints, $\hat R(g)$ can be trivially minimized. Hence, we place a smoothness penalty on our criterion by minimizing \begin{equation} \label{risk.criterion}
Q(g) = \hat R(g) + \lambda \int g''(\tilde p)^2d\tilde p, \end{equation} where $\lambda>0$ is a tuning parameter which adjusts the level of smoothness in $g(\tilde p)$. We show in our theoretical analysis in Section~\ref{theory.sec} (see the proof of Theorem~\ref{g.thm}) that, much as with the more standard curve fitting setting, the solution to criteria of the form in \eqref{risk.criterion} can be well approximated using a natural cubic spline, which provides a computationally efficient approach to compute $ g(\tilde p)$.
Let ${\bf b}(x)$ represent the vector of basis functions for a natural cubic spline, with knots at $\tilde p_1, \ldots, \tilde p_n$, restricted to satisfy ${\bf b}(0.5)={\bf 0}$. Then, in minimizing $Q(g)$ we only need to consider functions of the form $g(\tilde p) = {\bf b}(\tilde p)^T\boldsymbol \eta$, where $\boldsymbol \eta$ is the vector of basis coefficients. Thus, \eqref{risk.criterion} can be re-expressed as \begin{equation} \label{qn.ncs.probs}
Q_n(\boldsymbol \eta) = \frac1n \sum_{i=1}^n \boldsymbol \eta^T{\bf b}(\tilde p_i){\bf b}(\tilde p_i)^T \boldsymbol \eta + 2\frac1n \sum_{i=1}^n \left[(1-2\tilde p_i){\bf b}(\tilde p_i)^T+\tilde p_i(1-\tilde p_i) {\bf b}'(\tilde p_i)^T\right]\boldsymbol \eta + \lambda \boldsymbol \eta^T\Omega\boldsymbol \eta \end{equation} where $\Omega=\int {\bf b}''(\tilde p){\bf b}''(\tilde p)^Td\tilde p$. Standard calculations show that \eqref{qn.ncs.probs} is minimized by setting \begin{equation} \label{ls.eta.probs}
\hat \boldsymbol \eta = -\left(\sum_{i=1}^n{\bf b}(\tilde p_i){\bf b}(\tilde p_i)^T+n\lambda\Omega\right)^{-1} \sum_{i=1}^n \left[(1-2\tilde p_i){\bf b}(\tilde p_i)+\tilde p_i(1-\tilde p_i) {\bf b}'(\tilde p_i)\right]. \end{equation} If the prior distribution of $p_i$ is not assumed to be symmetric, then $g^*(\tilde p_i)$ should be directly estimated for $0\le \tilde p_i\le 1$. However, if the prior is believed to be symmetric this approach is inefficient, because it does not incorporate the identity $g^*(\tilde p_i)=-g^*(1-\tilde p_i)$. Hence, a superior approach involves flipping all of the $\tilde p_i>0.5$ across~$0.5$, thus converting them into $1-\tilde p_i$, and then using both the flipped and the unflipped~$\tilde p_i$ to estimate $g(\tilde p_i)$ between $0$ and $0.5$. Finally, the identity $\hat g(\tilde p_i)=-\hat g(1-\tilde p_i)$ can be used to define~$\hat g$ on $(0.5,1]$. This is the approach we use for the remainder of the paper.
Equation~(\ref{ls.eta.probs}) allows us to compute estimates for $E(p_i|\tilde p_i)$ and $\text{Var}(p_i|\tilde p_i)$:
\begin{eqnarray} \label{mu.hat}\hat \mu_i &=& \tilde p_i + \hat\gamma({\bf b}(\tilde p_i)^T\hat\boldsymbol \eta+1-2\tilde p_i) \\ \label{sigma.hat}\hat \sigma^2_i &=& \hat\gamma\tilde p_i(1-\tilde p_i)+ {\hat\gamma}^2\tilde p_i(1-\tilde p_i)[{\bf b}'(\tilde p_i)^T\hat\boldsymbol \eta-2]. \end{eqnarray} Equations~\eqref{mu.hat} and \eqref{sigma.hat} can then be substituted into \eqref{oracle} to produce the ECAP estimator $\hat p_i$.
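For concreteness, the Python sketch below outlines one possible implementation of the estimator described above; it is a simplified illustration rather than a reference implementation. It uses an unconstrained cubic B-spline basis with a modest number of knots in place of the natural cubic spline basis satisfying ${\bf b}(0.5)={\bf 0}$, does not fold the probabilities across $0.5$, approximates $\Omega$ by quadrature, and makes arbitrary illustrative choices for the knot locations; \texttt{numpy} and \texttt{scipy} are assumed.
\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, knots, lo=0.0, hi=1.0):
    # Cubic B-spline basis b(x), derivative b'(x), and roughness penalty
    # Omega = integral of b''(t) b''(t)^T dt, approximated on a fine grid.
    t = np.r_[[lo] * 4, knots, [hi] * 4]
    m = len(t) - 4
    def design(z, nu):
        out = np.empty((len(z), m))
        for j in range(m):
            c = np.zeros(m)
            c[j] = 1.0
            spl = BSpline(t, c, 3)
            out[:, j] = spl(z) if nu == 0 else spl.derivative(nu)(z)
        return out
    grid = np.linspace(lo, hi, 2001)
    B2 = design(grid, 2)
    Omega = B2.T @ B2 * (grid[1] - grid[0])
    return design(x, 0), design(x, 1), Omega

def ecap_estimates(p_tilde, gamma, lam, n_knots=10):
    # Fit g by the closed-form minimizer of the penalized criterion, then
    # plug the resulting conditional mean and variance into the oracle rule.
    n = len(p_tilde)
    knots = np.quantile(p_tilde, np.linspace(0.05, 0.95, n_knots))
    B, dB, Omega = bspline_design(p_tilde, knots)
    A = B.T @ B + n * lam * Omega
    r = B.T @ (1 - 2 * p_tilde) + dB.T @ (p_tilde * (1 - p_tilde))
    eta = -np.linalg.solve(A, r)
    g, dg = B @ eta, dB @ eta
    mu = p_tilde + gamma * (g + 1 - 2 * p_tilde)
    mu = np.clip(mu, 1e-8, 1 - 1e-8)               # numerical guard only
    s2 = gamma * p_tilde * (1 - p_tilde) * (1 + gamma * (dg - 2))
    s2 = np.maximum(s2, 0.0)                       # numerical guard only
    lower = np.minimum(mu + s2 / mu, 0.5)
    upper = np.maximum(0.5, mu - s2 / (1 - mu))
    return np.where(mu <= 0.5, lower, upper)
\end{verbatim}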
\subsubsection{Estimation of $\lambda$ and $\gamma^*$} \label{gamma.sec}
In computing \eqref{mu.hat} and \eqref{sigma.hat} we need to provide estimates for $\gamma^*$ and $\lambda$. We choose $\lambda$ so as to minimize a cross-validated version of the estimated risk \eqref{emp.risk}. In particular, we randomly partition the probabilities into $K$ roughly even groups: $G_1,\ldots, G_{K}$. Then, for given values of $\lambda$ and $k$, $\hat \boldsymbol \eta_{k\lambda}$ is computed via \eqref{ls.eta.probs}, with the probabilities in $G_k$ excluded from the calculation. We then compute the corresponding estimated risk on the probabilities in $G_k$: $$R_{k\lambda}=\sum_{i\in G_k} \hat h_{ik}^2 + 2\sum_{i\in G_k} \left[(1-2\tilde p_i)\hat h_{ik}+\tilde p_i(1-\tilde p_i) \hat h'_{ik}\right],$$ where $\hat h_{ik}={\bf b}(\tilde p_i)^T \hat \boldsymbol \eta_{k\lambda}$ and $\hat h'_{ik}={\bf b}'(\tilde p_i)^T \hat \boldsymbol \eta_{k\lambda}$. This process is repeated $K$ times for $k=1,\ldots, K$, and $$ R_\lambda=\frac1n\sum_{k=1}^K R_{k\lambda}$$ is computed as our cross-validated risk estimate. Finally, we choose $\hat \lambda = \arg\min_\lambda R_\lambda$.
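A direct implementation of this cross-validation scheme simply repeats the fitting step on each fold. The hypothetical sketch below re-uses the \texttt{bspline\_design} helper from the previous listing.
\begin{verbatim}
def cv_risk(p_tilde, lam, n_folds=5, n_knots=10, seed=0):
    # Cross-validated estimate of the risk for a given value of lambda.
    n = len(p_tilde)
    folds = np.array_split(np.random.default_rng(seed).permutation(n), n_folds)
    total = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        pt_tr, pt_te = p_tilde[train], p_tilde[test]
        knots = np.quantile(pt_tr, np.linspace(0.05, 0.95, n_knots))
        B, dB, Omega = bspline_design(pt_tr, knots)
        A = B.T @ B + len(pt_tr) * lam * Omega
        r = B.T @ (1 - 2 * pt_tr) + dB.T @ (pt_tr * (1 - pt_tr))
        eta = -np.linalg.solve(A, r)
        Bt, dBt, _ = bspline_design(pt_te, knots)
        h, dh = Bt @ eta, dBt @ eta
        total += np.sum(h ** 2 + 2 * ((1 - 2 * pt_te) * h
                                      + pt_te * (1 - pt_te) * dh))
    return total / n

# lam_grid = 10.0 ** np.arange(-8.0, 0.0)
# lam_hat = min(lam_grid, key=lambda lam: cv_risk(p_tilde, lam))
\end{verbatim}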
To estimate $\gamma^*$ we need a measure of the accuracy of $\tilde p_i$ as an estimate of $p_i$. In some cases that information may be available from previous analyses. For example, if the estimates~$\tilde p_i$ were obtained by fitting a logistic regression model, we could compute the standard errors on the estimated coefficients and hence form a variance estimate for each~$\tilde p_i$. We would estimate~$\gamma^*$ by matching the computed variance estimates to the expression~(\ref{pt.mean.var}) for the conditional variance under the ECAP model.
Alternatively, we can use previously observed outcomes of $A_i$ to estimate $\gamma^*$. Suppose that we observe $$Z_i=\begin{cases}1 & A_i\text{ occured},\\ 0&A_i\text{ did not occur}, \end{cases}$$ for $i= 1,\ldots, n$. Then a natural approach is to compute the (log) likelihood function for $Z_i$. Namely, \begin{equation} \label{log.like}
l_\gamma = \sum_{i:Z_i=1} \log(\hat p^\gamma_i) + \sum_{i:Z_i=0} \log(1-\hat p^\gamma_i), \end{equation} where $\hat p^\gamma_i$ is the ECAP estimate generated by substituting in a particular value of $\gamma$ into~\eqref{mu.hat} and~\eqref{sigma.hat}. We then choose the value of~$\gamma$ that maximizes~\eqref{log.like}.
As an example of this approach, consider the ESPN data recording probabilities of victory for various NCAA football teams throughout each season. To form an estimate for~$\gamma^*$ we can take the observed outcomes of the games from last season (or the first couple of weeks of this season if there are no previous games available), use these results to generate a set of $Z_i$, and then choose the~$\gamma$ that maximizes \eqref{log.like}. One could then form ECAP estimates for future games during the season, possibly updating the~$\gamma$ estimate as new games are played.
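When such outcomes are available, the likelihood in \eqref{log.like} can be maximized over a grid of candidate values of $\gamma$. The hypothetical sketch below re-uses the \texttt{ecap\_estimates} helper from the earlier listing.
\begin{verbatim}
def gamma_loglik(gamma, p_tilde, z, lam):
    # Bernoulli log-likelihood of observed outcomes z (0/1) under the
    # ECAP estimates produced with this candidate value of gamma.
    p_hat = np.clip(ecap_estimates(p_tilde, gamma, lam), 1e-12, 1 - 1e-12)
    return np.sum(z * np.log(p_hat) + (1 - z) * np.log(1 - p_hat))

# gamma_grid = 10.0 ** np.linspace(-4, -1, 25)
# gamma_hat = max(gamma_grid, key=lambda g: gamma_loglik(g, p_tilde, z, lam_hat))
\end{verbatim}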
\subsection{Large sample results} \label{theory.sec}
In this section we investigate the large sample behavior of the ECAP estimator. More specifically, we show that, under smoothness assumptions on the function~$g^*$, the ECAP adjusted probabilities are consistent estimators of the corresponding oracle probabilities, defined in~\eqref{argmin}. We establish an analogous result for the corresponding values of the loss function, defined in~\eqref{loss.fn}. In addition to demonstrating consistency we also derive the rates of convergence. Our method of proof takes advantage of the theory of empirical processes;
however, the corresponding arguments go well beyond a simple application of the existing results.
We let~$f^*$ denote the marginal density of the observed $\tilde p_i$ and define the $L_2(\tilde P)$ norm of a given function~$u(\tilde p)$ as $\|u\|=[\int_0^1 u^2(\tilde p)f^*(\tilde p)d\tilde p]^{1/2}$. We denote the corresponding empirical norm, $[(1/n)\sum_{i=1}^n u^2(\tilde p_i)]^{1/2}$, by $\|u\|_n$. To simplify the presentation of the results, we define \begin{equation*} r_n=n^{-4/7}\lambda_n^{-1}+n^{-2/7}+\lambda_n \qquad \text{and} \qquad s_n=1+n^{-4/7}\lambda_n^{-2}. \end{equation*} We write~$\hat g$ for the minimizer of criterion~(\ref{risk.criterion}) over all natural cubic spline functions~$g$ that correspond to the sequence of~$n$ knots located at the observed~$\tilde p_i$. For concreteness, we focus on the case where criterion~(\ref{risk.criterion}) is computed over the entire interval $[0,1]$. However, all of the results in this section continue to hold if $\hat g$ is determined by only computing the criterion over $[0,0.5]$, according to the estimation approach described in Section~\ref{sec.estimation}. The following result establishes consistency and rates of convergence for $\hat g$ and $\hat g'$. \begin{theorem} \label{g.thm} If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero and $n^{-8/21}\ll\lambda_n\ll 1$, then \begin{equation*}
\|\hat g - g^*\|_n = O_p\big(r_n\big), \qquad \|\hat g' - {g^*}'\|_n= O_p\big(\sqrt{r_ns_n}\big). \end{equation*}
The above bounds also hold for the $\|\cdot\|$ norm. \end{theorem} \begin{remark} The assumption $n^{-8/21}\ll\lambda_n\ll 1$ implies that the error bounds for $\hat g$ and $\hat g'$ are of order $o_p(1)$. \end{remark} When $\lambda_n\asymp n^{-2/7}$, Theorem~\ref{g.thm} yields an $n^{-2/7}$ rate of convergence for $\hat{g}$. This rate matches the optimal rate of convergence for estimating the derivative of a density under the corresponding smoothness conditions \citep{stone1980optimal}.
Given a value $\tilde p$ in the interval $(0,1)$, we define the ECAP estimator, $\hat p=\hat p(\tilde p)$, by replacing $\tilde p_i$, $\gamma^*$, and $g$ with~$\tilde p$, $\hat\gamma$ and $\hat g$, respectively, in the expression for the oracle estimator provided by formulas~(\ref{mui}), (\ref{sigmai}) and~(\ref{oracle}). Thus, we treat~$\hat p$ as a random function of~$\tilde p$, where the randomness comes from the fact that $\hat p$ depends on the training sample of the observed probabilities~$\tilde p_i$. By analogy, we define $p_0$ via~(\ref{oracle}), with~$\tilde p_i$ replaced by~$\tilde p$, and view~$p_0$ as a (deterministic) function of~$\tilde p$.
We define the function $W_0(\tilde p)$ as the expected loss for the oracle estimator: \begin{equation*}
W_0(\tilde p)=E_p\big[EC\left(p_0(\tilde p)\right)^2 |\tilde p\big], \end{equation*} where the expected value is taken over the true~$p$ given the corresponding observed probability~$\tilde p$. Similarly, we define the random function $\widehat W(\tilde p)$ as the expected loss for the ECAP estimator: \begin{equation*}
\widehat W(\tilde p)=E_p\big[EC\left(\hat p(\tilde p)\right)^2 |\tilde p\big], \end{equation*} where the expected value is again taken over the true~$p$ given the corresponding~$\tilde p$. The randomness in the function $\widehat W(\tilde p)$ is due to the dependence of~$\hat p$ on the training sample $\tilde p_1,...,\tilde p_n$.
To state the asymptotic results for $\hat p$ and $\hat W$, we implement a minor technical modification in the estimation of the conditional variance via formula (\ref{sigmai}). After computing the value of $\hat\sigma^2$, we set it equal to $\max\{\hat\sigma^2,c\sqrt{r_ns_n}\}$, where~$c$ is allowed to be any fixed positive constant. This ensures that, as the sample size grows, $\hat\sigma^2$ does not approach zero too fast. We note that this technical modification is only used to establish consistency of $\widehat W(\tilde p)$ in the next theorem; all the other results in this section hold both with and without this modification. \begin{theorem} \label{cons.thm}
If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero, $n^{-8/21}\ll\lambda_n\ll 1$ and $|\hat \gamma-\gamma^*|=o_p(1)$, then \begin{equation*}
\|\hat{p} - p_0\|= o_p(1) \qquad\text{and}\qquad \|\hat{p} - p_0\|_n = o_p(1). \end{equation*}
If, in addition, $|\hat \gamma-\gamma^*|=O_p(\sqrt{r_ns_n})$, then
\begin{equation*}
\int\limits_0^1\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p = o_p(1) \qquad\text{and}\qquad
\frac1n\sum_{i=1}^n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big| = o_p(1). \end{equation*}
\end{theorem}
The next result provides the rates of convergence for $\hat p$ and $\hat W$. \begin{theorem} \label{rate.thm}
If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero, $n^{-8/21}\ll\lambda_n\ll 1$ and $|\hat \gamma-\gamma^*|=O_p\big(\sqrt{r_ns_n}\big)$, then \begin{eqnarray*}
\int\limits_{\epsilon}^{1-\epsilon}\big|\hat{p}(\tilde p) - p_0(\tilde p)\big|^2f^*(\tilde p)d\tilde p =
O_p\big(r_ns_n\big), &\quad& \int\limits_{\epsilon}^{1-\epsilon}\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p = O_p\big(r_ns_n\big),\\ \\
\sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\hat{p}(\tilde p_i) - p_0(\tilde p_i)\big|^2 =
O_p\big(r_ns_n\big)&\,\quad\text{and}\,\quad& \sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big| = O_p\big(r_ns_n\big), \end{eqnarray*} for each fixed positive~$\epsilon$. \begin{remark} The assumption $n^{-8/21}\ll\lambda_n\ll 1$ ensures that all the error bounds are of order $o_p(1)$.
\end{remark} \end{theorem} In Theorem~\ref{rate.thm} we bound the integration limits away from zero and one, because the rate of convergence changes as~$\tilde p$ approaches those values. However, we note that~$\epsilon$ can be set to an arbitrarily small value. The optimal rate of convergence for $\widehat{W}$ is provided in the following result. \begin{corollary}\label{W.rate.crl}
Suppose that $\lambda_n$ decreases at the rate~$n^{-2/7}$ and $|\hat \gamma-\gamma^*|=O_p(n^{-1/7})$. If~$f^*$ is bounded away from zero and~$g^*$ is twice continuously differentiable on $[0,1]$, then \begin{equation*}
\int\limits_{\epsilon}^{1-\epsilon}\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|d\tilde p = O_p\big(n^{-2/7}\big)\qquad\text{and}\qquad \sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big|= O_p\big(n^{-2/7}\big), \end{equation*} for every positive~$\epsilon$. \end{corollary} Corollary~\ref{W.rate.crl} follows directly from Theorem~\ref{rate.thm} by balancing out the components in the expression for~$r_n$: when $\lambda_n\asymp n^{-2/7}$, all three terms in~$r_n$ are of order $n^{-2/7}$ and $s_n\asymp 1$, so that $r_ns_n\asymp n^{-2/7}$ and the assumption $|\hat \gamma-\gamma^*|=O_p(n^{-1/7})$ coincides with $|\hat \gamma-\gamma^*|=O_p(\sqrt{r_ns_n})$.
\section{ECAP Extensions} \label{extension.sec}
In this section we consider two possible extensions of \eqref{beta.model}, the model for $\tilde p_i$. In particular, in the next subsection we discuss the setting where~$\tilde p_i$ can no longer be considered an unbiased estimator for~$p_i$, while in the following subsection we suggest a generalization of the beta model.
\subsection{Incorporating Bias in $\tilde p_i$} \label{biased.sec}
So far, we have assumed that $\tilde p_i$ is an unbiased estimate for $p_i$. In practice, probability estimates~$\tilde p_i$ may exhibit some systematic bias. For example, in Section~\ref{emp.sec} we examine probability predictions from the \href{www.FiveThirtyEight.com}{FiveThirtyEight.com} website on congressional house, senate, and governors' races during the 2018 US midterm election. A comparison of the actual election results with the predicted probabilities of the candidates being elected shows clear evidence of bias in the estimates \citep{Silver18}. In particular, the leading candidate won many more races than would be suggested by the probability estimates. This indicates that the \href{www.FiveThirtyEight.com}{FiveThirtyEight.com} probabilities were overly conservative, i.e., that in comparison to $p_i$ the estimate~$\tilde p_i$ was generally closer to $0.5$; for example, $E(\tilde p_i|p_i)<p_i$ when $p_i>0.5$.
In this section we generalize \eqref{beta.model} to model situations where $E(\tilde p_i|p_i)\ne p_i$. To achieve this goal we replace \eqref{beta.model} with \begin{equation} \label{bias.alpha}
\tilde p_i|p_i \sim Beta(\alpha_i, \beta_i),\quad\text{where} \quad p_i=h_\theta(\alpha_i\gamma^*)=h_\theta(1-\beta_i\gamma^*), \end{equation} $h_\theta(\cdot)$ is a prespecified function, and $\theta$ is a parameter which determines the level of bias of $\tilde p_i$. In particular, \eqref{bias.alpha} implies that for any invertible $h_\theta$, \begin{equation}
p_i= h_\theta(E(\tilde p_i|p_i)), \end{equation} so that if $h_\theta(x)=x$, i.e., $h_\theta(\cdot)$ is the identity function, then \eqref{bias.alpha} reduces to \eqref{beta.model}, and $\tilde p_i$ is an unbiased estimate for $p_i$.
To produce a valid probability model $h_\theta(\cdot)$ needs to satisfy several criteria: \begin{enumerate} \item $h_0(x)=x$, so that \eqref{bias.alpha} reduces to \eqref{beta.model} when $\theta=0$.
\item $h_\theta(1-x)=1-h_\theta(x)$, ensuring that the probabilities of events~$A_i$ and~$A_i^c$ sum to~$1$.
\item $h_\theta(x)=x$ for $x=0, x=0.5$ and $x=1$.
\item $h_\theta(x)$ is invertible for values of $\theta$ in a region around zero, so that $E(\tilde p_i|p_i)$ is unique. \end{enumerate} The simplest polynomial function that satisfies all these constraints is
$$h_\theta(x) = (1-0.5\theta)x - \theta[x^3 - 1.5 x^2],$$ which is invertible for $-4 \le \theta \le 2.$ Note that for $\theta=0$, we have $h_0(x)=x$, which corresponds to the unbiased model~\eqref{beta.model}. However, if $\theta>0$, then $\tilde p_i$ tends to overestimate small $p_i$ and underestimate large $p_i$, so the probability estimates are overly conservative. Alternatively, when $\theta<0$, then $\tilde p_i$ tends to underestimate small $p_i$ and overestimate large $p_i$, so the probability estimates exhibit excess certainty. Figure~\ref{ebias.plot} provides examples of $E(\tilde p_i|p_i)$ for three different values of $\theta$, with the green line representing probabilities resulting in excess certainty, the orange line overly conservative probabilities, and the black line unbiased probabilities.
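As an illustration, the short sketch below implements $h_\theta$ and inverts it numerically to recover $E(\tilde p_i|p_i)=h_\theta^{-1}(p_i)$; the function names and the use of a root finder are our own choices. For instance, with $\theta=2$ and $p_i=0.8$ the numerical inverse gives $E(\tilde p_i|p_i)\approx 0.71$, illustrating the conservative behavior described above.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def h_theta(x, theta):
    # h_theta(x) = (1 - 0.5*theta)*x - theta*(x^3 - 1.5*x^2)
    return (1 - 0.5 * theta) * x - theta * (x**3 - 1.5 * x**2)

def h_theta_inverse(p, theta):
    # Invert h_theta on [0, 1]; for -4 <= theta <= 2 the function is monotone,
    # so this returns E(p_tilde | p) under the biased model (bias.alpha).
    return brentq(lambda x: h_theta(x, theta) - p, 0.0, 1.0)
\end{verbatim}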
\begin{figure}
\caption{\small Plots of $E(\tilde p_i|p_i)$ as a function of $p_i$ for different values of $\theta$. When $\theta=0$ (black / solid) the estimates are unbiased. $\theta=2$ (orange / dashed) corresponds to a setting where $\tilde p_i$ systematically underestimates large values of $p_i$, while $\theta=-3$ (green / dot-dashed) represents a situation where $\tilde p_i$ is an overestimate for large values of $p_i$.}
\label{ebias.plot}
\end{figure}
One of the appealing aspects of this model is that the ECAP oracle \eqref{oracle} can still be used to generate an estimator for $p_i$. The only change is in how $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$ are computed. The following result generalizes Theorem~\ref{EandVar} to the biased setting and provides approximations for $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$.
\begin{theorem} \label{bias.thm} Suppose that model~\eqref{bias.alpha} holds, $p_i$ has a bounded density, and $\mu_i$ and $\sigma_i^2$ are respectively defined as in~\eqref{mui} and~\eqref{sigmai}.
Then, \begin{eqnarray}
\label{bias.e} E(p_i|\tilde p_i) &=& \mu_i+0.5\theta\left[3\sigma_i^2-6\mu_i\sigma_i^2+3\mu_i^2-\mu_i-2\mu_i^3\right]+O\big(\theta{\gamma^*}^{3/2}\big)\\
\label{bias.v}Var(p_i|\tilde p_i) &=& (1-0.5\theta)^2\sigma_i^2
+\theta\sigma_i^2\big[3\mu_i(1-\mu_i)(3\theta\mu_i(1-\mu_i)-0.5\theta+1) \big]
+O\big(\theta{\gamma^*}^{3/2}\big). \end{eqnarray}
\end{theorem}
The remainder terms in the above approximations are of smaller order than the leading terms when~$\gamma^*$ is small, which is typically the case in practice. As we demonstrate in the proof of Theorem~\ref{bias.thm}, explicit expressions can be provided for the remainder terms.
However,
the approximation error involved in estimating these expressions is likely to be much higher than any bias from excluding them. Hence, we ignore these terms when estimating $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$:
\begin{eqnarray}
\widehat{E(p_i|\tilde p_i)} &=& \hat\mu_i+0.5\theta\left[3\hat\sigma_i^2-6\hat\mu_i\hat\sigma_i^2+3\hat\mu_i^2-\hat\mu_i-2\hat\mu_i^3\right]\\
\widehat{Var(p_i|\tilde p_i)} &=& (1-0.5\theta)^2\hat\sigma_i^2
+\theta\hat\sigma_i^2\big[3\hat\mu_i(1-\hat\mu_i)(3\theta\hat\mu_i(1-\hat\mu_i)-0.5\theta+1) \big]. \end{eqnarray}
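In code, these plug-in moment estimates are a direct transcription of the two displays above; the function and argument names below are ours, with \texttt{mu\_hat} and \texttt{sigma2\_hat} denoting $\hat\mu_i$ and $\hat\sigma_i^2$ from~\eqref{mu.hat} and~\eqref{sigma.hat}.
\begin{verbatim}
def bias_corrected_moments(mu_hat, sigma2_hat, theta):
    # Plug-in versions of E(p_i | p_tilde_i) and Var(p_i | p_tilde_i) under
    # the bias model, with the O(theta * gamma^{3/2}) remainder terms dropped.
    e_hat = mu_hat + 0.5 * theta * (3 * sigma2_hat - 6 * mu_hat * sigma2_hat
                                    + 3 * mu_hat**2 - mu_hat - 2 * mu_hat**3)
    v_hat = ((1 - 0.5 * theta)**2 * sigma2_hat
             + theta * sigma2_hat * 3 * mu_hat * (1 - mu_hat)
             * (3 * theta * mu_hat * (1 - mu_hat) - 0.5 * theta + 1))
    return e_hat, v_hat
\end{verbatim}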
The only remaining issue in implementing this approach involves producing an estimate for~$\theta$. However, this can be achieved using exactly the same maximum likelihood approach as the one used to estimate~$\gamma^*$, which is described in Section~\ref{gamma.sec}. Thus, we now choose both $\theta$ and $\gamma$ to jointly maximize the likelihood function \begin{equation} \label{bias.log.like}
l_{\theta,\gamma} = \sum_{i:Z_i=1} \log(\hat p^{\theta,\gamma}_{i}) + \sum_{i:Z_i=0} \log(1-\hat p^{\theta,\gamma}_{i}), \end{equation} where $\hat p^{\theta,\gamma}_{i}$ is the bias corrected ECAP estimate generated by substituting in particular values of~$\gamma$ and~$\theta$. In all other respects, the bias corrected version of ECAP is implemented in an identical fashion to the unbiased version.
\subsection{Mixture Distribution} \label{sec.mixture} We now consider another possible extension of \eqref{beta.model}, where we believe that~$\tilde p_i$ is an unbiased estimator for~$p_i$ but find the beta model assumption to be unrealistic. In this setting one could potentially model~$\tilde p_i$ using a variety of members of the exponential family. However, one appealing alternative is to extend \eqref{beta.model} to a mixture of beta distributions:
\begin{equation} \label{beta.mixture.model}
\tilde p_i|p_i \sim \sum_{k=1}^K w_k Beta(\alpha_{ik}, \beta_{ik}),\quad\text{where} \quad \alpha_{ik}=\frac{c_k p_i}{\gamma^*}, \quad\beta_{ik}=\frac{1-c_k p_i}{\gamma^*}, \end{equation} and $w_k$ and $c_k$ are predefined weights such that $\sum_k w_k=1$ and $\sum_k w_kc_k=1$. Note that \eqref{beta.model} is a special case of \eqref{beta.mixture.model} with $K=w_1=c_1=1$.
As $K$ grows, the mixture model can provide as flexible a model as desired, but it also has a number of other appealing characteristics. In particular, under this model it is still the case that $E(\tilde p_i|p_i)=p_i$. In addition, Theorem~\ref{EandVar.mixture} demonstrates that simple closed form solutions still exist for $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$, and, hence, also the oracle ECAP estimator $p_{i0}$.
\begin{theorem} \label{EandVar.mixture} Under~\eqref{beta.mixture.model}, \begin{eqnarray}
\label{mui.mixture}E(p_i|\tilde p_i) &=&\mu_i\sum_{k=1}^K \frac{w_k}{c_k}\\ \label{sigmai.mixture}
Var(p_i|\tilde p_i)&=& (\sigma_i^2+\mu_i^2)\sum_{k=1}^K \frac{w_k}{c_k^2}-\mu_i^2\left(\sum_{k=1}^K\frac{w_k}{c_k}\right)^2, \end{eqnarray} where $\mu_i$ and $\sigma_i^2$ are defined in \eqref{mui} and \eqref{sigmai}. \end{theorem} The generalized ECAP estimator can thus be generated by substituting $\hat \mu_i$ and $\hat \sigma^2_i$, given by formulas~(\ref{mu.hat}) and~(\ref{sigma.hat}), into~\eqref{mui.mixture} and~\eqref{sigmai.mixture}. The only additional complication involves computing values for $w_k$ and $c_k$. For settings with a large enough sample size, this could be achieved using a variant of the maximum likelihood approach discussed in Section~\ref{gamma.sec}. However, we do not explore that approach further in this paper.
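A sketch of the resulting computation is given below; it simply evaluates~\eqref{mui.mixture} and~\eqref{sigmai.mixture} for given mixture weights, with all names being our own.
\begin{verbatim}
import numpy as np

def mixture_moments(mu_hat, sigma2_hat, w, c):
    # E(p_i | p_tilde_i) and Var(p_i | p_tilde_i) under the beta-mixture
    # model, following Theorem EandVar.mixture; w and c must satisfy
    # sum(w) = 1 and sum(w * c) = 1.
    w, c = np.asarray(w, dtype=float), np.asarray(c, dtype=float)
    a = np.sum(w / c)        # sum_k w_k / c_k
    b = np.sum(w / c**2)     # sum_k w_k / c_k^2
    e_mix = mu_hat * a
    v_mix = (sigma2_hat + mu_hat**2) * b - mu_hat**2 * a**2
    return e_mix, v_mix
\end{verbatim}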
\section{Simulation Results} \label{sim.sec}
In Section~\ref{sec.unbiased} we compare ECAP to competing methods under the assumption of unbiasedness in $\tilde p_i$. We further extend this comparison to the setting where $\tilde p_i$ represents a potentially biased estimate in Section~\ref{sec.biased}.
\subsection{Unbiased Simulation Results} \label{sec.unbiased}
In this section our data consists of $n=$ 1,000 triplets $(p_i,\tilde p_i, Z_i)$ for each simulation. The $p_i$ are generated from one of three possible prior distributions: Beta$(4,4)$, an equal mixture of Beta$(6,2)$ and Beta$(2,6)$, or Beta$(1.5,1.5)$. The corresponding density functions are displayed in Figure~\ref{fig:figure3}.
\begin{figure}
\caption{Distributions of $p$ used in the simulation}
\label{fig:figure3}
\end{figure}
Recall that ECAP models $\tilde p_i$ as coming from a beta distribution, conditional on $p_i$. However, in practice there is no guarantee that the observed data will exactly follow this distribution. Hence, we generate the observed data according to: \begin{equation} \label{sim.model}
\tilde p_i = p_i + p^q_i (\tilde p^{\text{o}}_i-p_i), \end{equation}
where $\tilde p_i^{\text{o}}|p_i\sim$ Beta$(\alpha,\beta)$ and $q$ is a tuning parameter. In particular, for $q=0$ \eqref{sim.model} generates observations directly from the ECAP model, while larger values of $q$ provide a greater deviation from the beta assumption. In practice we found that setting $q=0$ can result in $\tilde{p}$'s that are so small they are effectively zero ($\tilde p_i = 10^{-20}$, for example). ECAP is not significantly impacted by these probabilities but, as we show, other approaches can perform extremely poorly in this scenario. Setting $q>0$
prevents such pathological scenarios and allows us to more closely mimic what practitioners encounter in practice.
We found that $q=0.05$ typically gives a reasonable amount of dispersion so we consider settings where either $q=0$ or $q=0.05$. We also consider different levels of the conditional variance for~$\tilde p_i$, by taking~$\gamma^*$ as either~$0.005$ or~$0.03$. Finally, we generate $Z_i$, representing whether event $A_i$ occurs, from a Bernoulli distribution with probability $p_i$.
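For reference, a minimal sketch of this data-generating mechanism is given below for the Beta$(4,4)$ prior; the seed, default arguments, and function names are illustrative, and $\tilde p^{\text{o}}_i$ is drawn from the beta model with $\alpha_i=p_i/\gamma^*$ and $\beta_i=(1-p_i)/\gamma^*$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=1000, gamma=0.005, q=0.05):
    # Generate (p_i, p_tilde_i, Z_i) triplets following (sim.model).
    p = rng.beta(4, 4, size=n)                        # true probabilities
    p_tilde_o = rng.beta(p / gamma, (1 - p) / gamma)  # beta-model draws
    p_tilde = p + p**q * (p_tilde_o - p)              # perturbed estimates
    z = rng.binomial(1, p)                            # event indicators
    return p, p_tilde, z
\end{verbatim}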
We implement the following five approaches: the {\it Unadjusted} method, which simply uses the original probability estimates~$\tilde p_i$, two implementations of the proposed {\it ECAP} approach (ECAP Opt and ECAP MLE), and two versions of the James-Stein approach (JS Opt and JS MLE).
For the proposed ECAP methods, we select~$\lambda$ via the cross-validation procedure in Section~\ref{gamma.sec}. ECAP Opt is an oracle-type implementation of the ECAP methodology, in which we select~$\gamma$ to minimize the average expected loss, defined in \eqref{loss.fn}, over the training data. Alternatively, ECAP MLE makes use of the $Z_i$'s and estimates~$\gamma^*$ using the maximum likelihood approach described in Section~\ref{gamma.sec}. The James-Stein method we use is similar to its traditional formulation.
In particular the estimated probability is computed using \begin{equation} \label{js.sim}
\hat p^{JS}_i = \bar{\tilde p} + (1-c)\left(\tilde p_i - \bar{\tilde p}\right), \end{equation} where $\bar{\tilde p}=\frac{1}{n}\sum_{j=1}^n \tilde p_j$ and $c$ is a tuning parameter chosen to optimize the estimates.\footnote{To maintain consistency with ECAP we flip all $\tilde p_i>0.5$ across $0.5$ before forming $\hat p^{JS}_i$ and then flip the estimate back.}
Equation \eqref{js.sim} is a convex combination of $\tilde p_i$ and the average observed probability $\bar{\tilde p}$.
The JS Opt implementation selects~$c$ to minimize the average expected loss in the same fashion as for ECAP Opt, while
the JS MLE implementation selects~$c$ using the maximum likelihood approach described in Section~\ref{gamma.sec}. Note that ECAP Opt and JS Opt represent optimal situations that can not be implemented in practice because they require knowledge of the true distribution of $p_i$.
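The following sketch shows one reading of this James-Stein benchmark, in which probabilities above $0.5$ are flipped before the shrinkage in~\eqref{js.sim} is applied and flipped back afterwards, with the mean taken over the flipped values; these details and the function names are our own interpretation of the description above.
\begin{verbatim}
import numpy as np

def james_stein(p_tilde, c):
    # Shrink each p_tilde_i towards the overall mean as in (js.sim).
    flip = p_tilde > 0.5
    p = np.where(flip, 1 - p_tilde, p_tilde)   # work on the lower half
    p_bar = np.mean(p)
    p_js = p_bar + (1 - c) * (p - p_bar)       # convex combination
    return np.where(flip, 1 - p_js, p_js)      # flip estimates back
\end{verbatim}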
\begin{table}[t] \captionof{table}{Average expected loss for different methods over multiple unbiased simulation scenarios. Standard errors are provided in parentheses.} \label{tab:title} \begin{center}
{\small \begin{tabular}{c|c||l|l|l|l} $\gamma^*$ & \textbf{q} & \textbf{Method Type} & $\text{Beta}(4, 4)$ & \begin{tabular}[c]{@{}l@{}}$0.5$*Beta($6$,$2$) +\\ $0.5$*Beta($2$,$6$)\end{tabular} & $\text{Beta}(1.5, 1.5)$ \\ \hline \multirow{10}{*}{0.005} & \multirow{5}{*}{0} & Unadjusted & 0.0116 (0.0001) & 44.9824 (43.7241) & 3.9$\times 10^{12}$ (3.9$\times 10^{12}$) \\ \cline{3-6}
& & ECAP Opt & 0.0095 (0.0001) & 0.0236 (0.0002) & 0.0197 (0.0001) \\ \cline{3-6}
& & JS Opt & 0.0100 (0.0001) & 0.0241 (0.0002) & 0.0204 (0.0002) \\ \cline{3-6}
& & ECAP MLE & 0.0109 (0.0002) & 0.0302 (0.0006) & 0.0263 (0.0007) \\ \cline{3-6}
& & JS MLE & 0.0121 (0.0003) & 1.1590 (0.8569) & 4.8941 (4.7526) \\ \cline{2-6}
& \multirow{5}{*}{0.05} & Unadjusted & 0.0100 (0.0001) & 0.0308 (0.0006) & 0.0273 (0.0006) \\ \cline{3-6}
& & ECAP Opt & 0.0085 (0.0000) & 0.0196 (0.0001) & 0.0166 (0.0001) \\ \cline{3-6}
& & JS Opt & 0.0090 (0.0000) & 0.0201 (0.0001) & 0.0172 (0.0001) \\ \cline{3-6}
& & ECAP MLE & 0.0098 (0.0002) & 0.0238 (0.0005) & 0.0197 (0.0004) \\ \cline{3-6}
& & JS MLE & 0.0105 (0.0002) & 0.0265 (0.0006) & 0.0245 (0.0007) \\ \hline \hline \multirow{10}{*}{0.03} & \multirow{5}{*}{0} & Unadjusted & 2.1$\times 10^{8}$ (2.1$\times 10^{8}$) & 2.4$\times 10^{14}$ (1.6$\times 10^{14}$) & 1.6$\times 10^{15}$ (5.5$\times 10^{14}$) \\ \cline{3-6}
& & ECAP Opt & 0.0391 (0.0002) & 0.0854 (0.0004) & 0.0740 (0.0004) \\ \cline{3-6}
& & JS Opt & 0.0537 (0.0002) & 0.0986 (0.0005) & 0.0899 (0.0005) \\ \cline{3-6}
& & ECAP MLE & 0.0452 (0.0010) & 0.1607 (0.0187) & 0.1477 (0.0187) \\ \cline{3-6}
& & JS MLE & 0.0636 (0.0019) & 1.4$\times 10^{13}$ (1.4$\times 10^{13}$) & 1.2$\times 10^{14}$ (1.1$\times 10^{14}$) \\ \cline{2-6}
& \multirow{5}{*}{0.05} & Unadjusted & 0.0887 (0.0010) & 0.3373 (0.0047) & 0.2780 (0.0043) \\ \cline{3-6}
& & ECAP Opt & 0.0364 (0.0002) & 0.0765 (0.0004) & 0.0665 (0.0004) \\ \cline{3-6}
& & JS Opt & 0.0488 (0.0002) & 0.0874 (0.0005) & 0.0801 (0.0005) \\ \cline{3-6}
& & ECAP MLE & 0.0411 (0.0008) & 0.1035 (0.0050) & 0.0896 (0.0036) \\ \cline{3-6}
& & JS MLE & 0.0558 (0.0011) & 0.1213 (0.0066) & 0.1235 (0.0071) \\ \end{tabular}} \end{center} \end{table}
In each simulation run we generate both training and test data sets. Each method is fit on the training data. We then calculate $EC(\hat p_i)^2$ for each point in the test data and average over these observations.
The results for the three prior distributions, two values of $\gamma^*$, and two values of $q$, averaged over 100 simulation runs, are reported in Table~\ref{tab:title}. Since the ECAP Opt and JS Opt approaches both represent oracle type methods, they should be compared with each other.
The ECAP Opt method statistically significantly outperforms its JS counterpart in each of the twelve settings, with larger improvements in the noisy setting where $\gamma^*=0.03$. The ECAP MLE method is statistically significantly better than the corresponding JS approach in all but four settings. However, those four settings correspond to $q=0$ and actually represent situations where JS MLE has failed because it has extremely large excess certainty, which impacts both the mean and standard error. By contrast, the performance of the ECAP approach remains stable even in the presence of extreme outliers.
Similarly, the ECAP MLE approach statistically significantly outperforms the Unadjusted approach, often by large amounts, except for the five settings with large outliers, which result in extremely bad average performance for the latter method.
\subsection{Biased Simulation} \label{sec.biased}
In this section we extend the results to the setting where the observed probabilities may be biased, i.e., $E(\tilde p_i|p_i)\ne p_i$. To do this we generate $\tilde p_i$ according to \eqref{bias.alpha} using four different values for $\theta$, $\{-3,-1,0,2\}$. Recall that $\theta<0$ corresponds to anti-conservative data, where~$\tilde p_i$ tends to be too close to $0$ or $1$, $\theta=0$ represents unbiased observations, and $\theta>0$ corresponds to conservative data, where~$\tilde p_i$ tends to be too far from $0$ or $1$. In all other respects our data is generated in an identical fashion to that of the unbiased setting.\footnote{Because the observed probabilities are now biased, we replace $p_i$ in \eqref{sim.model} with $E(\tilde p_i|p_i)$.}
To illustrate the biased setting we opted to focus on the setting with $q=0.05$ and $\gamma^*=0.005$. We also increased the sample size to $n=$ 5,000 because of the increased difficulty of the problem. The two ECAP implementations now require us to estimate three parameters: $\lambda$, $\gamma$ and $\theta$. We estimate $\lambda$ in the same fashion as previously discussed, while $\gamma$ and $\theta$ are now chosen over a two-dimensional grid of values, with $\theta$ restricted to lie between $-4$ and $2$. The two JS methods remain unchanged.
\begin{table}[t] \centering \captionof{table}{Average expected loss for different methods over multiple biased simulation scenarios.} \label{table:biasedtable}
{\small \begin{tabular}{l|l||l|l|l} \textbf{} & \textbf{Method Type} & Beta($4$,$4$) & \begin{tabular}[c]{@{}l@{}}$0.5$*Beta($6$,$2$) +\\ $0.5$*Beta($2$,$6$)\end{tabular} & Beta($1.5$, $1.5$) \\ \hline \hline \multirow{5}{*}{$\theta=-3$} & Unadjusted & 0.1749 (0.0005) & 0.7837 (0.0025) & 0.6052 (0.0030) \\ \cline{2-5}
& ECAP Opt & 0.0019 (0.0000) & 0.0109 (0.0000) & 0.0086 (0.0000) \\ \cline{2-5}
& JS Opt & 0.0609 (0.0002) & 0.2431 (0.0005) & 0.1526 (0.0003) \\ \cline{2-5}
& ECAP MLE & 0.0026 (0.0001) & 0.0126 (0.0001) & 0.0100 (0.0001) \\ \cline{2-5}
& JS MLE & 0.0633 (0.0003) & 0.2712 (0.0014) & 0.1707 (0.0011) \\ \hline \multirow{5}{*}{$\theta=-1$} & Unadjusted & 0.0319 (0.0001) & 0.1389 (0.0007) & 0.1130 (0.0008) \\ \cline{2-5}
& ECAP Opt & 0.0051 (0.0000) & 0.0150 (0.0000) & 0.0124 (0.0001) \\ \cline{2-5}
& JS Opt & 0.0142 (0.0000) & 0.0477 (0.0001) & 0.0361 (0.0001) \\ \cline{2-5}
& ECAP MLE & 0.0059 (0.0001) & 0.0171 (0.0002) & 0.0149 (0.0003) \\ \cline{2-5}
& JS MLE & 0.0155 (0.0002) & 0.0541 (0.0008) & 0.0413 (0.0010) \\ \hline \multirow{5}{*}{$\theta=0$} & Unadjusted & 0.0099 (0.0000) & 0.0305 (0.0002) & 0.0275 (0.0003) \\ \cline{2-5}
& ECAP Opt & 0.0084 (0.0000) & 0.0195 (0.0001) & 0.0164 (0.0001) \\ \cline{2-5}
& JS Opt & 0.0088 (0.0000) & 0.0199 (0.0001) & 0.0171 (0.0001) \\ \cline{2-5}
& ECAP MLE & 0.0093 (0.0001) & 0.0224 (0.0003) & 0.0200 (0.0004) \\ \cline{2-5}
& JS MLE & 0.0094 (0.0001) & 0.0233 (0.0005) & 0.0219 (0.0005) \\ \hline \multirow{5}{*}{$\theta=2$} & Unadjusted & 0.0652 (0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \cline{2-5}
& ECAP Opt & 0.0240 (0.0001) & 0.0614 (0.0002) & 0.0502 (0.0001) \\ \cline{2-5}
& JS Opt & 0.0652 (0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \cline{2-5}
& ECAP MLE & 0.0255 (0.0002) & 0.0744 (0.0012) & 0.0599 (0.0009) \\ \cline{2-5}
& JS MLE & 0.0652 (0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \end{tabular}} \end{table}
The results, again averaged over $100$ simulation runs, are presented in Table~\ref{table:biasedtable}. In the two settings where $\theta<0$ we note that the unadjusted and JS methods all exhibit significant deterioration in their performance relative to the unbiased $\theta=0$ scenario. By comparison, the two ECAP methods significantly outperform the JS and unadjusted approaches. A similar pattern is observed for $\theta>0$. In this setting all five methods deteriorate, but ECAP is far more robust to the biased setting than unadjusted and JS.
It is perhaps not surprising that the bias corrected version of ECAP outperforms the other methods when the data is indeed biased. However, just as interestingly, even in the unbiased setting ($\theta=0$) we still observe that ECAP outperforms its JS counterpart, despite the fact that ECAP must estimate $\theta$. This is likely a result of the fact that ECAP is able to accurately estimate $\theta$. Over all simulation runs and settings, ECAP Opt and ECAP MLE respectively averaged absolute errors of only~$0.0681$ and $0.2666$ in estimating $\theta$.
\section{Empirical Results} \label{emp.sec}
In this section we illustrate ECAP on two real world data sets. Section~\ref{sec.espn} contains our results analyzing ESPN's probability estimates from NCAA football games, while Section~\ref{sec.538} examines probability estimates from the 2018 US midterm elections. Given that for real data $p_i$ is never observed, we need to compute an estimate of $EC(\hat p_i)$. Hence, we choose a small window $\delta$, for example $\delta =[0,0.02]$, and consider all observations for which $\tilde p_i$ falls within $\delta$.\footnote{In this section, for simplicity of notation, we have flipped all probabilities greater than $0.5$, and the associated $Z_i$ around $0.5$ so $\delta=[0,0.02]$ also includes probabilities between $0.98$ and $1$.} We then estimate $p_i$ via $\bar p_\delta=\frac{1}{n_\delta}\sum_{i=1}^n Z_i \delta_i$, where $\delta_i=I(\tilde p_i\in \delta)$ and $n_\delta=\sum_{i=1}^n \delta_i$. Hence we can estimate EC using
\begin{equation} \label{se.ec}
\widehat{EC}_\delta(\bar{\hat{p_\delta}}) = \frac{\bar p_\delta - \bar{\hat{p_\delta}}}{\bar{\hat{p_\delta}}}, \end{equation} where $\bar{\hat{p_\delta}}=\frac{1}{n_\delta}\sum_{i=1}^n \hat p_i \delta_i$.
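A sketch of this computation is provided below; the flipping of probabilities (and the associated outcomes) above $0.5$ follows the footnote, and the function and argument names are our own.
\begin{verbatim}
import numpy as np

def empirical_ec(p_tilde, p_hat, z, lo=0.0, hi=0.02):
    # Estimate excess certainty over the window delta = [lo, hi] via (se.ec).
    flip = p_tilde > 0.5
    p_tilde_f = np.where(flip, 1 - p_tilde, p_tilde)
    p_hat_f = np.where(flip, 1 - p_hat, p_hat)
    z_f = np.where(flip, 1 - z, z)
    in_delta = (p_tilde_f >= lo) & (p_tilde_f <= hi)
    p_bar = z_f[in_delta].mean()            # \bar p_delta
    p_hat_bar = p_hat_f[in_delta].mean()    # \bar{\hat p}_delta
    return (p_bar - p_hat_bar) / p_hat_bar
\end{verbatim}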
\subsection{ESPN NCAA Football Data} \label{sec.espn}
\begin{figure}
\caption{\small A screenshot of the NCAA football win probabilities publicly available on ESPN's website. USC vs. Texas (2017)}
\label{espn_web.plot}
\end{figure}
Each year there are approximately 1,200 Division 1 NCAA football games played within the US. For the last several seasons ESPN has been producing automatic win probability estimates for every game. These probabilities update in real time after every play.
Figure~\ref{espn_web.plot} provides an example of a fully realized game between the University of Southern California (USC) and the University of Texas at Austin (TEX) during the 2017 season. For most of the game the probability of a USC win hovers around 75\% but towards the end of the game the probability starts to oscillate wildly, with both teams having high win probabilities, before USC ultimately wins.\footnote{The game was not chosen at random.} These gyrations are quite common and occasionally result in a team with a high win probability ultimately losing. Of course even a team with a 99\% win probability will end up losing 1\% of the time so these unusual outcomes do not necessarily indicate an error, or selection bias issue, with the probability estimates.
To assess the accuracy of ESPN's estimation procedure we collected data from the 2016 and 2017 NCAA football seasons. We obtained this unique data set by scraping the win probabilities, and ultimate winning team, for a total of 1,722 games (about 860 per season), involving an average of approximately 180 probabilities per game. Each game runs for 60 minutes, although the clock is often stopped. For any particular time point $t$ during these 60 minutes, we took the probability estimate closest to $t$ in each of the individual games. We used the entire data set, 2016 and 2017, to compute $\bar{p}_\delta$, which represents the ideal gold standard. However, this estimator is impractical because implementing it would require collecting data over two full seasons. By comparison, we used only the 2016 season to fit ECAP and ultimately to compute $\bar{\hat{p}}_\delta$. We then calculated $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ for both the raw ESPN probabilities and the adjusted ECAP estimates. The intuition here is that $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ provides a comparison of these estimates to the ideal, but unrealistic, $\bar{p}_\delta$.
In general we found that $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ computed on the ESPN probabilities was not systematically different from zero, suggesting ESPN's probabilities were reasonably accurate. However, we observed that, for extreme values of $\delta$, $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ was well above zero towards the end of the games. Consider, for example, the solid orange line in Figure~\ref{espn.plot}, which plots $\widehat{EC_\delta}(\bar{\hat{p_\delta}},t)$ using $\delta=[0,0.02]$ at six different time points during the final minute of these games. We observe that excess certainty is consistently well above zero.
The $90\%$ bootstrap confidence intervals (dashed lines), generated by sampling with replacement from the probabilities that landed inside $\delta_i$, demonstrate that the difference from zero is statistically significant for most time points.
This suggests that towards the end of the game ESPN's probabilities are too extreme, i.e., there are more upsets than would be predicted by their estimates.
\begin{figure}
\caption{ Empirical EC in both the unadjusted and ECAP setting with $\delta=[0,0.02]$.}
\label{espn.plot}
\label{plot:espn_ecap}
\end{figure}
Next we applied the unbiased implementation of ECAP, i.e., with $\theta=0$, separately to each of these six time points and computed $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ for the associated ECAP probability estimates. To estimate the out-of-sample performance of our method, we randomly picked half of the 2016 games to estimate $\gamma^*$, and then used ECAP to produce probability estimates on the other half. We repeated this process 100 times and averaged the resulting $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ independently for each time point. The solid green line in Figure~\ref{espn.plot} provides the estimated excess certainty. ECAP appears to work well on this data, with excess certainty estimates close to zero and confidence intervals that contain zero at most time points. Notice also that ECAP consistently produces a slightly negative excess certainty, which is actually necessary to minimize the expected loss function \eqref{loss.fn}, as demonstrated in Figure~\ref{EC.plot}. Interestingly, this excess certainty pattern in the ESPN probabilities is no longer apparent in data for the 2018 season, suggesting that ESPN also identified this as an issue and applied a correction to their estimation procedure.
\subsection{Election Data} \label{sec.538}
Probabilities have increasingly been used to predict election results. For example, news organizations, political campaigns, and others, often attempt to predict the probability of a given candidate winning a governors race, or a seat in the house, or senate. Among other uses, political parties can use these estimates to optimize their funding allocations across hundreds of different races.
In this section we illustrate ECAP using probability estimates produced by the \href{www.FiveThirtyEight.com}{FiveThirtyEight.com} website during the 2018 US midterm election cycle. FiveThirtyEight used three different methods, {\it Classic}, {\it Deluxe}, and {\it Lite}, to generate probability estimates for every governor, house, and senate seat up for election, resulting in 506 probability estimates for each of the three methods.
Interestingly, a previous analysis of this data \citep{Silver18} showed that the FiveThirtyEight probability estimates appeared to be overly conservative, i.e., the leading candidate won more often than would have been predicted by their probabilities. Hence, we should be able to improve the probability estimates using the bias corrected version of ECAP from Section~\ref{biased.sec}. We first computed $\widehat{EC}_\delta(\bar{\hat{p_\delta}})$ on the unadjusted FiveThirtyEight probability estimates using two different values for $\delta$, namely $\delta_1=[0,0.1]$ and $\delta_2=[0.1,0.2]$. We used wider windows for $\delta$ in comparison to the ESPN data because we only had one third as many observations. The results for the three methods used by FiveThirtyEight are shown in Table~\ref{table:538table}. Notice that for all three methods and both values of $\delta$ the unadjusted excess certainty estimates are far below zero and several are close to $-1$, the minimum possible value. These results validate the previous analysis suggesting that the FiveThirtyEight estimates are systematically conservatively biased.
\begin{table}[t] \centering \captionof{table}{Bias corrected ECAP adjustment of FiveThirtyEight's 2018 election probabilities. Reported average $\widehat{EC}_\delta$.} \label{table:538table}
{\small \begin{tabular}{ll||rr} Method & Adjustment & \textbf{$\delta_1$} & \textbf{$\delta_2$} \\ \hline \hline \multirow{2}{*}{\textbf{Classic}} & Unadjusted & -0.6910 & -0.8361 \\ \cline{2-4}
& ECAP & -0.2880 & -0.0734 \\ \hline \multirow{2}{*}{\textbf{Deluxe}} & Unadjusted & -0.4276 & -0.8137 \\ \cline{2-4}
& ECAP & -0.0364 & 0.1802 \\ \hline \multirow{2}{*}{\textbf{Lite}} & Unadjusted & -0.8037 & -0.8302 \\ \cline{2-4}
& ECAP & -0.3876 & -0.1118 \\ \end{tabular}} \end{table}
Next we applied ECAP separately to each of the three sets of probability estimates, with the value of $\theta$ chosen using the MLE approach previously described. Again the results are provided in Table~\ref{table:538table}. ECAP appears to have significantly reduced the level of bias, with most values of $\widehat{EC}_\delta(\bar{\hat{p_\delta}})$ close to zero, and in one case actually slightly above zero. For the Deluxe method with $\delta_1$, the ECAP-adjusted estimates have excess certainty very close to zero.
For the Classic and Lite methods, $\theta=2$ was chosen by ECAP for both values of~$\delta$, representing the largest possible level of bias correction. For the Deluxe method, ECAP selected $\theta=1.9$. Figure~\ref{plot:ecap_v_538unadj} demonstrates the significant level of correction that ECAP applies to the classic method FiveThirtyEight estimates. For example, ECAP adjusts probability estimates of $0.8$ to $0.89$ and estimates of $0.9$ to $0.97$.
\section{Discussion} \label{discussion.sec}
In this article, we have convincingly demonstrated both theoretically and empirically that probability estimates are subject to selection bias, even when the individual estimates are unbiased. Our proposed ECAP method applies a novel non-parametric empirical Bayes approach to adjust both biased and unbiased probabilities, and hence produce more accurate estimates. The results in both the simulation study and on real data sets demonstrate that ECAP can successfully correct for selection bias, allowing us to use the probabilities with a higher level of confidence when selecting extreme values.
\begin{figure}
\caption{ECAP bias corrected probabilities vs original FiveThirtyEight probability from classic method.}
\label{plot:ecap_v_538unadj}
\end{figure}
There are a number of possible areas for future work. For example, the ESPN data contains an interesting time series structure to the probabilities, with each game consisting of a probability function measured over 60 minutes. Our current method treats each time point independently and adjusts the probabilities accordingly. However, one may be able to leverage more power by incorporating all time points simultaneously using some form of functional data analysis. Another potential area of exploration involves the type of data on which ECAP is implemented. For example, consider a setting involving a large number of hypothesis tests and associated p-values, $\tilde p_1,\ldots, \tilde p_n$. There has been much discussion recently of the limitations around using p-values. A superior approach would involve thresholding based on the posterior probability of the null hypothesis being true i.e. $p_i=P(H_{0i}|X_i)$. Of course, in general, $p_i$ is difficult to compute which is why we use the p-value $\tilde p_i$. However, if we were to treat $\tilde p_i$ as a, possibly biased, estimate of $p_i$, then it may be possible to use a modified version of ECAP to estimate $p_i$. If such an approach could be implemented it would likely have a significant impact in the area of multiple hypothesis testing.
\appendix
\section{Proof of Theorem~\ref{oracle.thm}}
We begin by computing the derivative of the loss function, $$L(x)=\begin{cases}
\frac{1}{x^2}E(p_i^2|\tilde p_i)-\frac2x E(p_i|\tilde p_i)+1&x\le 0.5\\
\frac{1}{(1-x)^2}E(p_i^2|\tilde p_i)-\frac{2x}{(1-x)^2} E(p_i|\tilde p_i)+\left(\frac{x}{1-x}\right)^2&x> 0.5. \end{cases}$$ We have \begin{eqnarray*} \frac{\partial L}{\partial x} &=& \begin{cases}
-\frac{2}{x^3}E(p_i^2|\tilde p_i)+\frac2{x^2} E(p_i|\tilde p_i)&x< 0.5\\
\frac{2}{(1-x)^3}E(p_i^2|\tilde p_i)-\frac{2(1+x)}{(1-x)^3} E(p_i|\tilde p_i)+\frac{2x}{(1-x)^3}&x> 0.5 \end{cases}\\ &\propto& \begin{cases}
-E(p_i^2|\tilde p_i)+x E(p_i|\tilde p_i)&x< 0.5\\
E(p_i^2|\tilde p_i)-E(p_i|\tilde p_i)+x(1-E(p_i|\tilde p_i)) &x> 0.5. \end{cases} \end{eqnarray*}
Note that~$L$ is a continuous function. If $E(p_i|\tilde p_i)\le 0.5$ and $x^*=E(p_i^2|\tilde p_i)/E(p_i|\tilde p_i)\le0.5$ then algebraic manipulations show that $\partial L/\partial x$ is negative for all $x<x^*$ and positive for $x>x^*$. Hence, $p_{i0}=x^*=E(p_i|\tilde p_i) + Var(p_i|\tilde p_i)/E(p_i|\tilde p_i)$ minimizes~$L$. Alternatively, if $E(p_i|\tilde p_i)\le0.5$ and $x^*=E(p_i^2|\tilde p_i)/E(p_i|\tilde p_i)\ge0.5$ then $\partial L/\partial x$ is negative for all $x<0.5$ and positive for all $x>0.5$, so $L$ is minimized by $p_{i0}=0.5$.
Analogous arguments show that if $E(p_i|\tilde p_i)>0.5$ and $x^*=[E(p_i|\tilde p_i)-E(p_i^2|\tilde p_i)]/(1-E(p_i|\tilde p_i))>0.5$, then $\partial L/\partial x$ is negative for all $x<x^*$, zero at $x=x^*$ and positive for $x>x^*$. Hence, $p_{i0}=x^*=E(p_i|\tilde p_i) - Var(p_i|\tilde p_i)/(1-E(p_i|\tilde p_i))$ will minimize $L$. Alternatively, if $E(p_i|\tilde p_i)>0.5$ and $x^*=[E(p_i|\tilde p_i)-E(p_i^2|\tilde p_i)]/(1-E(p_i|\tilde p_i))<0.5$, then $\partial L/\partial x$ is negative for all $x<0.5$ and positive for all $x>0.5$, so $L$ is minimized by $p_{i0}=0.5$.
To prove the second result, first suppose $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}<0.5$, in which case $L(p_{i0}) = 1 - E(p_i^2|\tilde p_i)/p_{i0}^2$. Now
let $\tilde L(p_i') = E\left(\left(\frac{p_i-p_i'}{p_i'}\right)^2|\tilde p_i\right)= \frac{1}{{p_i'}^2}E(p_i^2|\tilde p_i) - \frac{2}{p_i'}E(p_i|\tilde p_i)+1$. Note that $\tilde L(p_i')\le L(p_i')$ with equality for $p_i'\le 0.5$. Hence,
$$L(p_i')-L(p_{i0})\ge \tilde L(p_i')-L(p_{i0}) = E(p_i^2|\tilde p_i)\left(\frac{1}{{p_i'}^2} + \frac{1}{p_{i0}^2}\right)- \frac{2}{p_i'}E(p_i|\tilde p_i) = E(p_i^2|\tilde p_i)\left( \frac{1}{p_i'}-\frac{1}{p_{i0}}\right)^2.$$
Now consider the case $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}=0.5$. Note that this implies $2E(p_i^2|\tilde p_i)>E(p_i|\tilde p_i)$. If~$p_i'\le0.5$, then
$$L(p_i')-L(p_{i0})= E(p_i^2|\tilde p_i)\left(\frac{1}{{p_i'}^2} -4\right)- 2E(p_i|\tilde p_i)\left(\frac{1}{{p_i'}} -2\right) \ge E(p_i^2|\tilde p_i)\left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2.$$ Also note that $$\left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2\ge \left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2. $$ Alternatively, if~$p_i'>0.5$, then
$$L(p_i')-L(p_{i0})\ge \tilde{L}(1-p_i')-L(p_{i0})\ge E(p_i^2|\tilde p_i)\left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2.$$ Observe that $$\left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2\ge \left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2. $$ Consequently, we have shown that
$$L(p_i')-L(p_{i0})\ge E(p_i^2|\tilde p_i) \max\left(\left[ \frac{1}{p_i'}-\frac{1}{0.5}\right]^2, \left[\frac{1}{1-p_i'}-\frac{1}{0.5}\right]^2\right) $$
when $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}=0.5$.
Thus, we have established the result for the case $E(p_i|\tilde p_i)\le0.5$. Finally, consider the case $E(p_i|\tilde p_i)>0.5$. The result follows by repeating the argument from the case $E(p_i|\tilde p_i)\le0.5$ while replacing all of the probabilities with their complements, i.e., by replacing~$p_i$, $p_{i0}$ and $p_i'$ with $1-p_i$, $1-p_{i0}$ and $1-p_i'$, respectively.
\section{Proof of Theorem~\ref{EandVar} and Corollary~\ref{cor.EandVar}}
Throughout the proof, we omit the subscript~$i$, for the simplicity of notation. We let $f_{\tilde p}(\tilde p|p)$ denote the conditional density of $\tilde p$ given that the corresponding true probability equals~$p$, and define $f_{X}(x|p)$ by analogy for the random variable~$X=\log(\tilde p/[1-\tilde p])$. We will slightly abuse the notation and not distinguish between the random variable and its value in the case of $\tilde p$, $p$ and~$\eta$.
According to model~\eqref{beta.model}, we have $f_{\tilde p}(\tilde p|p)=B(p/{\gamma^*},(1-p)/{\gamma^*})^{-1}\tilde p^{p/{\gamma^*}-1}(1-\tilde p)^{(1-p)/{\gamma^*}-1}$ where~$B(\cdot)$ denotes the beta function. Hence, writing~$B$ for $B(p/{\gamma^*},(1-p)/{\gamma^*})$, we derive
\begin{eqnarray}
\log(f_{\tilde p}(\tilde p|p)) &=& -\log B + \left(\frac{p}{{\gamma^*}}-1\right)[\log \tilde p - \log(1-\tilde p)] + (1/{\gamma^*}-2)\log(1-\tilde p)\nonumber\\
\label{eq.exp.fam} &=& -\log B + \eta x + (1/{\gamma^*}-2)\log(1-\tilde p) \end{eqnarray} where $\eta=\frac{p}{{\gamma^*}}-1$ and $x=\log \frac{\tilde p}{1-\tilde p}$.
Standard calculations show that \begin{equation} \label{ptilde.dist}
f_X(x|p)= f_{\tilde p}(\tilde p|p) \frac{e^x}{(1+e^x)^2} = f_{\tilde p}(\tilde p|p) \tilde p(1-\tilde p). \end{equation} Note that $\log(1-\tilde p)=-\log(1+e^x)$, and hence \begin{eqnarray*}
\log(f_{X}(x|p))&=&-\log B + \eta x - (1/{\gamma^*}-2)\log(1+e^x)+x - 2\log(1+e^x)\\ &=& -\log B + \eta x +x- 1/{\gamma^*}\log(1+e^x)\\ &=& -\log B + \eta x +l_h(x), \end{eqnarray*} where $l_h(x)=x- 1/{\gamma^*}\log(1+e^x)$.
Consequently, we can apply Tweedie's formula \citep{efron2011} to derive
$$E(p/{\gamma^*}-1|\tilde p)=E(\eta|x)=v_X(x)-l_h'(x)=v_X(x)-1 + \frac{1}{{\gamma^*}}\frac{e^x}{1+e^x}= v_X(x)+\frac{\tilde p}{{\gamma^*}}-1,$$ where $v_{X}(x)=(d f_{X}(x)/dx)/f_{X}(x)$ and $f_X$ is the density of~$X$. This implies
$$E(p|\tilde p) = \tilde p+{\gamma^*} v_X(x).$$ In addition, we have \begin{eqnarray*} \frac{d f_X(x)}{dx}&=& \frac{d f_{\tilde p}(\tilde p)}{d\tilde p}\frac{d\tilde p}{dx}\frac{e^x}{(1+e^x)^2}+ f_{\tilde p}(\tilde p) \frac{e^x(1+e^x)^2-2e^{2x}(1+e^x)}{(1+e^x)^4}\\ &=& \frac{d f_{\tilde p}(\tilde p)}{d\tilde p}\left(\frac{e^x}{(1+e^x)^2}\right)^2+ f_{\tilde p}(\tilde p)\frac{e^x}{(1+e^x)^2}\frac{1-e^x}{1+e^x}. \end{eqnarray*} Using the unconditional analog of formula~\eqref{ptilde.dist}, we derive \begin{eqnarray*} v_X(x)&=&\frac{d f_X(x)/dx}{f_X(x)}\\ &=& \frac{d f_{\tilde p}(\tilde p)/d\tilde p}{f_{\tilde p}(\tilde p)}\frac{e^x}{(1+e^x)^2}+\frac{1-e^x}{1+e^x}\\ &=& v_{\tilde p}(\tilde p) \tilde p(1-\tilde p) +1-2\tilde p, \end{eqnarray*} where $v_{\tilde p}(\tilde p)=(d f_{\tilde p}(\tilde p)/d\tilde p)/f_{\tilde p}(\tilde p).$ Thus,
$$E(p|\tilde p) = \tilde p + {\gamma^*}(\tilde p(1-\tilde p)v_{\tilde p}(\tilde p)+ 1-2\tilde p).$$ Similarly, again by Tweedie's formula,
$$Var(p/{\gamma^*}-1|\tilde p)=Var(\eta|x)=v'_X(x)-l_h''(x)=v'_X(x)+\frac{1}{{\gamma^*}}\frac{e^x}{(1+e^x)^2}= v'_X(x)+\frac{\tilde p(1-\tilde p)}{{\gamma^*}},$$ which implies
$$Var(p|\tilde p) = {\gamma^*}\tilde p(1-\tilde p)+{\gamma^*}^2 v'_X(x).$$ Noting that $$v'_X(x) =\tilde p(1-\tilde p)[v'_{\tilde p}(\tilde p) \tilde p(1-\tilde p) + v_{\tilde p}(\tilde p)(1-2\tilde p)-2],$$ we derive
$$Var(p|\tilde p) = {\gamma^*}^2\tilde p(1-\tilde p)[v'_{\tilde p}(\tilde p) \tilde p(1-\tilde p) + v_{\tilde p}(\tilde p)(1-2\tilde p)-2]+ {\gamma^*}\tilde p(1-\tilde p).$$ If we define $g^*(\tilde p)=\tilde p(1-\tilde p)v_{\tilde p}(\tilde p)$, then \begin{eqnarray*}
E(p|\tilde p) &=& \tilde p + {\gamma^*}(g^*(\tilde p)+1-2\tilde p)\\
Var(p|\tilde p) &=& {\gamma^*}\tilde p(1-\tilde p)+{\gamma^*}^2\tilde p(1-\tilde p)[{g^*}'(\tilde p)-2]. \end{eqnarray*} This completes the proof of Theorem~\ref{EandVar}.
Finally, we establish some properties of $g^*(\tilde p)$ and prove Corollary~\ref{cor.EandVar}. We denote the marginal density of~$\tilde p$ by~$f$ and, with a slight abuse of notation, write $f'(1-\tilde p)$ for the derivative of the map $\tilde p\mapsto f(1-\tilde p)$. First note that $g^*(1-\tilde p)=-\tilde p(1-\tilde p)f'(1-\tilde p)/f(1-\tilde p)$. If $h(p)$ represents the prior density for~$p$, then \begin{equation} \label{f.marginal}
f(\tilde p)=\int_0^{1} B(\alpha,\beta)^{-1}\tilde p^{\alpha-1}(1-\tilde p)^{\beta-1}h(p)dp. \end{equation} Because function~$h$ is bounded, differentiation under the integral sign is justified, and hence \begin{equation} \label{f.prime}
f'(\tilde p)=\int_0^{1} B(\alpha,\beta)^{-1}\left\{(\alpha-1)\tilde p^{\alpha-2}(1-\tilde p)^{\beta-1}-(\beta-1)\tilde p^{\alpha-1}(1-\tilde p)^{\beta-2}\right\}h(p)dp,
\end{equation}
where $\alpha=p/{\gamma^*}$ and $\beta=(1-p)/{\gamma^*}$. Substituting $p^*=1-p$ we get $$f(1-\tilde p)=\int_0^{1} B(\beta,\alpha)^{-1}\tilde p^{\alpha-1}(1-\tilde p)^{\beta-1}h(1-p^*)dp^*=f(\tilde p)$$ and $$f'(1-\tilde p)=\int_0^{1} B(\beta,\alpha)^{-1}\left\{(\alpha-1)\tilde p^{\alpha-2}(1-\tilde p)^{\beta-1}-(\beta-1)\tilde p^{\alpha-1}(1-\tilde p)^{\beta-2}\right\}h(1-p^*)dp^*=f'(\tilde p),$$ provided $h(p)=h(1-p)$. Hence, $g^*(1-\tilde p)=-\tilde p(1-\tilde p)f'(\tilde p)/f(\tilde p)=-g^*(\tilde p)$. By continuity of $g^*(\tilde p)$ this result also implies $g^*(0.5)=0$.
To complete the proof of Corollary~\ref{cor.EandVar}, we note that under the assumption that the distribution of~$p_i$ is symmetric, the conditional expected value $E(p_i|\tilde p_i)$ lies on the same side of~$0.5$ as~$\tilde p_i$.
\section{Proof of Theorem~\ref{risk.lemma}}
As before, we denote the marginal density of~$\tilde p$ by~$f$. First, we derive a bound for~${g^*}$. Note that $-1\le \alpha-1\le \frac{1}{{\gamma^*}}$ and, similarly, $-1\le \beta-1\le \frac{1}{{\gamma^*}}$. Hence, by \eqref{f.marginal} and \eqref{f.prime}, \begin{equation*}
\nonumber -\left(1-\tilde p+\frac{\tilde p}{{\gamma^*}}\right)f(\tilde p)\le \tilde p(1-\tilde p) f'(\tilde p)\le \left((1-\tilde p)\frac{1}{{\gamma^*}}+\tilde p\right)f(\tilde p), \end{equation*} which implies \begin{equation} \label{g.bound}
|{g^*}(\tilde p)|\le \frac{1}{{\gamma^*}}. \end{equation}
Next, note that \begin{eqnarray} \lim_{\tilde p\rightarrow 0} \tilde p(1-\tilde p) f(\tilde p) &=& 0\label{lim1}\qquad\text{and}\\ \lim_{\tilde p\rightarrow 1} \tilde p(1-\tilde p) f(\tilde p) &=& 0.\label{lim2} \end{eqnarray} Observe that \begin{eqnarray} R( g(\tilde p)) &=& E\left( g(\tilde p)- {g^*}(\tilde p)\right)^2\nonumber\\ &=& E g(\tilde p)^2-2E\left\{ g(\tilde p) {g^*}(\tilde p)\right\}+C\nonumber\\ &=& E g(\tilde p)^2-2\int_0^1 \left\{ g(\tilde p) \tilde p(1-\tilde p) \frac{f'(\tilde p)}{f(\tilde p)}\right\} f(\tilde p)d\tilde p+C\nonumber\\ &=& E g(\tilde p)^2-2\left[ g(\tilde p)\tilde p(1-\tilde p) f(\tilde p)\right]^1_0+2\int_0^1 \left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right]f(\tilde p)d\tilde p +C \nonumber\\ &=& E g(\tilde p)^2+2\int_0^1 \left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right]f(\tilde p)d\tilde p +C\label{full.risk.prf} \end{eqnarray} where $C$ is a constant that does not depend on~$g$, and the second to last line follows via integration by parts. Note the last line holds when~$ g$ is bounded, because by \eqref{lim1}, $$\lim_{\tilde p\rightarrow 0} g(\tilde p)\tilde p(1-\tilde p) f(\tilde p) = 0,$$ and by \eqref{lim2},
$$\lim_{\tilde p\rightarrow 1} g(\tilde p)\tilde p(1-\tilde p) f(\tilde p) = 0.$$
In particular, due to the inequality~(\ref{g.bound}), the relationship~(\ref{full.risk.prf}) holds when~$ g$ is the true function~${g^*}$.
\section{Proof of Theorem~\ref{g.thm}} \label{sec:proof.asympt}
We write $\mathcal{G}_N$ for the class of all natural cubic spline functions $g$ on $[0,1]$ that correspond to the sequence of~$n$ knots located at the observed~$\tilde p_i$. Given a function~$g$, we define $s_g(\tilde p)=2[g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p)g'(\tilde p)]$ and $I^2(g) = \int_0^1 [g''(\tilde p)]^2 d\tilde p$. We also denote $(1/n)\sum_{i=1}^n g^2(\tilde p_i)$ and $\int_0^1 g^2(\tilde p)f^*(\tilde p)d\tilde p$ by~$\|g\|^2_n$ and~$\|g\|^2$, respectively.
By Lemma~\ref{lem1} in Appendix~\ref{append.sup.res}, there exists $g^*_N\in\mathcal{G}_N$, such that $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$ and \begin{equation*}
\|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big). \end{equation*}
We consider two possible cases (a) $n^{-4/7} I(\hat g)\le n^{-2/7}\|\hat g - g^*_N\|+n^{-4/7}+\lambda_n^2$ and (b) $n^{-4/7} I(\hat g)> n^{-2/7}\|\hat g - g^*_N\|+n^{-4/7}+\lambda_n^2$.
Under (a) we have \begin{equation}
\|\hat g - g^*_N\|^2 +\lambda^2_n I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) +O_p\Big(n^{-4/7}+\lambda^2_n\Big). \end{equation}
It follows that $\|\hat g - g^*_N\|=O_p(n^{-2/7}+\lambda_n)$ and $I^2(\hat g )=O_p(n^{-4/7}\lambda_n^{-2}+1)$. However, taking into account the case (a) condition, we also have $I^2(\hat g )=O_p(n^{4/7}\lambda^2_n+1)$, thus leading to $I(\hat g )=O_p(1)$.
Under (b) we have \begin{equation} \label{case.b.ineq}
\|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-4/7}I(\hat g )\Big). \end{equation}
It follows that $I(\hat g )=O_p(n^{-4/7}\lambda_n^{-2})$ and $\|\hat g - g^*_N\|=O_p(n^{-4/7}\lambda_n^{-1})$.
Collecting all the stochastic bounds we derived, and using the fact that~$f^*$ is bounded away from zero, we deduce \begin{equation*}
\|\hat g - g^*_N\| = O_p(n^{-4/7}\lambda_n^{-1}+n^{-2/7}+\lambda_n) \qquad\text{and}\qquad I(\hat g)=O_p(1+n^{-4/7}\lambda_n^{-2}) \end{equation*}
Using the bound $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$, together with the definitions of $r_n$ and~$s_n$, we derive \begin{equation} \label{g.bounds}
\|\hat g - g^*\| = O_p(r_n) \qquad\text{and}\qquad I(\hat g - g^*)=O_p(1+n^{-4/7}\lambda_n^{-2}). \end{equation}
Applying Lemma 10.9 in \cite{van2000applications}, which builds on the interpolation inequality of \cite{agmon1965lectures}, we derive $\|\hat{g}' - {g^*}'\| = O_p(\sqrt{r_n s_n})$. This establishes the error bounds for $\hat g$ and $\hat g'$ with respect to the $\|\cdot\|$ norm.
To derive the corresponding results with respect to the $\|\cdot\|_n$ norm, we first apply bound~(\ref{nu_n_dg2}), in which we replace~$g^*_N$ with~$g^*$. It follows that \begin{equation*}
\|\hat g - g^*\|^2_n-\|\hat g - g^*\|^2 =
({\tilde P}_n-{\tilde P})[\hat g-{g^*}]^2=
o_p\Big(\|\hat g - g^*\|^2\Big) + O_p\Big(n^{-1}I^2(\hat g - g^*)\Big), \end{equation*} where we use the notation from the proof of Lemma~\ref{lem1}. Because bounds~(\ref{g.bounds}) together with the assumption $\lambda_n\gg n^{-8/21}$ imply \begin{equation*} I(\hat g - g^*)=O_p\Big(n^{-4/7}n^{16/21}\Big)=O_p\Big(n^{4/21}\Big), \end{equation*} we can then derive \begin{equation*}
\|\hat g - g^*\|^2_n=O_p\Big(\|\hat g - g^*\|^2\Big) + O_p\Big(n^{-13/21}\Big). \end{equation*}
Because $r_n\ge n^{-2/7}$, we have $r_n^2\ge n^{-13/21}$. Consequently, $ \|\hat g - g^*\|^2_n=O(r_n^2)$, which establishes the analog of the first bound in~(\ref{g.bounds}) for the $\|\cdot\|_n$ norm.
It is only left to derive $\|\hat{g}' - {g^*}'\|_n=O_p(\sqrt{r_ns_n})$. Applying Lemma~17 in \cite{meier2009high}, in conjunction with Corollary~5 from the same paper, in which we take ${\gamma^*}=2/3$ and $\lambda=n^{-3/14}$, we derive \begin{equation*}
({\tilde P}_n-{\tilde P})[\hat{g}' - {g^*}']^2 =O_p\Big(n^{-5/14}\Big[ \|\hat{g}' - {g^*}'\|^2 + n^{-2/7} I^2(\hat{g}' - {g^*}')\Big] \Big). \end{equation*} Consequently, \begin{equation*}
\|\hat{g}' - {g^*}'\|^2_n =O_p\Big(\|\hat{g}' - {g^*}'\|^2\Big)+O_p\Big(n^{-9/14}I^2(\hat{g}' - {g^*}') \Big). \end{equation*} Taking into account bound~(\ref{g.bounds}), the definition of~$s_n$, the assumption $\lambda_n\gg n^{-8/21}$ and the inequality $r_n\ge n^{-2/7}$, we derive \begin{equation*} n^{-9/14}I^2(\hat{g}' - {g^*}')=O_p\Big(n^{-9/14}s_n^2\Big)=O_p\Big(n^{-19/42}s_n\Big)=O_p\Big(r_ns_n\Big). \end{equation*}
Thus, $\|\hat{g}' - {g^*}'\|^2_n=O_p(\|\hat{g}' - {g^*}'\|^2+r_ns_n)=O_p(r_ns_n)$, which completes the proof.
\section{Proof of Theorem~\ref{cons.thm}} We will take advantage of the results in Theorem~\ref{rate.thm}, which are established independently from Theorem~\ref{cons.thm}. We will focus on proving the results involving integrals, because the results for the averages follow by an analogous argument with minimal modifications.
We start by establishing consistency of $\hat p$. Fixing an arbitrary positive~$\tilde\epsilon$, identifying a positive~$\epsilon$ for which $\tilde P(0,\epsilon)+\tilde P(1-\epsilon,1)\le \tilde\epsilon/2$, and noting that $\hat p$ and $p_0$ fall in $[0,1]$ for every~$\tilde p$, we derive \begin{equation*}
\|\hat p - p_0 \|^2 \le \tilde\epsilon/2 + \int_{\epsilon}^{1-\epsilon}|\hat p(\tilde p)-p_0(\tilde p)|^2f^*(\tilde p)d\tilde p. \end{equation*} By Theorem~\ref{rate.thm}, the second term on the right-hand side of the above display is $o_p(1)$. Consequently, \begin{equation*}
P\Big(\|\hat p - p_0 \|^2 >\tilde\epsilon\Big)\rightarrow0 \quad\text{as}\quad n\rightarrow\infty. \end{equation*}
As the above statement holds for every fixed positive~$\tilde\epsilon$, we have established that $\|\hat p - p_0 \|=o_p(1)$.
We now focus on showing consistency for~$\hat W$. Note that \begin{equation*} [\hat\mu^2(\tilde p)+\hat\sigma^2(\tilde p)]^2/\hat\mu^2(\tilde p) \ge [2\hat\mu(\tilde p)\hat\sigma(\tilde p)]^2/[\hat\mu^2(\tilde p)]=4\hat\sigma^2(\tilde p). \end{equation*} Thus, the definition of~$\hat p$ implies $\hat p^2(\tilde p)\ge \hat\sigma^2(\tilde p) \wedge 0.25$, and also $\hat p(\tilde p)\ge \hat\mu(\tilde p)\wedge 0.5$, for every~$\tilde p\in(0,1)$. Writing~$p$ for the true probability corresponding to the observed~$\tilde p$, we then derive \begin{eqnarray}
\widehat W(\tilde p)=\frac{E_p\Big[(p-\hat p(\tilde p))^2|\tilde p\Big]}{\hat p^2(\tilde p)}&\le& \frac{\sigma^2(\tilde p)}{\hat p^2(\tilde p)}+
\frac{[\hat p(\tilde p) -\mu(\tilde p)]^2}{\hat p^2(\tilde p)}\nonumber\\
\nonumber\\
&\le&
\frac{|\hat\sigma^2(\tilde p)-\sigma^2(\tilde p)|}{4\hat\sigma^2(\tilde p)}+
\frac{[\hat \mu(\tilde p) -\mu(\tilde p)]^2}{\hat\sigma^2(\tilde p)}+7. \label{W.hat.bnd} \end{eqnarray}
By Theorem~\ref{rate.thm}, we have $\|\hat\sigma^2-\sigma^2\|=O_p(\sqrt{r_ns_n})=o_p(1)$ and $\|\hat\mu-\mu\|=O_p(\sqrt{r_ns_n})=o_p(1)$. Fix an arbitrary positive~$\epsilon$ and define $A_{\epsilon}=(0,\epsilon)\cup(1-\epsilon,1)$. Applying the Cauchy-Schwarz inequality, and using the imposed technical modification of the ECAP approach to bound $\hat\sigma^2$ below, we derive \begin{equation*}
\int_{A_{\epsilon}} \frac{|\hat\sigma^2(\tilde p)-\sigma^2(\tilde p)|}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p\le
[\tilde PA_{\epsilon}]^{1/2} \frac{\|\hat\sigma^2-\sigma^2\|}{c\sqrt{r_ns_n}}=
[\tilde PA_{\epsilon}]^{1/2}O_p(1)=O_p(\epsilon^{1/2}). \end{equation*} Similarly, we derive \begin{equation*}
\int_{A_{\epsilon}} \frac{|\hat\mu(\tilde p)-\mu(\tilde p)|^2}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p\le
\int_{A_{\epsilon}} \frac{|\hat\mu(\tilde p)-\mu(\tilde p)|}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p=
O_p(\epsilon^{1/2}). \end{equation*}
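In both of the last two displays we applied the Cauchy-Schwarz inequality in the form
\begin{equation*}
\int_{A_{\epsilon}} |h(\tilde p)|f^*(\tilde p)d\tilde p\le
\Big[\int_{A_{\epsilon}} f^*(\tilde p)d\tilde p\Big]^{1/2}\Big[\int_0^1 h^2(\tilde p)f^*(\tilde p)d\tilde p\Big]^{1/2}
=[\tilde PA_{\epsilon}]^{1/2}\|h\|,
\end{equation*}
together with the lower bound on~$\hat\sigma^2$ provided by the technical modification and the rates $\|\hat\sigma^2-\sigma^2\|=O_p(\sqrt{r_ns_n})$ and $\|\hat\mu-\mu\|=O_p(\sqrt{r_ns_n})$ stated above.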
Note that $|W_0(\tilde p)|\le 1$ for every~$\tilde p$. Thus, combining the bounds for the terms in~(\ref{W.hat.bnd}) with the corresponding bound for $|\hat W - W_0|$ in Theorem~\ref{rate.thm}, we derive \begin{equation*}
\int_0^1 \big|\widehat W(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p=O_p(\epsilon^{1/2})+o_p(1). \end{equation*}
As this bound holds for every positive~$\epsilon$, we deduce that $\int_0^1 \big|\widehat W(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p=o_p(1)$.
\section{Proof of Theorem~\ref{rate.thm}}
We build on the results of Theorem~\ref{g.thm} to derive the rate of convergence for~$\hat\mu$ and~$\widehat{W}$ for a fixed positive~$\epsilon$. Continuity and positivity of $\mu(\tilde p)$ and $p_0(\tilde p)$ imply that both functions are bounded away from zero on the interval $[\epsilon,1-\epsilon]$. Applying Lemma 10.9 in \cite{van2000applications}, we derive $\|\hat g - g^*\|_{\infty} = O_p(r_n^{3/4} s_n^{1/4})$.
Because $n^{-8/21}\ll\lambda_n\ll 1$, we have $\|\hat g - g^*\|_{\infty} = o_p(1)$, which implies $\sup_{[\epsilon,1-\epsilon]}|\hat \mu(\tilde p) - \mu(\tilde p)| = o_p(1)$. Also note that $\hat p(\tilde p)\ge \hat \mu(\tilde p)$ for all~$\tilde p$. Consequently, there exists an event with probability tending to one, on which random functions~$\hat p(\tilde p)$ and $\hat\mu(\tilde p)$ are bounded away from zero on the interval $[\epsilon,1-\epsilon]$. The stated error bounds for~$\hat p$ then follow directly from this observation and the error bounds for~$\hat g$ and~$\hat g'$ in Theorem~\ref{g.thm}.
For the remainder of the proof we restrict our attention to the event~A (whose probability tends to one), on which functions~$p_0$ and~$\hat p$ are both bounded away from zero on $[\epsilon,1-\epsilon]$. We write~$p$ for the true probability corresponding to the observed~$\tilde p$, define $G(q)=E[(p-q)^2/q^2|\tilde p]$
and note that $G'(q)={2(q-p^*)E(p|\tilde p)}/{q^3}$.
Let $p^*$ be the minimizer of $G$, given by $p^*=E[p|\tilde p]+{Var[p|\tilde p]}/{E[p|\tilde p]}$. Denote by $\hat p^*$ our estimator of~$p^*$, which is obtained by replacing the conditional expected value and variance in the above formula by their ECAP estimators. While~$p^*$ and $\hat p^*$ depend on~$\tilde p$, we will generally suppress this dependence in the notation for simplicity. Note that for~$\tilde p\in[\epsilon,1-\epsilon]$, functions~$p^*$ and~$\hat p^*$ are both bounded away from zero on the set~$A$.
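For completeness, note that the expression for~$G'$ follows by expanding the square and differentiating directly:
\begin{equation*}
G(q)=\frac{E(p^2|\tilde p)}{q^2}-\frac{2E(p|\tilde p)}{q}+1,
\qquad
G'(q)=\frac{2E(p|\tilde p)}{q^2}-\frac{2E(p^2|\tilde p)}{q^3}=\frac{2(q-p^*)E(p|\tilde p)}{q^3},
\end{equation*}
because $p^*=E(p^2|\tilde p)/E(p|\tilde p)=E[p|\tilde p]+{Var[p|\tilde p]}/{E[p|\tilde p]}$. In particular, $G$ is decreasing for $0<q<p^*$ and increasing for $q>p^*$, so $p^*$ is indeed the minimizer of~$G$.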
Fix an arbitrary~$\tilde p\le0.5$. Define events $A_1=A\cap\{p^*\le0.5,\hat p^*\le0.5\}$, $A_2=A\cap\{p^*>0.5,\hat p^*\le0.5\}$, $A_3=A\cap\{p^*\le0.5,\hat p^*>0.5\}$ and $A_4=A\cap\{p^*>0.5,\hat p^*>0.5\}$. Note that $A_4$ implies $\hat p=p_0=0.5$. Writing Taylor expansions for the function~$G$ near~$p^*$ and~$0.5$, we derive the following bounds, which hold for some constant~$c$ that depends only on~$\epsilon$: \begin{eqnarray*}
\big|W_0(\tilde p)-\widehat{W}(\tilde p)\big|1_{\{A\}}&=&\big|G(p^*)-G(\hat p^*)\big|1_{\{A_1\}}+\big|G(0.5)-G(\hat p^*)\big|1_{\{A_2\}}+\big|G(p^*)-G(0.5)\big|1_{\{A_3\}}\\ &\le& c\big(p^*-\hat p^*\big)^21_{\{A_1\}}+c\big(0.5-\hat p^*\big)^21_{\{A_2\}}+c\big(p^*-0.5\big)^21_{\{A_3\}}\\ &\le& c\big(p^*-\hat p^*\big)^2.
\end{eqnarray*} Analogous arguments yield the above bound for $\tilde p>0.5$. The rate of convergence for~$\widehat{W}$ then follows directly from the error bounds for~$\hat g$ and~$\hat g'$ in Theorem~\ref{g.thm}.
\section{Proof of Theorem~\ref{bias.thm}} \label{prf.bias.thm}
Throughout the proof we drop the subscript~$i$ for the simplicity of notation. First note that the derivations in the proof of Theorem~\ref{EandVar} also give $E({\gamma^*}\alpha|\tilde p)=\mu$ and $Var({\gamma^*}\alpha|\tilde p)=\sigma^2$, where $\mu$ and $\sigma^2$ are respectively defined in \eqref{mui} and \eqref{sigmai}. These identities hold for both the unbiased and biased versions of the model. The only difference is in how ${\gamma^*}\alpha$ relates to $p$. Note that \begin{eqnarray}
E(p|\tilde p) &=& E(h({\gamma^*}\alpha)|\tilde p)\nonumber=(1-0.5\theta) E({\gamma^*}\alpha|\tilde p) - \theta[E({\gamma^*}^3\alpha^3|\tilde p)-1.5E({\gamma^*}^2\alpha^2|\tilde p)]\nonumber\\ \label{bias.e.proof}&=& (1-0.5\theta)\mu -\theta[s_3 + 3\mu\sigma^2 + \mu^3 - 1.5\sigma^2-1.5\mu^2], \end{eqnarray}
where we use~$s_k$ to denote the $k$-th conditional central moment of ${\gamma^*}\alpha$ given~$\tilde p$. By Lemma~\ref{lem.mom.appr} in Appendix~\ref{append.sup.res}, the~$s_3$ term in~\eqref{bias.e.proof} is $O({\gamma^*}^{3/2})$, which leads to the stated approximation for $E(p|\tilde p)$.
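The second equality in~(\ref{bias.e.proof}) uses the standard central-moment identities for the conditional distribution of ${\gamma^*}\alpha$ given~$\tilde p$, namely
\begin{equation*}
E({\gamma^*}^2\alpha^2|\tilde p)=\sigma^2+\mu^2
\qquad\text{and}\qquad
E({\gamma^*}^3\alpha^3|\tilde p)=s_3+3\mu\sigma^2+\mu^3.
\end{equation*}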
We also have
$$Var(p|\tilde p) = Var(h({\gamma^*}\alpha)|\tilde p) = (1-0.5\theta)^2\sigma^2 + \theta a,$$
where $a = \theta Var({\gamma^*}^3 \alpha^3 - 1.5 {\gamma^*}^2\alpha^2|\tilde p) - (1-0.5\theta)Cov({\gamma^*}\alpha,{\gamma^*}^3\alpha^3 -1.5{\gamma^*}^2\alpha^2|\tilde p)$.
It remains only to show that $a=O({\gamma^*}^{3/2})$. A routine calculation yields \begin{equation*}
a=\sigma^2\big[3\mu(1-\mu)(3\theta\mu(1-\mu)-0.5\theta+1) \big]+O\Big(\sum_{k=3}^6[\sigma^k+s_k]\Big). \end{equation*} By Lemma~\ref{lem.mom.appr}, the remainder term is $O({\gamma^*}^{3/2})$, which completes the proof.
\section{Proof of Theorem~\ref{EandVar.mixture}}
We use the notation from the proof of Theorem \ref{EandVar}. In particular, we omit the subscript~$i$ throughout most of the proof, for the simplicity of the exposition. We represent $\tilde p$ as $\sum_{k=1}^KI_{\{\mathcal{I}=k\}}\xi_k$, where $\xi_k|p\sim Beta(\alpha_k,\beta_k)$, $\alpha_k=c_kp/\gamma^*$, $\beta_k=(1-c_kp)/\gamma^*$, and~$\mathcal{I}$ is a discrete random variable independent of~$p$ and $\xi_k$, whose probability distribution is given by $P(\mathcal{I}=k)=w_k$ for $k=1,...,K$.
Note that \begin{equation*}
f_{\xi_k}(\tilde p|p)=B\Big(\frac{c_kp}{\gamma^*}, \frac{1-c_kp}{\gamma^*}\Big)^{-1}\tilde p^{\frac{c_kp}{\gamma^*} -1 }(1-\tilde p)^{\frac{1-c_kp}{\gamma^*}-1}. \end{equation*} Hence, writing $B$ for $B(\frac{c_kp}{\gamma^*}, \frac{1-c_kp}{\gamma^*})$, we derive \begin{equation*} \begin{split}
\log(f_{\xi_k}(\tilde p|p) )&=-\log B + \Big( \frac{c_kp}{\gamma^*} -1 \Big) \log\tilde p + \Big( \frac{1-c_kp}{\gamma^*} -1 \Big) \log(1-\tilde p) \\ &= -\log B +\frac{c_kp}{\gamma^*}\log\tilde p -\log\tilde p+\frac{1-c_kp}{\gamma^*} \log(1-\tilde p)-\log(1-\tilde p)\\ &= -\log B + p\frac{c_k}{\gamma^*} \log\Big( \frac{\tilde p}{1-\tilde p}\Big) -\log\tilde p +\frac{1}{\gamma^*} \log(1-\tilde p)-\log(1-\tilde p) \\ &= -\log B +\eta x - \log\tilde p +\Big(\frac{1-\gamma^*}{\gamma^*}\Big)\log(1-\tilde p), \end{split} \end{equation*} where we have defined $\eta=p\frac{c_k}{\gamma^*}$ and $x=\log \big({\tilde p}/[{1-\tilde p}]\big)$. Repeating the derivations in the proof of Theorem \ref{EandVar} directly below display~(\ref{eq.exp.fam}), we derive \begin{equation*} \begin{split}
E(p|\xi_k=\tilde p)&=\frac{1}{c_k} \Big[ \gamma^*\Big(g^*(\tilde p)+1-2\tilde p\Big)+\tilde p\Big] \\
Var(p|\xi_k=\tilde p) &= \frac{1}{c_k^2} \Big[ \gamma^{*2} \tilde p(1-\tilde p) ({g^*}'(\tilde p)-2) +\gamma^*\tilde p(1-\tilde p) \Big]. \end{split} \end{equation*} Consequently, \begin{equation} \begin{split}
E(p|\tilde p, \mathcal{I}=k)&=E(p|\xi_k=\tilde p)=\frac{1}{c_k} \Big[ \gamma^*\Big(g^*(\tilde p)+1-2\tilde p\Big)+\tilde p\Big] \\
Var(p|\tilde p, \mathcal{I}=k) &= Var(p|\xi_k=\tilde p)=\frac{1}{c_k^2} \Big[ \gamma^{*2} \tilde p(1-\tilde p) ({g^*}'(\tilde p)-2) +\gamma^*\tilde p(1-\tilde p) \Big]. \end{split} \end{equation}
Applying the law of total expectation and using the fact that~$\mathcal{I}$ and~$\tilde p$ are independent, we derive
$$E(p|\tilde p)= \sum_{k=1}^Kw_kE(p|\tilde p,\mathcal{I}=k)= \sum_{k=1}^K \frac{w_k}{c_k} \Big[ \gamma^*\Big( g^{*}(\tilde p)+1-2\tilde p \Big) +\tilde p \Big]. $$ By the law of total variance, we also have \begin{eqnarray*}
Var(p|\tilde p)&=&
\sum_{k=1}^Kw_k Var(p|\tilde p,\mathcal{I}=k)+\sum_{k=1}^Kw_kE^2(p|\tilde p,\mathcal{I}=k)-\big[\sum_{k=1}^Kw_kE(p|\tilde p,\mathcal{I}=k)\big]^2\\ &=& \sum_{k=1}^K \frac{w_k}{c_k^2} \big[ \gamma^{*2} \tilde p(1-\tilde p) ({g^*}'(\tilde p)-2) +\gamma^*\tilde p(1-\tilde p) \big] + \big[ \gamma^*(g^*(\tilde p)+1-2\tilde p)+\tilde p\big] ^2 \big[ \sum_{k=1}^K\frac{w_k}{c_k^2}- \big( \sum_{k=1}^K\frac{w_k}{c_k} \big)^2\big]. \end{eqnarray*} To complete the proof, we use the formulas $$\mu_i=\tilde p_i+\gamma^*[g^*(\tilde p_i)+1-2\tilde p_i]\qquad\text{and}\qquad \sigma_i^2=\gamma^*\tilde p_i(1-\tilde p_i)+\gamma^{*2}\tilde p_i(1-\tilde p_i)[g^{*'}(\tilde p_i)-2]$$ to rewrite the above expressions as
$E(p_i|\tilde p_i) = \mu_i \sum_{k=1}^K {w_k}/{c_k}$ and \begin{equation*}
Var(p_i|\tilde p_i)=\sum_{k=1}^K \frac{w_k}{c_k^2} \sigma_i^2 + \mu_i^2\sum_{k=1}^K \frac{w_k}{c_k^2} - \mu_i^2\Big( \sum_{k=1}^K \frac{w_k}{c_k} \Big)^2 = (\sigma_i^2+\mu_i^2)\sum_{k=1}^K \frac{w_k}{c_k^2} -\mu_i^2 \Big( \sum_{k=1}^K \frac{w_k}{c_k} \Big)^2. \end{equation*}
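As a quick consistency check, note that for $K=1$ and $c_1=1$ we have $\sum_{k=1}^K w_k/c_k=\sum_{k=1}^K w_k/c_k^2=1$, so the two formulas reduce to
\begin{equation*}
E(p_i|\tilde p_i)=\mu_i
\qquad\text{and}\qquad
Var(p_i|\tilde p_i)=(\sigma_i^2+\mu_i^2)-\mu_i^2=\sigma_i^2,
\end{equation*}
which recovers the single-component case of Theorem~\ref{EandVar}.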
\section{Supplementary Results} \label{append.sup.res}
\begin{lemma} \label{lem1}
Under the conditions of Theorem~\ref{rate.thm}, there exists a function $g^*_N\in\mathcal{G}_N$, such that $\|g^*_N-g^*\|^2=O_p(\lambda_n^{2})$ and \begin{equation*}
\|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le
O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big). \end{equation*} \end{lemma} \noindent\textbf{Proof of Lemma~\ref{lem1}}. We will use the empirical process theory notation and write ${\tilde P}_n g$ and~${\tilde P}g$ for $(1/n)\sum_{i=1}^n g(\tilde p_i)$ and $\int_0^1 g(\tilde p)f^*(\tilde p)d\tilde p$, respectively. Using the new notation, criterion~(\ref{risk.criterion}) can be written as follows: \begin{equation*} Q_n(g) = {\tilde P}_n g^2 + {\tilde P}_n s_g + \lambda_n^2 I^2(g). \end{equation*}
As we showed in the proof of Theorem~\ref{risk.lemma}, equality ${\tilde P}g^2 + {\tilde P} s_g = \|g-g^*\|^2$ holds for every candidate function~$g\in\mathcal{G}_N$. Consequently, \begin{equation*}
Q_n(g) = \|g-g^*\|^2 + ({\tilde P}_n-{\tilde P})g^2 + ({\tilde P}_n-{\tilde P}) s_g + \lambda_n^2 I^2(g). \end{equation*}
Let $g^*_N$ be a function in $\mathcal{G}_N$ that interpolates~$g^*$ at points $\{0,\tilde p_1,...,\tilde p_n,1\}$, with two additional constraints: ${g^*_N}'(0)={g^*}'(0)$ and ${g^*_N}'(1)={g^*}'(1)$.
A standard partial integration argument \citep[similar to that in ][for example]{green1993nonparametric} shows that $I(g^*_N)\le I(g^*)$, which also implies that ${g^*_N}'$ is uniformly bounded. Furthermore, we have $\|g^*_N-g^*\|_{\infty}=O_p(\log(n)/n)$ by the maximum spacing results for the uniform distribution \citep[for example]{shorack2009empirical}, the boundedness away from zero assumption on~$f^*$ and the boundedness of ${g^*_N}'$. Consequently, $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$.
Because $Q_n(\hat g)\le Q_n(g^*_N)$, we then have \begin{equation} \label{basic_ineq}
\|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le ({\tilde P}_n-{\tilde P})[{g^*_N}^2-\hat g^2] + ({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] + \lambda_n^2 [I^2(g^*_N)+1]. \end{equation} Note that \begin{equation} \label{g-sq.ineq} ({\tilde P}_n-{\tilde P})[{g^*_N}^2-\hat g^2] =-({\tilde P}_n-{\tilde P})[\hat g-{g^*_N}]^2 - ({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}]. \end{equation} Applying Lemma~17 in \cite{meier2009high}, in conjunction with Corollary~5 from the same paper, in which we take $\gamma=2/5$ and $\lambda=n^{-1/2}$, we derive \begin{equation} \label{nu_n_dg2}
({\tilde P}_n-{\tilde P})[{g^*_N}-\hat g]^2 = O_p\Big(n^{-1/5}\|\hat g - g^*_N\|^2\Big) + O_p\Big(n^{-1} I^2(\hat g - g^*_N) \Big). \end{equation} Applying Corollary 5 in \cite{meier2009high} with the same~$\gamma$ and~$\lambda$ yields
$$({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}] = O_p\Big(n^{-2/5}\sqrt{\|{g^*_N}[\hat g - g^*_N]\|^2 + n^{-4/5} I^2({g^*_N}[\hat g - g^*_N])}\Big).$$ Using Lemma 10.9 in \cite{van2000applications} to express the $L_2$ norm of the first derivative in terms of the norms of the second derivative and the original function, we derive \begin{equation} \label{nu_n_dg}
({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}] = O_p\Big(n^{-2/5}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/5} I(\hat g - g^*_N) \Big). \end{equation} Applying Corollary 5 in \cite{meier2009high} with $\gamma=2/3, \lambda=n^{-3/14}$ and using Lemma 10.9 in \cite{van2000applications} again, we derive \begin{equation*}
({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-3/7}\|s_{g^*_N}-s_{\hat g}\|\Big) + O_p\Big(n^{-4/7} I(\hat g - g^*_N) \Big). \end{equation*} Hence, by Lemma 10.9 in \cite{van2000applications}, \begin{equation*}
({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-3/7}\|\hat g - g^*_N\|^{1/2}I^{1/2}(\hat g - g^*_N)\Big) + O_p\Big(n^{-4/7} I(\hat g - g^*_N) \Big), \end{equation*} which leads to \begin{equation}
\label{nu_n_ds}({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7}I(g^*_N)\Big). \end{equation}
Combining (\ref{basic_ineq})-(\ref{nu_n_ds}), and noting the imposed assumptions on~$\lambda_n$, we arrive at \begin{equation*}
\|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le
O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big), \end{equation*} which completes the proof of Lemma~\ref{lem1}.
\begin{lemma} \label{lem.mom.appr} Under the conditions of Theorem~\ref{bias.thm}, \begin{equation*}
\sigma^2=O({\gamma^*})
\quad\text{and}\quad s_k=O({\gamma^*}^{3/2}),\;\;\text{for}\;\; k\ge3. \end{equation*} \end{lemma}
\noindent \textbf{Proof of Lemma~\ref{lem.mom.appr}}. We first show that $E(t-\tilde p|\tilde p)=O(\sqrt{\gamma^*})$ as $\gamma^*$ tends to zero, where we write~$t$ for the quantity $\alpha\gamma^*=h_{\theta}^{-1}(p)$. This result will be useful for establishing the stated bound for~$s_k$.
Throughout the proof we use the symbol $\gtrsim$ to denote inequality $\ge$ up to a positive multiplicative constant that does not depend on~$\gamma^*$; the symbol $\lesssim$ is used analogously. We also write $\tau=1/\gamma^*$, $q=1-t$ and $\tilde q=1-\tilde p$. We write $f_c(\tilde p)$ for the conditional density of $\tilde p$ given $t=c$, write $f(\tilde p)$ for the marginal density of $\tilde p$, and write $m_{\theta}(t)$ for the marginal density of $t$. In the new notation, we have \begin{equation*}
E\big(|t-\tilde p|\big|\tilde p\big)={\int\limits_{0}^{1}|t-\tilde p|f_{t}(\tilde p)m_{\theta}(t)[f(\tilde p)]^{-1}dt}. \end{equation*}
Using Stirling's approximation for the Gamma function, $\Gamma(x)=e^{-x}x^{x-1/2}(2\pi)^{1/2}[1+O(1/x)]$, and applying the bound $x\Gamma(x)=O(1)$ when~$t$ is close to zero or one, we derive the following bounds as $\tau$ tends to infinity: \begin{eqnarray*}
\sqrt{\tau}E\big(|t-\tilde p|\big|\tilde p\big)
&=&\int\limits_0^1 \sqrt{\tau}|t-\tilde p|\frac{\Gamma(\tau)}{\Gamma(t\tau)\Gamma(q\tau)}\tilde p^{t\tau-1}\tilde q^{q\tau-1}m_{\theta}(t)[f(\tilde p)]^{-1}dt\\
&\lesssim&\int\limits_0^1 \sqrt{\tau}|t-\tilde p|\frac1{\sqrt{2\pi}}[\tilde p/t]^{t\tau}(\tilde q/q)^{q\tau}\sqrt{tq\tau}m_{\theta}(t)[f(\tilde p)]^{-1}[\tilde p\tilde q]^{-1}dt\\
&\lesssim& \int\limits_{0}^{1} \sqrt{\tau}|t-\tilde p|e^{-\frac{\tau(t-\tilde p)^2}{18}}\sqrt{\tau}dt. \end{eqnarray*} Implementing a change of variable, $v=\sqrt{\tau}(t-\tilde p)$, we derive \begin{equation*}
\sqrt{\tau}\big|E\big(t-\tilde p\big|\tilde p\big)\big|\lesssim \int\limits_{\mathbb{R}} |v|e^{-v^2/18}dv=O(1). \end{equation*}
Consequently, $E(t-\tilde p|\tilde p)=O(1/\sqrt{\tau})=O(\sqrt{\gamma^*})$.
We now bound $E\big([t-\tilde p]^2\big|\tilde p\big)$ using a similar argument. Following the arguments in the derivations above, we arrive at \begin{eqnarray*}
\tau E\big([t-\tilde p]^2\big|\tilde p\big)&\lesssim& \int\limits_{0}^{1} \tau(t-\tilde p)^2\frac1{\sqrt{2\pi}}e^{-\frac{\tau(t-\tilde p)^2}{18}}\sqrt{\tau}m_{\theta}(t)dt. \end{eqnarray*} Implementing a change of variable, $v=\sqrt{\tau}(t-\tilde p)$, we conclude that \begin{equation*}
\tau E\big([t-\tilde p]^2\big|\tilde p\big)\lesssim \int\limits_{\mathbb{R}} v^2e^{-v^2/18}dv = O(1). \end{equation*}
Thus, we have established \begin{equation} \label{s_k.bnds.12}
E\big([t-\tilde p]^k\big|\tilde p\big)=O({\gamma^*}^{k/2}), \quad\text{for}\quad k\in\{1,2\}. \end{equation}
Analogous arguments lead to bounds $E\big([t-\tilde p]^k\big|\tilde p\big)=O({\gamma^*}^{3/2})$ for $k\ge3$. We complete the proof of the lemma by noting that \begin{equation*}
s_k=O\Big(E\Big([t-\tilde p]^k\Big|\tilde p\Big)\Big)+O\big({\gamma^*}^{3/2}\big), \quad\text{for}\quad k\ge 2. \end{equation*}
When $k=2$ the above approximation follows from $\sigma^2\le E\big([t-\tilde p]^2\big|\tilde p\big)$, and when $k=3$ it follows from~(\ref{s_k.bnds.12}) and \begin{eqnarray*}
s_3&=&E\Big([t-\tilde p]^3\Big|\tilde p\Big)+3E\Big([t-\tilde p]^2\Big|\tilde p\Big)E\Big(\tilde p-t\Big|\tilde p\Big)
+3E\Big(t-\tilde p\Big|\tilde p\Big)E^2\Big(\tilde p-t\Big|\tilde p\Big)+E^3\Big(\tilde p-t\Big|\tilde p\Big)\\
&=&E\Big([t-\tilde p]^3\Big|\tilde p\Big)+O\big({\gamma^*}^{3/2}\big). \end{eqnarray*} The derivations for~$k\ge4$ are analogous.
\end{document} | arXiv |
\begin{document}
\title{On the Injectivity of Mean Value Mapping between Convex Quadrilaterals}
\author[Dieci]{Luca Dieci} \address{School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332 U.S.A.} \email{[email protected]} \author[Difonzo]{Fabio V. Difonzo} \address{Dipartimento di Matematica, Universit\`a degli Studi di Bari Aldo Moro, Via E. Orabona 4, 70125 Bari, Italy} \email{[email protected]}
\subjclass{52A21, 26E25}
\keywords{mean-value coordinates, convex quadrilaterals}
\begin{abstract}
We prove that Mean Value mapping between convex quadrilaterals is injective, affirmatively proving a conjecture stated in \cite{FloaterKosinka2010}. \end{abstract}
\maketitle
\pagestyle{myheadings} \thispagestyle{plain} \markboth{L. DIECI AND F.V. DIFONZO}{ON THE INJECTIVITY OF MEAN VALUE MAPPING}
\section{Introduction}
Let $n,d$ be integers, with $n\geq d$ and let \[ \Lambda\mathrel{\mathop:}=\{\lambda\in\mathbb{R}^n\,:\,\mathds{1}^\top\lambda=1\},\quad \Lambda_+\mathrel{\mathop:}=\{\lambda\in[0,1]^n\,:\,\mathds{1}^\top\lambda=1\}, \] where $\mathds{1}$ is the vector in $\mathbb{R}^n$ with all components equal to one. It follows that $\Lambda_+\subseteq\Lambda$.
Given a convex polytope $P=\mathrm{conv}\{v_i\}_{i=1}^n$ in $\mathbb{R}^d$, where the $v_i$'s are affinely independent (see \cite{grunbaum2003convex}), a set of \emph{nonnegative generalized barycentric coordinates} for a point $p\in P$ (e.g., see \cite{FloaterHormannKos2006}) is an $n$-tuple $\mu \in \Lambda_+$ satisfying the underdetermined, full rank, linear system \begin{equation}\label{eq:V1gbc} \bmat{V \\ \mathds{1}^\top}\mu=\bmat{p \\ 1},\quad V\mathrel{\mathop:}=\bmat{v_1 & \cdots & v_n}\ . \end{equation} If there is some $i=1,\ldots,n$ such that $\mu_i<0$ then we call $\mu$ a set of \emph{generalized barycentric coordinates} for $p\in P$.
Hereafter, $\mathrm{conv} V$ will indicate the convex hull of the $v_i$'s, that is the polytope $P$. Further, let $\nu\in\mathbb{R}^{n\times(n-d-1)}$ be such that \begin{equation}\label{eq:nu} \langle\nu\rangle=\ker\bmat{V \\ \mathds{1}^\top}. \end{equation}
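For instance, for the unit square with vertices $v_1=(0,0)$, $v_2=(1,0)$, $v_3=(1,1)$, $v_4=(0,1)$ (so that $n=4$ and $d=2$), one may take
\begin{equation*}
\nu=\bmat{1 & -1 & 1 & -1}^\top,
\end{equation*}
since $v_1-v_2+v_3-v_4=0$ and $\mathds{1}^\top\nu=0$; this alternating sign pattern is established for general convex quadrilaterals in Theorem~\ref{thm:mom_MV} below.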
Now, take two different sets of $n$ affinely independent vertices in $\mathbb{R}^d$, $V$ and $\tilde V$, and consider the map \begin{equation}\label{eq:barycentricMapping} f:P \to \tilde P ,\quad p \mapsto \tilde p\mathrel{\mathop:}=\tilde V\mu(p) \end{equation} where $\mu(p)$ is a set of nonnegative generalized barycentric coordinates for $p\in\ P$ and $\tilde P\mathrel{\mathop:}=\mathrm{conv}\tilde V$. Such a map $f$ is called \emph{barycentric mapping} between the polytopes $P,\,\tilde P$. The problem addressed in \cite{FloaterKosinka2010} is whether or not this map is injective.
In this work, we will restrict to the case of a polygon $P$ (that is, $d=2$) and to the barycentric coordinates given by the {\bf mean-value coordinates}. These were originally proposed by M. Floater in 2005 (see \cite{floater2003,HormannFloater2006}), who defined them, for any $p\in P$, as \begin{equation}\label{eq:MVcoordinates} \lambda_{i}(p)\mathrel{\mathop:}=\frac{w_i(p)}{\sum_{j=1}^{n}w_j(p)},\quad w_i(p)\mathrel{\mathop:}=\frac{\tan\left(\frac{\alpha_{i-1}}{2}\right)+\tan\left(\frac{\alpha_{i}}{2}
\right)}{\|v_i-p\|},\,\,i=1,\ldots,n, \end{equation} where the angles $\alpha_i$'s are as in Figure \ref{fig:polytope} and the norm is the Euclidean norm. \begin{wrapfigure}{r}{0.5\textwidth} \centering \begin{tikzpicture}[scale=0.50]\usetikzlibrary{calc} \coordinate [label=below:$v_{i-1}$] (A) at (-2.5,-2.5);
\coordinate [label=below:$v_{i}$] (B) at (1,-2); \coordinate [label=right:$v_{i+1}$] (C) at (3,1); \coordinate (D) at (0,3.5); \coordinate (E) at (-4,4); \coordinate (F) at (-5,1.5); \coordinate (G) at (-5,-0.5); \coordinate [label=above:$p$] (P) at (-1,1); \fill (A) circle (1.4pt) (B) circle (1.4pt) (C) circle (1.4pt) (D) circle (1.4pt) (E) circle (1.4pt) (F) circle (1.4pt) (G) circle (1.4pt) (P) circle (1.4pt); \draw (A)--(B)--(C); \draw[dotted] (C)--(D)--(E)--(F)--(G)--(A); \draw (P)--(A) (P)--(B) (P)--(C); \draw [bend right] ($(P)!0.4!(A)$) to node [pos=0.5,below] {$\alpha_{i-1}$} ($(P)!0.4!(B)$) [bend right] ($(P)!0.3!(B)$) to node [pos=0.5,right] {$\alpha_{i}$} ($(P)!0.3!(C)$); \end{tikzpicture} \caption{General convex polygon.} \label{fig:polytope} \end{wrapfigure}
As already thoroughly explained in \cite{FloaterKosinka2010}, any map of the form \eqref{eq:barycentricMapping} injectively maps the boundary of $P$ to the boundary of $\tilde P$. In the same paper \cite{FloaterKosinka2010}, Floater and Kosinka showed that if $n\ge 5$, then the mean-value coordinates are not injective, unlike other nonnegative barycentric coordinates, such as Wachspress coordinates \cite{wachspress1975}, that are injective for any pair of strictly convex polygons. Yet, Floater and Kosinka left open the important case of quadrilaterals (i.e, $n=4$), which can be stated as follows. \begin{conj}\label{conj} Let $v_i,i=1,2,3,4$ be such that no three of them are aligned, and let the same hold for $\tilde v_i,i=1,2,3,4$. Then the mapping \[ f:\mathrm{int} P \to \mathrm{int}\tilde P ,\quad p \mapsto \tilde p\mathrel{\mathop:}=\tilde V\mu(p). \] relative to the mean-value coordinates $\mu(p)$ between convex quadrilaterals is injective. \end{conj} \begin{rem}\label{rem:int} The fact that \eqref{eq:barycentricMapping} maps $\mathrm{int}P$ to $\mathrm{int}\tilde P$ is a consequence of the fact that $\mu(p)$ are the mean value coordinates, in particular all its components are positive, and of the following reasoning. Let $f(p)$ belong to the boundary of $\tilde P$, say the edge $\tilde v_1\tilde v_2$ without loss of generality, and let $\tilde\lambda=\bmat{1-\alpha \\ \alpha \\ 0 \\ 0}\in\mathbb{R}^4$ be its barycentric coordinates for some $\alpha\in[0,1]$. Therefore $\tilde V\mu(p)=\tilde V\tilde\lambda$, and thus there must exist some $c\neq0$ such that \[ \mu(p)=\tilde\lambda+c\tilde\nu, \] where $\tilde\nu$ spans $\ker\bmat{\tilde V \\ \mathds{1}^\top}$. According to Theorem \ref{thm:mom_MV} below it follows that $\mathrm{sgn}\mu_3(p)\neq\mathrm{sgn}\mu_4(p)$, which is not possible. \end{rem} Although the authors of \cite{FloaterKosinka2010} reported on extensive numerical simulations leading them to believe the conjecture to be true, a rigorous proof of Conjecture \ref{conj} is still lacking and our purpose in this note is to prove that Conjecture \ref{conj} holds true.
Our proof of Conjecture \ref{conj} is motivated by the following result, that gives an equivalence between the mean-value coordinates on convex quadrilaterals and the solution of the following regularized linear system \begin{equation}\label{eq:momSys} {\footnotesize
\bmat{V\\ \mathds{1}^\top \\ d^\top} \lambda(p)=\bmat{p \\ 1 \\ 0},\quad d(p)\mathrel{\mathop:}=\bmat{\|v_1-p\| & -\|v_2-p\| & \|v_3-p\| & -\|v_4-p\|}^\top. } \end{equation} \begin{thm}\label{thm:mom_MV} For each $p\in Q$ the system \eqref{eq:momSys} is nonsingular, and its unique solution $\lambda_{MV}(p)$ is given by the mean-value coordinates \eqref{eq:MVcoordinates}. In particular, all the components of $\lambda_{MV}(p)$ are nonnegative for $p\in Q$, and are strictly positive for $p\in\mathrm{int}Q$. \\ Moreover, the general solution $\mu$ to \eqref{eq:V1gbc} can be written as $\mu=\lambda_{MV}+c\nu$, where $\nu\in\mathbb{R}^4$ is as in \eqref{eq:nu} and $\mathrm{sgn}(\nu)=\pm\bmat{1 & -1 & 1 & -1}^\top$. \end{thm} \begin{proof} What is left to prove is that $\mathrm{sgn}(\nu)=\bmat{1 & -1 & 1 & -1}^\top$, as the remaining parts come from \cite[Theorem 3.9]{dd}. \\ Let $\tau$ be the barycentric coordinates of $v_4$ with respect to the triangle $\mathrm{conv}\{v_1,v_2,v_3\}$, that is the unique solution to \[ \bmat{v_1 & v_2 & v_3 \\ 1 & 1 & 1}\tau=\bmat{v_4 \\ 1}. \] Using Cramer's rule we get \[ \tau=\frac{1}{\mathcal{A}_{123}}\bmat{\mathcal{A}_{423} \\ \mathcal{A}_{143} \\ \mathcal{A}_{124}}, \] where $\mathcal{A}_{ijk}\mathrel{\mathop:}=\frac12\det\bmat{v_i & v_j & v_k \\ 1 & 1 & 1}$ represents the signed area of the triangle $v_i,v_j,v_k$. Since $Q$ is convex, it then follows that $\mathrm{sgn}(\tau)=\bmat{1 & -1 & 1}$ (e.g., see \cite{coxeter1989introduction}). Let $\nu'\mathrel{\mathop:}=\bmat{\tau \\ -1}$. Then \[ \bmat{V \\ \mathds{1}^\top}\nu'=\bmat{0 \\ 0 \\ 0}, \] which implies that $\nu'\in\ker\bmat{V \\ \mathds{1}^\top}$. Thus, there exists $\alpha\neq0$ such that $\nu'=\alpha\nu$, and the claim follows. \end{proof} Hereafter, we choose $\nu$ so that $\mathrm{sgn}(\nu)=\bmat{1 & -1 & 1 & -1}^\top$. \\ We are going to need the following. \begin{prop}\label{prop:momGamma} For any $p\in\mathrm{int}Q$ there exist $\alpha_p,\beta_p,\varepsilon_p\in(0,1)$ such that \begin{equation}\label{eq:pConvAb} p=(1-\varepsilon_p)a_p+\varepsilon_p b_p, \end{equation} with \begin{align*} a_p &\mathrel{\mathop:}= (1-\alpha_p)v_1+\alpha_p v_2, \\ b_p &\mathrel{\mathop:}= (1-\beta_p)v_3+\beta_p v_4 \end{align*} and \begin{equation}\label{eq:momCond} d(p)^{\top}\lambda(p)=0,\quad\lambda(p)\mathrel{\mathop:}=\bmat{(1-\varepsilon_p)(1-\alpha_p) & (1-\varepsilon_p)\alpha_p & \varepsilon_p(1-\beta_p) & \varepsilon_p\beta_p}^{\top}, \end{equation} where $d(p)$ is as in \eqref{eq:momSys}. \\ Analogously, there exist $\gamma_p,\delta_p,\varphi_p\in(0,1)$ such that \[ p=(1-\varphi_p)c_p+\varphi_p d_p, \] with \begin{align*} c_p &\mathrel{\mathop:}= (1-\gamma_p)v_2+\gamma_p v_3, \\ d_p &\mathrel{\mathop:}= (1-\delta_p)v_4+\delta_p v_1 \end{align*} and \begin{equation}\label{eq:momCond2} d(p)^{\top}\lambda(p)=0,\quad\lambda(p)\mathrel{\mathop:}=\bmat{\varphi_p\delta_p & (1-\varphi_p)(1-\gamma_p) & (1-\varphi_p)\gamma_p & \varphi_p(1-\delta_p)}^{\top}, \end{equation} where $d(p)$ is as in \eqref{eq:momSys}. \end{prop} \begin{proof} Let $\lambda(p)$ be the unique solution to \eqref{eq:momSys}. 
Since $\mathds{1}^\top\lambda(p)=1$, $d(p)^\top\lambda(p)=0$ and $p=V\lambda(p)=\big(\lambda_1(p)v_1+\lambda_2(p)v_2\big)+\big(\lambda_3(p)v_3+\lambda_4(p)v_4\big)$, the claim is proved by setting \begin{subequations}\label{eq:alphaBetaGamma} \begin{align} \alpha_p &\mathrel{\mathop:}= \frac{\lambda_{2}(p)}{\lambda_{1}(p)+\lambda_{2}(p)},\,\, \beta_p\mathrel{\mathop:}=\frac{\lambda_{4}(p)}{\lambda_{3}(p)+\lambda_{4}(p)},\,\, \varepsilon_p\mathrel{\mathop:}=\lambda_{3}(p)+\lambda_{4}(p), \\ \gamma_p &\mathrel{\mathop:}= \frac{\lambda_{3}(p)}{\lambda_{2}(p)+\lambda_{3}(p)},\,\, \delta_p\mathrel{\mathop:}=\frac{\lambda_{1}(p)}{\lambda_{4}(p)+\lambda_{1}(p)},\,\, \varphi_p\mathrel{\mathop:}=\lambda_{4}(p)+\lambda_{1}(p), \end{align} \end{subequations} which are all well defined since $\lambda(p)>0$ componentwise from Theorem \ref{thm:mom_MV}. \end{proof}
In order to prove the claim, we need some properties of triangular coordinates on a convex quadrilateral $Q$. To fix ideas, let us consider the triangle $\mathcal{T}_4\mathrel{\mathop:}=\mathrm{conv}\{v_1,v_2,v_3\}$ (and similarly for the other $\mathcal{T}_i,i=1,2,3$); the following holds, mutatis mutandis, for the other three triangles built from $Q$ by drawing its diagonals. \begin{defn} For each $p\in Q$, we define the \emph{triangular coordinates} $\tau^{4}(p)\in\Lambda$ as the unique solution to the linear system \[ \bmat{v_1 & v_2 & v_3 & v_4 \\ 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1}\tau=\bmat{p \\ 1 \\ 0}. \] \end{defn} Let us note that, as long as $\mathcal{T}_4$ has nonzero area, the system above is nonsingular; moreover, in this case its unique solution has the fourth component equal to zero; further, $\tau^4(p)\in\Lambda_+$ if and only if $p\in\mathcal{T}_4$.
It easily follows that \begin{align*} d(p)^\top\tau^i(p) &\neq 0,\quad p\in\textrm{int}Q,\,\,i=1,2,3,4, \\ d(p)^\top\tau^i(p) &= 0,\quad p\in v_jv_{j+1},\,\,i=1,2,3,4,\,\textrm{ and }j,j+1\neq i, \end{align*} because, otherwise, if we had $d(p)^\top\tau^i(p)=0$ for some $p\in\textrm{int}Q$, then $\tau^i(p)$ would solve the nonsingular system \eqref{eq:momSys} and hence, by Theorem \ref{thm:mom_MV}, $\tau^i(p)=\lambda_{MV}(p)$, which is not possible since $\lambda_{MV}(p)$ has no zero component in the interior of $Q$.
\section{Proof of Conjecture \ref{conj}}\label{sec:conj}
We are ready to prove the main result of the paper.
\begin{thm} Conjecture \ref{conj} is true. \end{thm} \begin{proof} Let $V,\tilde V$ be such that their respective convex hulls are the two strictly convex quadrilaterals $Q,\tilde Q$. Moreover, let $p,q\in\mathrm{conv} V$, $p\neq q$, and let $\tilde p\mathrel{\mathop:}=\tilde V\lambda(p),\tilde q\mathrel{\mathop:}=\tilde V\lambda(q)$, where $\lambda$ are the mean-value coordinates. Therefore there exist $c(p),c(q)\in\mathbb{R}$ such that \begin{align*} \lambda(p) &= \tau^4(p)+c(p)\nu, \\ \lambda(q) &= \tau^4(q)+c(q)\nu. \end{align*} Now, since $d(p)^\top\lambda(p)=0$ and $e_4^\top\tau^4(p)=0$, it follows that \[ c(p)=-\frac{d(p)^\top\tau^4(p)}{d(p)^\top\nu}=\frac{\lambda_4(p)}{\nu_4}, \] where $e_i$ is the $i$-th vector of the canonical base in $\mathbb{R}^4$. Thus \[ \lambda_4(p)=-\frac{d(p)^\top\tau^4(p)}{d(p)^\top\nu}\nu_4. \] The argument above holds for any $\tau^i(p),i=1,2,3$, so that we can conclude that \begin{equation}\label{eq:lambdaiNui} \lambda_i(p)=-\frac{d(p)^\top\tau^i(p)}{d(p)^\top\nu}\nu_i,\quad i=1,2,3,4. \end{equation} Since $\mathds{1}^\top\nu=0$, then \[ \sum_{i=1}^4\frac{1}{d(p)^\top\nu}\nu_i=0, \] and hence, from \eqref{eq:lambdaiNui}, \[ \sum_{i=1}^4\frac{1}{d(p)^\top\tau^i(p)}\lambda_{i}(p)=0. \] Thus, by Theorem \ref{thm:mom_MV} we conclude that, for some constant $l>0$, it must be \[
d_i(p)\mathrel{\mathop:}=\|p-v_i\|=(-1)^{i+1}\frac{l}{d(p)^\top\tau^i(p)}, \] which, from \eqref{eq:lambdaiNui}, gives \begin{equation}\label{eq:lambdaiNuidi}
\lambda_i(p)=\frac{l}{d_i(p)d(p)^\top\nu}|\nu_i|,\quad i=1,2,3,4. \end{equation} Then, plugging \eqref{eq:lambdaiNuidi} in \eqref{eq:alphaBetaGamma} we obtain \[
\alpha_p=\frac{d_1(p)|\nu_2|}{d_1(p)|\nu_2|+d_2(p)|\nu_1|},\quad
\beta_p=\frac{d_3(p)|\nu_4|}{d_3(p)|\nu_4|+d_4(p)|\nu_3|}. \] Now, from Proposition \ref{prop:momGamma}, since $f$ is linear and injective on the boundary of $Q$, it follows \begin{align*} f(p) &= (1-\varepsilon_p)f(a_p)+\varepsilon_pf(b_p)=(1-\varphi_p)f(c_p)+\varphi_pf(d_p), \\ f(q) &= (1-\varepsilon_q)f(a_q)+\varepsilon_qf(b_q)=(1-\varphi_q)f(c_q)+\varphi_qf(d_q). \end{align*} If $\alpha_p<\alpha_q$ and $\beta_p\geq\beta_q$, then $a_pb_p\cap a_qb_q=\emptyset$, and thus $f(a_p)f(b_p)\cap f(a_q)f(b_q)=\emptyset$, so that $f(p)\neq f(q)$. \\ Let us now assume that $\alpha_p<\alpha_q$, $\beta_p<\beta_q$ and, without loss of generality, $\gamma_p<\gamma_q$: if we prove that $\delta_p\geq\delta_q$ the claim is proved (see Figure \ref{fig:mom}). \\ \begin{figure}
\caption{Mean Value Barycentric Coordinates case: the configuration used in the proof of the main Theorem.}
\label{fig:mom}
\end{figure} Let us argue by contradiction, assuming that $\delta_p<\delta_q$. It is a simple computation that \[ \delta_p<\delta_q\Leftrightarrow\frac{d_4(p)}{d_1(p)}<\frac{d_4(q)}{d_1(q)}. \] Similarly, one computes that \begin{align*} \alpha_p<\alpha_q\Leftrightarrow\frac{d_1(p)}{d_2(p)}<\frac{d_1(q)}{d_2(q)}, \\ \beta_p<\beta_q\Leftrightarrow\frac{d_3(p)}{d_4(p)}<\frac{d_3(q)}{d_4(q)}, \\ \gamma_p<\gamma_q\Leftrightarrow\frac{d_2(p)}{d_3(p)}<\frac{d_2(q)}{d_3(q)}. \end{align*} Multiplying the four inequalities above, both the left-hand and the right-hand sides telescope to $1$, so we obtain $1<1$, which is a contradiction, and this concludes the proof. \end{proof}
\section{Injectivity of bilinear barycentric mapping on quadrilaterals}
Using a construction analogous to that in Section \ref{sec:conj}, here we prove that bilinear barycentric mappings between convex quadrilaterals are also injective.
Let $Q\mathrel{\mathop:}=\mathrm{conv}\{v_i\,:\,i=1,2,3,4\}$ be a strictly convex quadrilateral. It is known that, given $p\in Q$, there exist unique $\alpha,\beta\in[0,1]$ such that the bilinear barycentric coordinates of $p$ are given by \[ \lambda(p)=\bmat{(1-\alpha)(1-\beta) \\ \alpha(1-\beta) \\ \alpha\beta \\ (1-\alpha)\beta}. \] This implies that, letting \[ a_p\mathrel{\mathop:}=(1-\alpha)v_1+\alpha v_2,\quad b_p\mathrel{\mathop:}=(1-\alpha)v_4+\alpha v_3, \] we have $p=(1-\beta)a_p+\beta b_p$. It is an easy consequence of uniqueness that such $a_p\in v_1v_2$, $b_p\in v_4v_3$ are uniquely determined.
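Indeed, grouping the terms of $V\lambda(p)=\lambda_1(p)v_1+\lambda_2(p)v_2+\lambda_3(p)v_3+\lambda_4(p)v_4$ gives
\begin{equation*}
V\lambda(p)=(1-\beta)\big[(1-\alpha)v_1+\alpha v_2\big]+\beta\big[\alpha v_3+(1-\alpha)v_4\big]=(1-\beta)a_p+\beta b_p=p.
\end{equation*}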
Now, let $p,q\in Q,p\neq q$. Then there exist unique $\alpha_p,\beta_p,\alpha_q,\beta_q$ such that, letting $a_p\mathrel{\mathop:}=(1-\alpha_p)v_1+\alpha_p v_2, b_p\mathrel{\mathop:}=(1-\alpha_p)v_4+\alpha_pv_3, a_q\mathrel{\mathop:}=(1-\alpha_q)v_1+\alpha_q v_2, b_q\mathrel{\mathop:}=(1-\alpha_q)v_4+\alpha_qv_3$, we have \[ p=(1-\beta_p)a_p+\beta_p b_p,\quad q =(1-\beta_q)a_q+\beta_q b_q. \] Since $p\neq q$, we have, without loss of generality, that $\alpha_p<\alpha_q$ or $\alpha_p=\alpha_q$ and $\beta_p\neq\beta_q$. \\ If $\alpha_p=\alpha_q$ and $\beta_p\neq\beta_q$, then $a_p=a_q=a$ and $b_p=b_q=b$ and, since $f$ is linear and injective on the boundary of $Q$, it follows \[ f(p)=(1-\beta_p)f(a)+\beta_p f(b)\neq(1-\beta_q)f(a)+\beta_q f(b)=f(q), \] since the mapping is restricted on the segment $ab$, and the claim is proved. \\
\begin{figure}
\caption{Bilinear Barycentric Coordinates case, with $\alpha_p=0.2,\alpha_q=0.6$.}
\label{fig:bilinear}
\end{figure} If $\alpha_p<\alpha_q$ then $a_p$ would precede $a_q$ on $v_1v_2$ and $b_p$ would precede $b_q$ on $v_4v_3$ (see Figure \ref{fig:bilinear}), and thus the segments $a_pb_p$ and $a_qb_q$ do not intersect each other in $Q$, implying that neither do $f(a_p)f(b_p), f(a_q)f(b_q)$. Therefore, if by contradiction $f(p)=f(q)$, it would follow \[ f(a_p)f(b_p)\cap f(a_q)f(b_q)\neq\emptyset, \] which is not possible, and the claim is proved.
Let us observe that bilinear barycentric coordinates, as well as Wachspress coordinates, are differentiable.
\section*{Acknowledgments} FVD has been supported by \textit{REFIN} Project, grant number 812E4967, and by INdAM-GNCS 2023 Project, grant number CUP$\_$E53C22001930001.
\end{document} | arXiv |
How many perfect squares are two-digit and divisible by $3?$
Recall that no perfect squares are negative, because the square of any real number is nonnegative (and $0^2=0$). Since $3^2=9$ has only one digit and $10^2=100$ has three digits, the two-digit perfect squares are exactly the squares of the integers from $4$ through $9$: \begin{align*}
4^2&=16\\
5^2&=25\\
6^2&=36\\
7^2&=49\\
8^2&=64\\
9^2&=81
\end{align*} Out of these six perfect squares, only $36=3\cdot12$ and $81=3\cdot27$ are divisible by $3.$ Note that if a perfect square $a^2$ is divisible by $3,$ then $a$ itself must be divisible by $3$ (as with $a=6$ and $a=9$ here), since $3$ is prime. Therefore, $\boxed{2}$ perfect squares are two-digit and divisible by $3.$
\begin{document}
\title{\bf About some exponential inequalities \\ related to the sinc function} \maketitle
\begin{center} {\em Marija Ra\v sajski, Tatjana Lutovac, Branko Male\v sevi\' c${}^{\;\mbox{\scriptsize $\ast$}}\!$} \end{center}
\begin{center} {\footnotesize \textit{ School of Electrical Engineering, University of Belgrade, \\[-0.25 ex] Bulevar kralja Aleksandra 73, 11000 Belgrade, Serbia}} \end{center}
\noindent {\small \textbf{Abstract.} {\small In this paper we prove some exponential inequalities involving the sinc function. We analyze and prove inequalities with constant exponents as well as inequalities with certain polynomial exponents.$\;$Also, we establish intervals in which these inequalities hold.}
\footnote{$\!\!\!\!\!\!\!\!\!\!\!\!\!\!$ {\scriptsize ${}^{\mbox{\scriptsize $\ast$}}$Corresponding author. \\[0.0 ex] Emails: \\[0.0 ex] {\em Marija Ra\v sajski} {\tt $<[email protected]$>$}, {\em Tatjana Lutovac} {\tt $<[email protected]$>$}, \\[0.0 ex] {\em Branko Male\v sevi\' c} {\tt $<[email protected]$>$}}}
{\footnotesize Keywords: Exponential inequalities, sinc function }
{\small \tt MSC: Primary 33B10; Secondary 26D05}
\section{Introduction}
Inequalities related to the sinc function, i.e. $\displaystyle \mbox{\rm sinc}\,x\!=\!\mbox{\small $\dfrac{\sin x}{x}$}$ \mbox{${\big (}\displaystyle x \neq 0 {\big )}$}, occur in many fields of mathematics and engineering \cite{D_S_Mitrinovic_1970}, \cite{Mortici_2011}, \cite{Rahmatollahi_DeAbreu_2012}, \cite{G_V_Milovanovic_2014}, \cite{Cloud_Drachman_Lebedev_2014}, \cite{RIM2018}, \cite{AIDE2018} such as {\sc Fourier}\ analysis and its applications, information theory, radio transmission, optics, signal processing, sound recording, etc.
\noindent The following inequalities are proved in \cite{Z.-H._Yang_2014}: \begin{equation} \cos ^{2}{\!\displaystyle\frac{x}{2}}\leq \displaystyle\frac{\sin {x}}{x}\leq \cos ^{3}{\!\displaystyle\frac{x}{3 }}\leq \displaystyle\frac{2+\cos {x}}{3} \label{Z-H-Jang-1} \end{equation} for every $x\in \left( 0,\pi \right). $
\noindent In \cite{Lutovac2017}, the authors considered
possible refinements of the inequality (\ref {Z-H-Jang-1}) by a real analytic function $\varphi _{a}(x)\!=\!\left( \displaystyle\frac{\sin x}{x}\right) ^{\!a}\!\!, $ for $x\!\in \!\left( 0,\pi \right) $ and parameter $a\!\in \!
\mathbb{R}
$, and proposed and proved the following inequalities:
\begin{statement}{\rm (\cite{Lutovac2017}, Theorem 10)} \label{Brankova-teorema} The following inequalities hold true, for every ${x \!\in\! \left(0, \pi\right)}$ and $a \!\in\! \displaystyle\left(1, \mbox{\footnotesize $\displaystyle\frac{3}{2}$}\right):$
\vspace*{-3.0 mm}
\begin{equation} \label{Z-H-Jang} \cos^{2}{\!\displaystyle\frac{x}{2}} \leq \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!a} \leq \displaystyle\frac{\sin{x}}{x}.
\end{equation} \end{statement}
In the paper \cite{Lutovac2017}, based on the analysis of the sign of the analytic function $$ F_a(x) = \left( \displaystyle\frac{\sin x}{x}\right) ^{\!a}-\cos ^{2}{\!\displaystyle\frac{x}{2}} $$ in the right neighborhood of zero, the corresponding inequalities for parameter values $\displaystyle a \geq \mbox{\footnotesize $\displaystyle\frac{3}{2}$}$ are discussed.
In this paper, in subsection 3.1, using the power series expansions and the application of the {\sc Wu}-{\sc Debnath} theorem, we prove that the inequality (\ref{Z-H-Jang}) holds for $a = \mbox{\footnotesize $\displaystyle\frac{3}{2}$}$. At the same time, this proof represents another proof of Statement~\ref{Brankova-teorema}.
Also, we analyze the cases $a\in \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$ and $\displaystyle a \geq 2$ and we prove the corresponding inequalities.
In subsection 3.2 we introduce and prove a new double-sided inequality of similar type involving polynomial exponents.
Finally, in subsection 3.3, we establish a relation between the cases of the constant and of the polynomial exponent.
\section{Preliminaries}
In this section we review some results that we use in our study.
In accordance with \cite{Gradshteyn-Ryzhik}, the following expansions hold: \begin{equation} \label{Series_ln_sin_x_over_x} \ln \frac{\sin x}{x} =
-\sum\limits_{k=1}^{\infty}{\frac{2^{2k-1}|\mbox{\bf B}_{2k}|}{k(2k)!}x^{2k}}, \qquad (0 < x < \pi), \end{equation} \begin{equation} \label{Series_ln_cos_x} \ln \cos x =
-\sum\limits_{k=1}^{\infty}{\frac{2^{2k-1}(2^{2k}-1)|\mbox{\bf B}_{2k}|}{k(2k)!}x^{2k}}, \qquad (-\pi/2 < x < \pi/2), \end{equation} where $\mbox{\bf B}_{i}$ ($i \!\in\! {\mathbb N}$) are {\sc Bernoulli}'s numbers.
The following theorem proved by {\sc Wu} and {\sc Debnath} in \cite{Wu_Debnath_2009}, is used in our proofs.
\noindent {\bf Theorem WD.} (\cite{Wu_Debnath_2009}, Theorem 2 ) \label{Debnath_Wu_T} {\em Suppose that $f(x)$ is a real function on $(a,b)$, and that $n$ is a positive integer such that $f ^{(k)}(a+), f^{(k)}(b-)$, $\left(k \!\in\! \{0,1,2, \ldots ,n\}\right)$ exist.
\noindent {\boldmath $(i)$} Supposing that $(-1)^{(n)} f^{(n)}(x)$~is~in\-cre\-asing on $(a,b)$, then for all $x \in (a,b)$ the following inequality holds$\,:$ \begin{equation} \label{Debnath_Wu_first} \begin{array}{c} \displaystyle\sum_{k=0}^{n-1}{\mbox{\small $\displaystyle\frac{f^{(k)}(b\mbox{\footnotesize $-$})}{k!}$}(x\!-\!b)^k} + \frac{1}{(a-b)^n} {\bigg (}\! f(a\mbox{\footnotesize $+$}) - \displaystyle\sum_{k=0}^{n-1}{\mbox{\small $\displaystyle\frac{(a\!-\!b)^{k}f^{(k)}(b\mbox{\footnotesize $-$})}{k!}$} \!{\bigg )} (x\!-\!b)^{n}} \\[2.5 ex] < f(x) < \displaystyle\sum_{k=0}^{n}{\frac{f^{(k)}(b\mbox{\footnotesize $-$})}{k!}(x\!-\!b)^{k}}. \end{array} \end{equation} Furthermore, if $(-1)^{n} f^{(n)}(x)$ is decreasing on $(a,b)$, then the reversed inequality of {\rm (\ref{Debnath_Wu_first})} holds.
\break
\noindent {\boldmath $(ii)$} Supposing that $f^{(n)}(x)$ is increasing on $(a,b)$, then for all $x \!\in\! (a,b)$ the following inequality also holds$\,:$ \begin{equation} \label{Debnath_Wu_second} \begin{array}{l} \displaystyle\sum_{k=0}^{n}{\frac{f^{(k)}(a\mbox{\footnotesize $+$})}{k!}(x-a)^{k}}
< f(x) < \\[2.5 ex] < \displaystyle \sum_{k=0}^{n-1}{\mbox{\small $\displaystyle\frac{f^{(k)}(a\mbox{\footnotesize $+$})}{k!}$}(x\!-\!a)^k} + \frac{1}{(b\!-\!a)^n} {\bigg (}\! f(b-) - \displaystyle\sum_{k=0}^{n-1}{\mbox{\small $\displaystyle\frac{(b-a)^{k}f^{(k)}(a\mbox{\footnotesize $+$})}{k!}$} \!{\bigg )} (x\!-\!a)^{n}}. \end{array} \end{equation} Furthermore, if $f^{(n)}(x)$ is decreasing on $(a,b)$, then the reversed inequality~of~\mbox{\rm (\ref{Debnath_Wu_second})} holds.}
\begin{remark} Note that inequalities $(\ref{Debnath_Wu_first})$ and $ (\ref{Debnath_Wu_second})$ hold for $n \in {\mathbb N}$ as well as for $n=0$. \\ Here, and throughout this paper, a sum where the upper bound of the summation is lower than the lower bound of the summation, is understood to be zero. \end{remark}
The following Theorem, which is a consequence of Theorem WD, was proved~in~\cite{JNSA2018}.
\begin{theorem}{\rm (\cite{JNSA2018}, Theorem 1)} \label{Natural_Extension_Theorem} Let the function $f\!:\!(a,b) \longrightarrow {\mathbb R}$ have the following power series expansion$\,:$ \begin{equation} f(x) = \displaystyle\sum_{k=0}^{\infty}{c_{k}(x-a)^k} \end{equation} for $x \!\in\! (a,b)$, where the sequence of coefficients $\{c_{k}\}_{k \in {\mathbb N}_0}$ has a finite number of non-positive members and their indices are in the set $J \!=\! \{j_0,\ldots,j_\ell\}$.
Then, for the function \begin{equation} F(x) = f(x)-\displaystyle\sum_{i=0}^{\ell}{c_{j_i}(x-a)^{j_i}} = \displaystyle\sum_{k \in {\mathbb N}_0 \backslash\!\;\! J}{c_{k}(x-a)^k}, \end{equation} and the sequence $\{C_{k}\}_{k \in {\mathbb N}_0}$ of the non-negative coefficients defined by$:$ \begin{equation} C_{k} = \left\{ \begin{array}{ccc} c_{k} \!&\!:\!&\! c_{k} > 0, \\[0.75 ex] 0 \!&\!:\!&\! c_{k} \leq 0; \end{array} \right. \end{equation} holds that$\,:$ \begin{equation} F(x) = \displaystyle\sum_{k=0}^{\infty}{C_{k}(x-a)^k}, \end{equation} for every $x \!\in\! (a,b)$.
It is also $F^{(k)}(a+)= k!\,C_{k}$ and the following inequalities hold$:$ \begin{equation} \! \begin{array}{l} \displaystyle\sum_{k=0}^{n}{C_k(x-a)^{k}} < F(x) < \\[3.0 ex] < \displaystyle\sum_{k=0}^{n-1}{C_k(x-a)^k} + \frac{1}{(b-a)^n} {\bigg (} F(b-) - \displaystyle\sum_{k=0}^{n-1}{C_k(b-a)^k} {\bigg )} (x-a)^{n}, \end{array} \end{equation} for every $x \in (a, b)$ and $n \in {\mathbb N}_0$, i.e.
\begin{equation} \!\! \begin{array}{l} \displaystyle \displaystyle\sum_{k=0}^{m}{\!C_k}{(x\!-\!a)^k} + \displaystyle\sum_{i=0}^{\ell}{\!c_{j_i}}{(x - a)^{j_i}} \, < \, f(x) \, < \\[3.0 ex] < \, \displaystyle \sum_{k = 0}^{m - 1}{\!C_k}{(x \!-\! a)^{k}} \!+\! \displaystyle\sum_{i = 0}^{\ell}{\!c_{j_i}}{(x \!-\! a)^{j_i}} \!+\! \displaystyle\frac{{(x \!-\! a)}^m}{{(b \!-\! a)}^m} \!\left(\!{f(b\mbox{\footnotesize $-$}) \!-\! \displaystyle\sum_{k=0}^{m-1}{\!C_k}{{(b\!-\!a)}^k} \!-\! \displaystyle\sum_{i=0}^{\ell}{\!c_{j_i}}{(b\!-\!a)}^{j_i}}\!\right) \end{array} \end{equation} for every $x \!\in\! (a,b)$ and $m > max\{j_0,\ldots,j_\ell\}$. \end{theorem}
\section{Main results} \subsection{Inequalities with constants in the exponents} First, we consider a connection between the number of zeros of a real analytic function and some properties of its derivatives. It is well known that the zeros of a non-constant analytic function are isolated \cite{Godement_2004}, see also \cite{Krantz_Parks_1992} and \cite{Malesevic2016}.
We prove the following assertion: \begin{theorem} \label{exactly_one_zero} Let $f\!: (0, c) \longrightarrow {\mathbb R}$ be real analytic function such that $f^{(k)}(x) > 0$ for $x \in (0,c)$ and $k= m, m+1, \ldots\, , {\big (}\,$for some $m \in {\mathbb N}\,{\big )}$.
\noindent\enskip If the following conditions hold$:$ \begin{itemize} \item[$1)$] there is a right neighbourhood of zero in which the following inequalities hold true: $\, f(x)<0, \, f'(x)<0, \, \ldots, f^{(m-1)}(x)<0,$ \item[and] \item[$2)$] $f(c_-)>0, f'(c_-)>0, \ldots, f^{(m-1)}(c_-)>0,$ \end{itemize} then there exists exactly one zero $ x_0 \in (0, c) $ of the function $ f $.
\end{theorem} {\bf Proof.} As $f^{(m)}(x) \!>\! 0$ for $x \!\in\! (0,c)$, it follows that $f^{(m-1)}(x)$ is monotonically increasing function for $x \!\in\! (0,c)$. Based on conditions $1)$ and $2)$, we conclude that there exists exactly one zero $ x_{m-1} \!\in\! (0, c)$ of the function $f^{(m-1)}(x)$. Next, we can conclude that function $f^{(m-2)}(x)$ is monotonically decreasing for $x \!\in\! (0,x_{m-1})$ and monotonically increasing for $x \!\in\! (x_{m-1},c)$. It is clear that function $ f^{(m-2)}(x) $ has exactly one minimum in the interval $(0, c)$ at point $x_{m-1}$ and $f^{(m-2)}(x_{m-1}) \!<\! 0$. On the basis of conditions 2), it follows that function $f^{(m-2)}(x)$ has exactly one root $x_{m-2}$ on the interval $(0, c)$ and $x_{m-2} \!\in\! (x_{m-1}, c)$.
\noindent By repeating the described procedure, we get the assertion given in the theorem.
$\Box$
Let us consider the family of functions \begin{equation}\displaystyle f_a(x) = a \ln \frac{\sin x}{x} - 2 \ln \cos\frac{x}{2}, \end{equation} for $x \!\in\! (0, \pi)$ and parameter $a \in (1, +\infty)$.
Since $\ln \mbox{\small $\displaystyle\frac{\sin x}{x}$}<0$ for $x \!\in\! (0,\pi)$, the following equivalence is true: \begin{equation}\label{f_a} a_1 < a \Longleftrightarrow f_{a}(x) < f_{a_1}(x), \end{equation} for $a, a_1 > 1 $ and $x \!\in\! (0,\pi)$.
\break
Thus: \begin{equation} \frac{3}{2} < a \Longleftrightarrow f_{a}(x) < f_{\frac{3}{2}}(x), \quad \quad \mbox{for $x \!\in\! (0,\pi)$}. \end{equation}
Based on the power series expansions (\ref{Series_ln_sin_x_over_x}) and (\ref{Series_ln_cos_x}), we have: \begin{equation} \label{fun_f} f_a(x) = \displaystyle\sum_{k=1}^{\infty}{E_k \, x^{2k}} \end{equation} for $a>1$ and $x \in (0, \pi)$, where \begin{equation}\label{power-series-f}
E_k = \displaystyle\frac{{\big (}(2-a)\,4^k-2\,{\big )}|\mbox{\bf B}_{2k}|}{2 k \cdot (2k)!} \quad (k \!\in\! {\mathbb N}). \end{equation}
For $a \!=\! \mbox{\footnotesize $\displaystyle\frac{3}{2}$}$, it is true that $E_{1}=0$ and $E_{k}>0$ for $k=2,3,\ldots$, since $(2-a)\,4^k-2=\mbox{\footnotesize $\displaystyle\frac{4^k}{2}$}-2$ vanishes for $k=1$ and is positive for $k\ge2$. Thus, from (\ref{fun_f}), we have $$ \displaystyle f_{\frac{3}{2}}(x)> 0 \quad \quad \mbox{for} \,\, x \in (0, \pi),$$ and consequently the following theorem holds:
\begin{theorem} The following inequalities hold true, for every $x \in (0, \pi)\!:$ $$ \cos^{2}{\!\displaystyle\frac{x}{2}} \leq \left(\displaystyle\frac{\sin{x}}{x}\right)^{\frac{3}{2}} \leq \displaystyle\frac{\sin{x}}{x}.
$$ \end{theorem}
As $0<\mbox{\small $\displaystyle\frac{\sin x}{x}$}<1$ for $x \!\in\! (0, \pi)$, the inequality $$ \left(\displaystyle\frac{\sin{x}}{x}\right)^{\frac{3}{2}} \leq \left(\displaystyle\frac{\sin{x}}{x}\right)^{a} $$ holds for every $x \!\in\! (0, \pi)$ and $a \!\in\! \left(1, \mbox{\footnotesize $\displaystyle\frac{3}{2}$}\right]$, so the previous theorem can be thought of as a new proof of Statement~\ref{Brankova-teorema}.
Consider now the family of functions $ \displaystyle f_a(x) = a \ln \frac{\sin x}{x} - 2 \ln \cos\frac{x}{2}, $ for $x \!\in\! (0, \pi)$ and parameter $a \!>\! \mbox{\footnotesize $\displaystyle\frac{3}{2}$}$.
It is easy to check that for the sequence $\{\alpha_k\}_{k \in {\mathbb N}}$, where \begin{equation} \alpha_k=2-\frac{2}{4^k}, \end{equation} the following equivalences are true: \begin{equation} \begin{array}{l} a = \alpha_k \;\Longleftrightarrow\; E_k = 0, \\[0.75 em] a \in \left(\alpha_k, \alpha_{k+1} \right) \;\Longleftrightarrow\; \left(\forall i \!\in\! \{1,2,\ldots,k \} \right) E_i < 0 \;\wedge\; \left(\forall i \! > \! k \right) E_i > 0. \end{array} \end{equation}
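Indeed, since $|\mbox{\bf B}_{2k}|>0$, the sign of the coefficient $E_k$ in (\ref{power-series-f}) coincides with the sign of $(2-a)\,4^k-2$, and
\begin{equation*}
(2-a)\,4^k-2=0
\;\Longleftrightarrow\;
a=2-\frac{2}{4^k}=\alpha_k,
\qquad
(2-a)\,4^k-2<0
\;\Longleftrightarrow\;
a>\alpha_k .
\end{equation*}
As the sequence $\{\alpha_k\}_{k \in {\mathbb N}}$ is strictly increasing, the stated equivalences follow.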
Let us now consider the function $\mathfrak{m} \!:\! \left[\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right) \longrightarrow {\mathbb N}_0$ defined by: \begin{equation}\label{definition_m(a)} \mathfrak{m}(a)=k \;\;\;\mbox{if and only if }\;\; a \in \left(\alpha_k, \alpha_{k+1} \right]. \end{equation}
It is not difficult to check that $\lim\limits_{a \rightarrow 2_{-}}\!\!\mathfrak{m}(a) = +\infty$, while for a fixed $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$ the number of negative elements of the sequence $\{ E_k \}_{k \in {\mathbb N}}$ is $\mathfrak{m}(a)$ and their indices are in the set $\{ 1, \ldots , \mathfrak{m}(a)\}$. For this reason, we distinguish two cases $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$ or $\,a\geq 2$.
As for the parameter $a = 2$ and $x \in (0, \pi)$ we have: $$ \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!2} \leq \cos^{2}{\!\displaystyle\frac{x}{2}} \, \Longleftrightarrow \, \displaystyle \sin^{2}\dfrac{x}{2} \leq \left(\dfrac{x}{2}\right)^{2},$$ while for $a>2$ and $x \in (0, \pi)$ we have: $$ \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!a} \leq \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!2}.$$
Since $\sin t\leq t$ for every $t\geq0$, the inequality $\sin^{2}\mbox{\small $\displaystyle\frac{x}{2}$}\leq\left(\mbox{\small $\displaystyle\frac{x}{2}$}\right)^{2}$, and therefore the first display, holds for all $x\in(0,\pi)$, while the second display holds because $0<\mbox{\small $\displaystyle\frac{\sin x}{x}$}<1$ on $(0,\pi)$. Hence, we have proved the following theorem:
\begin{theorem} For every $a \geq 2$ and every $x \in (0, \pi)$ the following inequality holds true$:$ \begin{equation} \label{nejednakost_Corollary11} \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!a} \leq \cos^{2}{\!\displaystyle\frac{x}{2}}. \end{equation} \end{theorem}
Consider now the case when the parameter $a \in \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right) $. As noted above, for any fixed $a \in \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right) $ there is a finite number of negative coefficients in the power series expansion~(\ref{power-series-f}), so it is possible to apply Theorem~\ref{Natural_Extension_Theorem}.
According to Theorem~\ref{Natural_Extension_Theorem}, the following inequalities hold:
\vspace*{-2.5 mm}
\begin{equation}\label{natural-extension-estimation} \begin{array}{l}
\displaystyle\sum_{\,\,\,k=\mathfrak{m}(a)+1}^{n}{\!\!\!E_k}{x^{2k}} + \!\! \displaystyle\sum_{i=1}^{\mathfrak{m}(a)}{\!E_{i}}{x^{2i}} \, < \\[1.75 em]
< \, f_a(x) \, < \\[0.50 em]
< \!\left({f_a(c\mbox{\footnotesize $-$}) \,-\!\!\!\!\!\! \displaystyle\sum_{\,\,\,k=\mathfrak{m}(a)+1}^{n-1}{\!\!\!E_k}{c^{2k}} \,-\!\! \displaystyle\sum_{i=1}^{\mathfrak{m}(a)}{\!E_{i}}{c^{2i}}} \right)\! \displaystyle\frac{{x}^{2n}}{{c}^{2n}} \,\,+ \!\!\!\! \displaystyle\sum_{\,\,\,k=\mathfrak{m}(a)+1}^{n-1}{\!\!\!E_k}{x^{2k}} + \!\! \displaystyle\sum_{i=1}^{\mathfrak{m}(a)}{\!E_{i}}{x^{2i}},
\end{array} \end{equation}
for every $x \!\in\! \left(0,c\right)$, $c \!\in\! \left(0,\pi\right)$, $n \!>\! \mathfrak{m}(a)+1$ and $a \in \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$.
For every $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$, the function $f_a(x)$, $x \in (0, \pi)$, satisfies the conditions $1)$ and $2)$ of Theorem~\ref{exactly_one_zero},
as we prove in the following Lemma:
\begin{lemma}\label{derivations_f} Consider the family of functions $ f_a(x) = a \ln \displaystyle\frac{\sin x}{x} - 2 \ln \cos \displaystyle\frac{x}{2}
$ for $x \!\in\! (0,\pi)$ and parameter $a \!\in\! \left(\mbox{\small $\displaystyle\frac{3}{2}$},2\right)$. Let $m=\mathfrak{m}(a)$, where $\mathfrak{m}(a)$ is defined as in~$(\ref{definition_m(a)})$.
Then, it is true that $~\mbox{\small $\displaystyle\frac{d^k}{dx^k}$}f_{a}(x) > 0$ for $k = m, m+1, \ldots\,$
and $x \!\in\! (0,\pi)$, and the following assertions hold true$:$
\noindent $1)$ There is a right neighbourhood of zero in which the following inequalities hold true$:$
$~f_{a}(x)<0, \,\, \mbox{\small $\displaystyle\frac{d}{dx}$}f_{a}(x)<0, \, \ldots, \mbox{\small $\displaystyle\frac{d^{m-1}}{dx^{m-1}}$}f_{a}(x)<0$,
\noindent $2)$ $~f_{a}(\pi_-)>0, \,\, \mbox{\small $\displaystyle\frac{d}{dx}$}f_{a}(\pi_-)>0, \ldots, \mbox{\small $\displaystyle\frac{d^{m-1}}{dx^{m-1}}$}f_{a}(\pi_-)>0$. \end{lemma}
\noindent {\bf Proof.}
Let us recall that for any fixed $a \in \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right) $ there is a finite number of negative coefficients in the power series expansion~(\ref{power-series-f}). Also, we have: $$ \left(\!\mbox{\small $\displaystyle\frac{d}{dx}$}f_{a}\!\right)\!(x) = a \!\left(\!\cot x \!-\! \frac{1}{x}\!\right)\! + \tan\frac{x}{2}. $$
For the derivatives of the function $f_{a}(x)$ in the left neighborhood of $\pi$, it is enough to observe the following:
$$ \left(\!\mbox{\small $\displaystyle\frac{d}{dx}$}f_{a}\!\right)\!(\pi-x) = a \!\left(\!-\cot x \!-\! \frac{1}{\pi-x}\right) + \cot\frac{x}{2} = \frac{2-a}{x}-\frac{a}{\pi}+\left(\!a\!\left(\!\frac{1}{3}\!-\!\frac{1}{\pi^2}\!\right)\!-\!\frac{1}{6}\!\right)\!x+\ldots\,. $$ From this, the conclusions of the lemma can be directly derived.
$\Box$
Thus, for every $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$, the corresponding function $ f_a(x) = a \ln \displaystyle\frac{\sin x}{x} - 2 \ln \cos \displaystyle\frac{x}{2} $ has exactly one zero on the interval $(0, \pi)$. Let us denote it by $x_a$.
The following Theorem is a direct consequence of these considerations.
\begin{theorem} \label{Corollary 11} For every $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$, and every $x \in \left(0, x_a\right],$ where $0< x_a < \pi$, the following inequality holds true: \begin{equation} \label{nejednakost_Corollary11} \left(\displaystyle\frac{\sin{x}}{x}\right)^{\!a} \leq\, \cos^{2}{\!\displaystyle\frac{x}{2}}. \end{equation} \end{theorem}
For the selected discrete values of $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$}, 2\right) $, the zeros $x_a$ of the corresponding functions $f_ {a} (x)$ are shown in Table 1. Although the values $x_a$ can be obtained by any numerical method, the following remark can also be used to locate them.
\begin{remark} For a fixed $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$, select $n>\mathfrak{m}(a)+1$ and consider inequalities~$(\ref{natural-extension-estimation})$. Denote the corresponding polynomials on the left-hand side and the right-hand side~of $(\ref{natural-extension-estimation})$ by $P_L(x)$ and $P_R(x)$, respectively. These polynomials are of negative sign in a right neighborhood of zero $($see$\;${\rm \cite{Malesevic2016}},$\;$Theorem$\;${\rm 2.5.}$)$, and they have positive leading coefficients.$\,$ Then, the root $x_a$ of the equation $f_{a}(x) \!=\! 0$ is always localized between the smallest positive root of the equation $P_L (x) \!=\! 0$ and the smallest positive root of the~equation $P_R (x) \!=\! 0$. \end{remark}
\subsection{Inequalities with the polynomial exponents}
In this subsection we propose and prove a new double-sided inequality involving the sinc function with polynomial exponents.
To be more specific, we find two polynomials of the second degree which, when placed in the exponent of the sinc function, give an upper and a lower bound for $ {\cos}^2 \frac{x}{2}$.
\begin{theorem} For every $x\in (0,3.1)$ the following double-sided inequality holds$:$ \label{Teorema_PolyExp} \begin{equation} \label{nejednakost_PolyExp} {\left( {\frac{{\sin x}}{x}} \right)^{\!{p_1}\left( x \right)}} \!< \; {\cos}^2 \frac{x}{2} \;<\, {\left( {\frac{{\sin x}}{x}} \right)^{\!{p_2}\left( x \right)}}, \end{equation} where ${p_1}\left( x \right) = \mbox{\footnotesize $\displaystyle\frac{3}{2}$} + \mbox{\footnotesize $\displaystyle\frac{{{x^2}}}{{2{\pi^2}}}$}$ and ${p_2}\left( x \right) = \mbox{\footnotesize $\displaystyle\frac{3}{2}$} + \mbox{\footnotesize $\displaystyle\frac{{{x^2}}}{{80}}$}$. \end{theorem}
{\noindent \bf Proof.} Consider the equivalent form of the inequality (\ref{nejednakost_PolyExp}): \[{p_1}\left( x \right)\ln \frac{{\sin x}}{x} < 2\ln \cos \frac{x}{2} < {p_2}\left( x \right)\ln \frac{{\sin x}}{x}.\]
Now, let us introduce the following notation:
\[{G_i}\left( x \right) = {p_i}\left( x \right)\ln \frac{{\sin x}}{x} - 2\ln \cos \frac{x}{2},\] for $i=1,2$.
Based on the Theorem WD, from (\ref{Series_ln_sin_x_over_x}) we obtain:
\begin{equation}
\label{lnsinxx}
\begin{array}{l}
-\displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{2^{2k-1}|B_{2k}|}{k(2k)!}$}x^{2k}} + {\Big (}\mbox{\small $\displaystyle\frac{1}{c}$}{\Big )}^{\!\!2m}\!\! \left( \ln \mbox{\small $\displaystyle\frac{\sin c}{c}$} + \displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{2^{2k-1}|B_{2k}|}{k(2k)!}$} c^{2k}}\right) \!x^{2m}< \\[3.0 ex]
\;\;\;\;\, \,<\, \ln \mbox{\small $\displaystyle\frac{\sin x}{x}$} \, < -\displaystyle\sum\limits_{k=1}^{n}{\mbox{\small $\displaystyle\frac{2^{2k-1}|B_{2k}|}{k(2k)!}$}x^{2k}},
\end{array}
\end{equation}
for $x \!\in\! \left(0,c\right)$, where $0<c<\pi$, $n, m \!\in\! {\mathbb N}$ and $m,n \ge 2$.
Based on the Theorem WD, from (\ref{Series_ln_cos_x}) we obtain:
\begin{equation}
\begin{array}{l}
-\displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{2^{2k-1}(2^{2k}\!-\!1)|B_{2k}|}{k(2k)!}$}x^{2k}} + {\Big (}\mbox{\small $\displaystyle\frac{1}{c}$}{\Big )}^{\!2m}\!\! \left(\! \ln \cos c + \displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{2^{2k-1}(2^{2k}\!-\!1)|B_{2k}|}{k(2k)!}$} c^{2k}}\!\right) \!x^{2m}<\\[3.0 ex]
\,\,<\,\ln \cos x \, < -\displaystyle\sum\limits_{k=1}^{n}{\mbox{\small $\displaystyle\frac{2^{2k-1}(2^{2k}\!-\!1)|B_{2k}|}{k(2k)!}$}x^{2k}},
\end{array}
\end{equation}
for $x \!\in\! \left(0,c\right)$, where $0 < c < \mbox{\small $\displaystyle\frac{\pi}{2}$}$, $n, m \!\in\! {\mathbb N}$, $m,n \ge 2$; i.e.,
\begin{equation}
\label{lncosx2}
\begin{array}{l}
\displaystyle\sum\limits_{k=1}^{n}{\mbox{\small $\displaystyle\frac{(2^{2k}\!-\!1)|B_{2k}|}{2k(2k)!}$}x^{2k}} < -\ln \cos \frac{x}{2} \, < \\[3.0 ex]
< \displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{(2^{2k}\!-\!1)|B_{2k}|}{2k(2k)!}$}x^{2k}} - {\Big (}\mbox{\small $\displaystyle\frac{1}{c}$}{\Big )}^{\!2m}\!\! \left(\! \ln \cos \frac{c}{2} + \displaystyle\sum\limits_{k=1}^{m-1}{\mbox{\small $\displaystyle\frac{(2^{2k}\!-\!1)|B_{2k}|}{2k(2k)!}$} c^{2k}}\!\right) \!x^{2m},
\end{array}
\end{equation}
for $x \!\in\! \left(0,c\right)$ and $0 \!<\! c \!<\! \pi$, $n, m \!\in\! {\mathbb N}$, $m,n \ge 2$.
Now, let us introduce the notation: \[\begin{array}{l}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!{H_1}\left( {x,{m_1},{n_1},c_1} \right) = - {p_1}\left( x \right)\mathop \sum \limits_{k = 1}^{{m_1} - 1} \frac{{{2^{2k - 1}}\left| {{B_{2k}}} \right|}}{{k(2k)!}}{x^{2k}}-\\[3.0 ex]
- 2\left( { - \mathop \sum \limits_{k = 1}^{{m_1} - 1} \frac{{\left( {{2^{2k}} - 1} \right)\left| {{B_{2k}}} \right|}}{{2k(2k)!}}{x^{2k}} + \frac{1}{{{c_1^{2m_1}}}}\left( {\ln \cos \frac{c_1}{2} + \mathop \sum \limits_{k = 1}^{{n_1} - 1} \frac{{\left( {{2^{2k}} - 1} \right)\left| {{B_{2k}}} \right|}}{{2k(2k)!}}{c_1^{2k}}} \right){x^{2m_1}}} \right), \end{array}\] for $m_1,n_1\in {\mathbb N}$, $m_1,n_1 \ge 2$, $c_1\in(0,\pi)$, and $x \in (0,c_1)$.
\[\begin{array}{c}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!{H_2}\left( {x,{m_2},{n_2},c_2} \right) = {p_2}\left( x \right)\left(\!\! { -\!\!\!\! \mathop \sum \limits_{k = 1}^{{m_2} - 1} \frac{{{2^{2k - 1}}\left| {{B_{2k}}} \right|}}{{k(2k)!}}{x^{2k}} + \frac{1}{{{c_2^{2m_2}}}}\left( {\ln \frac{{\sin c_2}}{c_2} + \!\!\!\! \mathop \sum \limits_{k = 1}^{{m_2} - 1} \frac{{{2^{2k - 1}}\left| {{B_{2k}}} \right|}}{{k(2k)!}}{c_2^{2k}}} \right){x^{2m_2}}} \right) +\\[3.0 ex]
+ 2\mathop \sum \limits_{k = 1}^{{n_2}} \frac{{\left( {{2^{2k}} - 1} \right)\left| {{B_{2k}}} \right|}}{{2k(2k)!}}{x^{2k}}, \end{array}\] for $m_2,n_2\in {\mathbb N}$, $m_2,n_2 \ge 2$, $c_2\in(0,\pi)$, and $x \in (0,c_2)$.\\
\break
Based on the inequalities (\ref{lnsinxx}) and (\ref{lncosx2}) the following holds true: \[{G_1}\left( x \right) < {H_1}\left( x, m_1, n_1, c_1 \right),\] \[{G_2}\left( x \right) > {H_2}\left( x, m_2, n_2, c_2 \right),\]
\noindent for $m_1,n_1,m_2,n_2 \in {\mathbb N}$ and $c_1,c_2 \in (0,\pi)$.
For $c_1=c_2=3.1$, $m_1=25$ and $n_1=10$ and for $m_2=13$ and $n_2=27$, it is easy to prove that ${H_1}\left( x, m_1, n_1, c_1 \right)<0$ and ${H_2}\left( x, m_2, n_2, c_2 \right)>0$, for every $x \in (0,c_1)$.
Hence, we conclude that $G_1(x)<0$ and $G_2(x)>0$ for every $x \in (0,3.1)$, and the double-sided inequality (\ref{nejednakost_PolyExp}) holds.$
\Box$ \begin{remark} Note that this method can be used to prove that the inequality $(\ref{nejednakost_PolyExp})$ of Theorem \mbox{\rm \ref{Teorema_PolyExp}} holds on any interval $(0,c)$ where $c \in (0, \pi)$, but the degrees of the polynomials $H_1$ and $H_2$ get larger as $c$ approaches $\pi$. \end{remark}
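For the reader's convenience, inequality (\ref{nejednakost_PolyExp}) can also be checked numerically on a grid. The following short sketch (plain Python, using only the standard \texttt{math} module) is an illustration only and not part of the proof; the grid starts at $x=0.1$ because near the origin the three quantities coincide to within floating-point precision.
\begin{verbatim}
import math

for i in range(100, 3100):               # x = 0.100, 0.101, ..., 3.099
    x = i / 1000.0
    s = math.sin(x) / x
    lower = s ** (1.5 + x * x / (2 * math.pi ** 2))   # (sin x / x)^{p_1(x)}
    middle = math.cos(x / 2) ** 2                     # cos^2(x/2)
    upper = s ** (1.5 + x * x / 80)                   # (sin x / x)^{p_2(x)}
    assert lower < middle < upper, x
print("inequality verified on the grid")
\end{verbatim}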
\subsection{Constant vs. polynomial exponents}
Let us now compare the inequality of Theorem \ref{Corollary 11} with the inequality (\ref{nejednakost_PolyExp}) of Theorem \ref{Teorema_PolyExp}; the former has a constant exponent and the latter polynomial exponents.
A question of establishing a relation between these functions, with different types of exponents, comes up naturally. The following theorem addresses this question. \begin{theorem} \label{Teorema_const_vs_poly} For every $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$ and every $x\in \left(0,m_a\right)$, where $m_a=\sqrt{2\pi^2 \left( {a - \mbox{\footnotesize $\displaystyle\frac{3}{2}$}} \right)}$, the following double-sided inequality holds$:$ \begin{equation} \label{nejednakost_const_vs_poly} {\left( {\frac{{\sin x}}{x}} \right)^{\!a}} < {\left( {\frac{{\sin x}}{x}} \right)^{\!{\frac{3}{2} + \frac{{{x^2}}}{{2{\pi ^2}}}} }} < {\cos ^2}\frac{x}{2}. \end{equation} \end{theorem}
\noindent{\bf Proof.} Let $a = \mbox{\footnotesize $\displaystyle\frac{3}{2}$} + \varepsilon$, $\varepsilon \!\in\! \left(0,\mbox{\footnotesize $\displaystyle\frac{1}{2}$}\right)$ and $x>0$. Then: \[\begin{array}{c} \left( {\frac{3}{2} + \frac{{{x^2}}}{{2{\pi ^2}}}} \right)\ln \frac{{\sin x}}{x} - a\ln \frac{{\sin x}}{x} = \left( {\frac{3}{2} + \frac{{{x^2}}}{{2{\pi ^2}}}} \right)\ln \frac{{\sin x}}{x} - \left( {\frac{3}{2} + \varepsilon } \right)\ln \frac{{\sin x}}{x} = \\ \\
= \left( {\frac{{{x^2}}}{{2{\pi ^2}}} - \varepsilon } \right)\ln \frac{{\sin x}}{x} = \frac{1}{{2{\pi ^2}}}\left( {x - \sqrt {2{\pi ^2}\varepsilon } } \right)\left( {x + \sqrt {2{\pi ^2}\varepsilon } } \right)\ln \frac{{\sin x}}{x}. \end{array}\]
Now we have: \[\!\!\!\!\!\!\!\! x \!\in\! \left( {0,\sqrt {2{\pi ^2}\varepsilon } } \right) \Longleftrightarrow \left({\mbox{\footnotesize $\displaystyle\frac{3}{2}$} + \frac{x^2}{2\pi^2}} \right)\ln \frac{{\sin x}}{x} > \left( {\mbox{\footnotesize $\displaystyle\frac{3}{2}$} + \varepsilon } \right)\ln \frac{{\sin x}}{x} \Longleftrightarrow {\left( {\frac{{\sin x}}{x}} \right)^{\!\frac{3}{2} + \frac{{{x^2}}}{{2{\pi ^2}}}}} \!> {\left( {\frac{{\sin x}}{x}} \right)^{\!\frac{3}{2} + \varepsilon }}.\]
Hence, applying Theorem \ref{Teorema_PolyExp}, the double-sided inequality (\ref{nejednakost_const_vs_poly}) holds for every \break ${a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)}$ and every $x \in (0,m_a)$. $
\Box$
\break
Now, in Table 1, we show the values
$x_a$ and $m_a$ for some specified $a \!\in\! \left( \mbox{\footnotesize $\displaystyle\frac{3}{2}$},2 \right)\,$:
{\small $$ \!\!\!
\begin{array}{|c||c|c|c|c|c|c|c|c|c|c|c|} \hline a \!\!&\!\! {1.501} \!\!&\!\! {1.502} \!\!&\!\! {1.503} \!\!&\!\! {1.504} \!\!&\!\! {1.505} \!\!&\!\! {1.506} \!\!&\!\! {1.507} \!\!&\!\! {1.508} \!\!&\!\! {1.509} \!\!&\!\! {1.510} \\ \hline x_a \!\!&\!\! {0.282 ...} \!\!&\!\! {0.398 ...} \!\!&\!\! {0.487 ...} \!\!&\!\! {0.561 ...} \!\!&\!\! {0.626 ...} \!\!&\!\! {0.685 ...} \!\!&\!\! {0.738 ...} \!\!&\!\! {0.788 ...} \!\!&\!\! {0.834 ...} \!\!&\!\! {0.878 ...} \\ \hline m_a \!\!&\!\! {0.140 ...} \!\!&\!\! {0.198 ...} \!\!&\!\! {0.243 ...} \!\!&\!\! {0.280 ...} \!\!&\!\! {0.314 ...} \!\!&\!\! {0.344 ...} \!\!&\!\! {0.371 ...} \!\!&\!\! {0.397 ...} \!\!&\!\! {0.421 ...} \!\!&\!\! {0.444 ...} \\ \hline \end{array} $$}
\vspace*{-6.5 mm}
{\small $$ \!\!\!
\begin{array}{|c||c|c|c|c|c|c|c|c|c|c|c|} \hline a \!\!&\!\! {1.52} \!\!&\!\! {1.53} \!\!&\!\! {1.54} \!\!&\!\! {1.55} \!\!&\!\! {1.56} \!\!&\!\! {1.57} \!\!&\!\! {1.58} \!\!&\!\! {1.59} \!\!&\!\! {1.60} \!\!&\!\! {1.65} \\ \hline x_a \!\!&\!\! {1.220 ...} \!\!&\!\! {1.468 ...} \!\!&\!\! {1.666 ...} \!\!&\!\! {1.831 ...} \!\!&\!\! {1.973 ...} \!\!&\!\! {2.096 ...} \!\!&\!\! {2.205 ...} \!\!&\!\! {2.302 ...} \!\!&\!\! {2.302 ...} \!\!&\!\! {2.302 ...} \\ \hline m_a \!\!&\!\! {0.628 ...} \!\!&\!\! {0.769 ...} \!\!&\!\! {0.888 ...} \!\!&\!\! {0.993 ...} \!\!&\!\! {1.088 ...} \!\!&\!\! {1.175 ...} \!\!&\!\! {1.256 ...} \!\!&\!\! {1.256 ...} \!\!&\!\! {1.256 ...} \!\!&\!\! {1.256 ...} \\ \hline \end{array} $$}
\vspace*{-6.5 mm}
{\small $$ \!\!\!
\begin{array}{|c||c|c|c|c|c|c|c|c|c|c|c|} \hline a \!\!&\!\! {1.70} \!\!&\!\! {1.75} \!\!&\!\! {1.80} \!\!&\!\! {1.85} \!\!&\!\! {1.90} \!\!&\!\! {1.92} \!\!&\!\! {1.94} \!\!&\!\! {1.96} \!\!&\!\! {1.98} \!\!&\!\! {1.9999} \\ \hline x_a \!\!&\!\! {2.911 ...} \!\!&\!\! {3.034 ...} \!\!&\!\! {3.103 ...} \!\!&\!\! {3.133 ...} \!\!&\!\! {3.141 ...} \!\!&\!\! {3.141 ...} \!\!&\!\! {3.141 ...} \!\!&\!\! {3.141 ...} \!\!&\!\! {3.141 ...} \!\!&\!\! {3.141 ...} \\ \hline m_a \!\!&\!\! {1.986 ...} \!\!&\!\! {2.221 ...} \!\!&\!\! {2.433 ...} \!\!&\!\! {2.628 ...} \!\!&\!\! {2.809 ...} \!\!&\!\! {2.879 ...} \!\!&\!\! {2.947 ...} \!\!&\!\! {3.013 ...} \!\!&\!\! {3.087 ...} \!\!&\!\! {3.141 ...} \\ \hline \end{array} $$}
\vspace*{-5.0 mm}
$$ \mbox{\bf Table 1} $$
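The entries of Table 1 can be recomputed independently. The sketch below (plain Python; an illustration only, not part of the paper's method) locates $x_a$ by bisection on $f_a$ and evaluates the closed formula $m_a=\sqrt{2\pi^{2}\left(a-\mbox{\footnotesize $\frac{3}{2}$}\right)}$ of Theorem \ref{Teorema_const_vs_poly} for a few moderate values of $a$.
\begin{verbatim}
import math

def f(a, x):
    return a * math.log(math.sin(x) / x) - 2 * math.log(math.cos(x / 2))

def x_a(a, lo=0.01, hi=3.14159, steps=80):
    # bisection; f(a, lo) < 0 < f(a, hi) for the moderate values of a used here
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(a, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for a in (1.52, 1.55, 1.70, 1.75):
    m_a = math.sqrt(2 * math.pi ** 2 * (a - 1.5))
    print(a, x_a(a), m_a)   # compare with the x_a and m_a columns of Table 1
\end{verbatim}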
\begin{remark} Note that Theorem $\ref{Teorema_const_vs_poly}$ represents another proof of the following assertion from $\mbox{\rm \cite{Lutovac2017}}\!:$ $$ {\big (}\,\forall a \!\in\! \left(3/2,2\right)\!{\big )} {\big (}\,\exists \delta \!>\! 0\,{\big )} {\big (}\,\forall x \!\in\! (0, \delta){\big )} {\left( {\frac{{\sin x}}{x}} \right)^{\!\!a}} \!\!< {\cos ^2}\frac{x}{2}. $$ \end{remark}
\section{Conclusion}
In this paper, using the power series expansions and the application of the {\sc Wu}-{\sc Debnath} theorem, we proved that the inequality (\ref{Z-H-Jang}) holds for $a \!=\! \mbox{\footnotesize $\displaystyle\frac{3}{2}$}$. At the same time, this proof represents a new short proof of Statement~\ref{Brankova-teorema}.
We analyzed the cases $a \!\in\! \left(\mbox{\footnotesize $\displaystyle\frac{3}{2}$},2\right)$ and $\displaystyle a \!\geq\! 2$ and we proved the corresponding inequalities. We introduced and proved a new double-sided inequality of a similar type involving polynomial exponents. Also, we established a relation between the cases of the constant and of the polynomial exponent.
\noindent \textbf{Acknowledgement.} The research of the first, second and third authors was supported in part by the Serbian Ministry of Education, Science and Technological Development, under Projects ON 174033, TR 32023, and ON 174032 \& III 44006, respectively.
\noindent \textbf{Competing Interests.} The authors would like to state that they do not have any competing interests in the subject of this research.
\noindent \textbf{Author's Contributions.} All the authors participated in every phase of the research conducted for this paper.
\break
\end{document}
Chain complexes
2 The definitions
3 Cubical complexes
4 Examples of chain complexes of cubical complexes
5 Cubical and simplicial domains
6 Examples of chain complexes
We now review what it takes to have an arbitrary ring of coefficients $R$.
Recall that
with $R={\bf Z}$, the chain groups are abelian groups,
with $R={\bf R}$ (or other fields), the chain groups are vector spaces, and now
with an arbitrary $R$, the chain groups are modules.
Informally,
modules are vector spaces over rings.
The following definitions and results can be found in the standard literature such as Hungerford, Algebra (Chapter IV).
Definition. Given a commutative ring $R$ with the multiplicative identity $1_R$, a (commutative) $R$-module $M$ consists of an abelian group $(M, +)$ and a scalar product operation $R \times M \to M$ such that for all $r,s \in R$ and $x, y \in M$, we have:
$r(x+y) = rx + ry$,
$(r+s)x = rx + sx$,
$(rs)x = r(sx)$,
$1_Rx = x$. $\\$
The scalar multiplication can be written on the left or right.
If $R$ is a field, an $R$-module is a vector space.
The rest of the definitions are virtually identical to the ones for vector spaces.
A subgroup $N$ of $M$ is a submodule if it is closed under scalar multiplication: for any $n \in N$ and any $r\in R$, we have $rn \in N$.
A group homomorphism $f: M \to N$ is a (module) homomorphism if it preserves the scalar multiplication: for any $m,n \in M$ and any $r, s \in R$, we have $f(rm + sn) = rf(m) + sf(n)$. When $R$ is a division ring, $f$ is called a linear operator.
A bijective module homomorphism is an (module) isomorphism, and the two modules are called isomorphic.
We will need the following classification theorem as a reference.
Theorem (Fundamental Theorem of Finitely Generated Abelian Groups). Every finitely generated abelian group $G$ is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups: $${\bf Z}^n \oplus {\bf Z}_{q_1} \oplus ... \oplus {\bf Z}_{q_s},$$ where $n \ge 0$ is the rank of $G$ and the numbers $q_1, ...,q_s$ are powers of (not necessarily distinct) prime numbers. Here ${\bf Z}^n$ is the free part and the rest is the torsion part of $G$.
The kernel of a module homomorphism $f : M \to N$ is the submodule of $M$ consisting of all elements that are taken to zero by $f$. The isomorphism theorems of group theory are still valid.
A module $M$ is called finitely generated if there exist finitely many elements $v_1,v_2, ...,v_n \in M$ such that every element of $M$ is a linear combination of these elements (with coefficients in $R$).
A module $M$ is called free if it has a basis. This condition is equivalent to: $M$ is isomorphic to a direct sum of copies of the ring $R$. Over a field, every subspace $L$ of a vector space $M$ is a summand; i.e., $$M=L\oplus N,$$ for some other subspace $N$ of $M$. Over a general ring this may fail: $2{\bf Z}$ is a submodule of ${\bf Z}$ but not a summand, since the quotient ${\bf Z}/2{\bf Z}$ has torsion.
Of course, ${\bf Z}^n$ is free and finitely generated. This module is our primary interest because that's what a chain group over the integers has been every time. It behaves very similarly to ${\bf R}^n$ and the main differences lie in these two related areas. First, the quotients may have torsion, such as in ${\bf Z}/ 2{\bf Z} \cong {\bf Z}_2$. Second, some operators invertible over ${\bf R}$ may be singular over ${\bf Z}$. Take $f(x)=2x$ as an example.
We will refer to finitely generated free modules as simply modules.
The definitions
We have seen chain complexes appearing from the adjacency relation (i.e., the topology) of the cubical domain and simplicial complexes.
Definition. Suppose we have a sequence of modules and homomorphisms $$\partial _k : C_{k} \to C_{k-1},\ k=0,1,2, ...,$$ with zero groups (and zero homomorphisms) at the ends: $$C_{-1}=C_{N+1}=C_{N+2}=...=0,$$ for some $N$. The module $C_k$ is called the $k$-th chain group and the homomorphism $\partial_k$ the $k$-th boundary operator. If written consecutively, they form a sequence: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{cccccccccccccccccccc} ...&\ra{}&C_{k+1}&\ra{\partial_{k+1}}&C_{k}&\ra{\partial_k}&...&\ra{\partial_1}&C_0&\ra{\partial_0=0}&0. \end{array}$$ This sequence of groups and homomorphisms is called a chain complex, denoted by $$\{C_k\}:=\{C_k,\partial\},$$ provided the composition of any two consecutive boundary operators is zero: $$\partial_k\partial_{k+1}=0.$$
Example. As an illustration of the last condition, consider this example of two operators below. Neither operator is $0$, but their composition is:
Here,
$A$ is the projection on the $xy$-plane, which isn't $0$;
$B$ is the projection on the $z$-axis, which isn't $0$;
$BA$ is $0$. $\square$
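Here is a tiny computational version of this example (an illustrative sketch; it assumes Python with numpy, which is not part of this text): the two projections are written as matrices and their composition is the zero matrix.

import numpy as np

A = np.diag([1, 1, 0])   # projection of R^3 onto the xy-plane; not the zero map
B = np.diag([0, 0, 1])   # projection of R^3 onto the z-axis; not the zero map
print(B @ A)             # prints the zero matrix: the composition BA is 0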
Example. A trivial chain complex: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{cccccccccccccccccccc} ...&\ra{0}&C_{k+1}&\ra{0}&C_{k}&\ra{0}&...&\ra{0}&C_0&\ra{0}&0; \end{array}$$ a "short" chain complex: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{cccccccccccccccccccc} ...&\ra{0}&C_{k+1}&\ra{}&C_{k}&\ra{0}&...&\ra{0}&C_0&\ra{0}&0; \end{array}$$ not a chain complex: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{cccccccccccccccccccc} ...&\ra{}&C_{k+1}&\ra{Id}&C_{k}&\ra{Id}&...&\ra{Id}&C_0&\ra{0}&0, \end{array}$$ unless $C_i=0$. $\square$
Definition. For a given $k=0,1,2,3,...$,
the $k$th cycle group of $\{C_k\}$ is the subgroup of $C_k$ defined to be
$$Z_k:=\ker \partial_k;$$
the $k$th boundary group of $\{C_k\}$ is the subgroup of $C_k$ defined to be
$$B_k:=\operatorname{Im} \partial_{k+1}.$$
Proposition. Every boundary is a cycle; i.e., for any chain complex $\{C_k\}$, $$B_k\subset Z_k,\ \forall k=0,1,2, ...$$
Proposition. For a finitely generated chain complex, the modules $C_k,Z_k,B_k$ are direct sums of finitely many copies of $R$: $$R \oplus R \oplus ... \oplus R .$$
Exercise. Prove the proposition.
Cubical complexes
We would like to be able to study functions defined on subsets of the Euclidean space.
Suppose the Euclidean space ${\bf R}^N$ is given and so is its cubical grid ${\bf Z}^N$. Suppose also that we have its decomposition ${\mathbb R}^N$, a list of all (closed) cubical cells in our grid.
Notation:
${\mathbb R}^N$ is the set of all cells (of all dimensions) in the grid,
$C_k(L)$ is the set of all $k$-chains in a given collection $L\subset {\mathbb R}^N$ of cells, and, in particular,
$C_k:=C_k({\mathbb R}^N)$ is the set of all $k$-chains in ${\mathbb R}^N$.
Proposition. The chain groups $C_k(L),\ k=0,1,2, ...$, are subgroups of $C_k=C_k({\mathbb R}^N)$.
Moreover, the chain complex comprised of these groups can be constructed in the identical way as the one for the whole ${\mathbb R}^N$ -- if we can make sense of how the boundary operator is defined: $$\partial ^L _{k}:C_{k}(L)\to C_{k-1}(L),\ \forall k=1,2,3, ....$$
One way to build them is to recognize that these are the same cells and they have the same boundaries, so we have: $$\partial ^L _{k} :=\partial _k\Big|_{C_{k}(L)}.$$ To make sure that these are well-defined, we need the boundaries of the chains in $C_k(L)$ to be chains in $C_{k-1}(L)$: $$\partial_k(C_k(L)) \subset C_{k-1}(L).$$
Definition. A cubical complex is a collection of cubical cells $K\subset {\mathbb R}^N$ that includes all faces of the cells it contains.
$N=2:$
Example. The cubical complex $K$ of the pixel at the origin is given by a list of cells of all dimensions:
0. $\{0 \} \times \{0 \},\ \{1 \} \times \{0 \},\ \{0 \} \times \{1 \}, \{1 \} \times \{1 \}$;
1. $\{0 \} \times [0,1],\ [0,1] \times \{0 \},\ [0,1] \times \{1 \},\ \{1 \} \times [0,1]$;
2. $[0,1] \times [0,1]$. $\\$
Now, their boundaries are defined within the complex:
0. $\partial \Big( \{(0, 0) \} \Big) = 0$, etc.,
1. $\partial \Big( \{0 \} \times [0,1] \Big) = \{(0,0) \} + \{(0,1) \}$, etc.,
2. $\partial \Big( [0,1] \times [0,1]\Big) = [0,1] \times \{0 \} + \{0 \} \times [0,1] + [0,1] \times \{1 \} + \{1 \} \times [0,1]$. $\square$
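The same bookkeeping can be done with matrices over ${\bf Z}_2$. The sketch below (an illustration assuming Python with numpy) encodes $\partial_1$ and $\partial_2$ for this complex, with the vertices ordered $(0,0),(1,0),(0,1),(1,1)$ and the edges ordered as in the list above, and confirms that $\partial_1\partial_2=0$.

import numpy as np

d1 = np.array([[1, 1, 0, 0],    # vertex (0,0) lies on edges 1 and 2
               [0, 1, 0, 1],    # vertex (1,0) lies on edges 2 and 4
               [1, 0, 1, 0],    # vertex (0,1) lies on edges 1 and 3
               [0, 0, 1, 1]])   # vertex (1,1) lies on edges 3 and 4
d2 = np.array([[1], [1], [1], [1]])   # all four edges appear in the boundary of the square
print((d1 @ d2) % 2)                  # the zero vector: the boundary of a boundary is zero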
Proposition. The boundary operator on a cubical complex is well-defined.
Proposition. The chain groups and the boundary operators of a cubical complex form a chain complex: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} ...& \ra{\partial_{k+2}} & C_{k+1}(K)& \ra{\partial_{k+1}} & C_{k}(K)& \ra{\partial_{k}} & C_{k-1}(K) & \ra{\partial_{k-1}} & ... & \ra{\partial_1} & C_1(K)& \ra{\partial_0} & 0. \end{array} $$
Definition. The dimension $\dim K$ of a cubical complex $K$ is the highest among the dimensions of its cells.
Just keep in mind that the shared vertices, edges, faces, etc. appear only once.
Exercise. Find the cubical complex representation of the regular, hollow, torus.
The rest of the definitions apply.
the $k$th cycle group of $K$ is the subgroup of $C_k=C_k(K)$ defined to be
$$Z_k=Z_k(K):=\ker \partial_k;$$
the $k$th boundary group of $K$ is the subgroup of $C_k(K)$ defined to be
$$B_k=B_k(K):=\operatorname{Im} \partial_{k+1}.$$
This is how cycles and boundaries are visualized.
Example. We choose to have only $0$- and $1$-cells. This is how we visualize boundaries and cycles, in the only two relevant dimensions:
$\square$
Exercise. Use the example above to show that the cycles and boundaries with respect to a cubical complex are different from those with respect to ${\mathbb R}^N$.
Exercise. For the complexes shown in this subsection, sketch a few examples of cycles and boundaries.
Examples of chain complexes of cubical complexes
Over $R={\bf Z}_2$, all these groups are just lists. Indeed, $C_k(K)$ is generated by the finite set of $k$-cells in $K$ and $Z_k(K),B_k(K)$ are its subgroups. Therefore, they all have only finitely many elements.
We will start with these three, progressing from the simplest to more complex, in order to reuse our computations:
Example (single vertex). Let $K:=\{V\}$. Then $$\begin{array}{llll} C_0&=< V > &=\{0,V\},\\ C_i& &=0, &\forall i>0. \end{array}$$ Then we have the whole chain complex here: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} 0 & \ra{\partial_2=0} & C_1=0& \ra{\partial_1=0} & C_0 =< V > & \ra{\partial_0=0} & 0. \end{array} $$ From this complex, working algebraically, we deduce: $$\begin{array}{lllllll} Z_0:=&\ker \partial _0 &= < V > &=\{0,V\},\\ B_0:=&\operatorname{Im} \partial _1 & &=0. \end{array}$$ $\square$
Example (two vertices). Let $K:=\{V, U\}$. We copy the last example and make small corrections: $$\begin{array}{llllll} C_0&=< V,U > &=\{0,V,U,V+U\},\\ C_i& &=0, &\forall i>0. \end{array}$$ Then we have the whole chain complex here: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} 0 & \ra{\partial_2=0} & C_1=0 & \ra{\partial_1=0} & C_0 =< V,U > & \ra{\partial_0=0} & 0. \end{array} $$ Now using only algebra, we deduce: $$\begin{array}{llllll} Z_0:=&\ker \partial _0 &= < V,U > &=\{0,V,U,V+U\},\\ B_0:=&\operatorname{Im} \partial _1 & &=0. \end{array}$$ $\square$
Example (two vertices and an edge). Let $K:=\{V, U, e\}$. We copy the last example and make some corrections: $$\begin{array}{llllll} C_0&=< V,U >&=\{0,V,U,V+U\},\\ C_1&=< e >&=\{0,e\},\\ C_i&=0, &\forall i>1. \end{array}$$ Then we have the whole chain complex shown: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} 0 & \ra{\partial_2=0} & C_1=< e >& \ra{\partial_1=?} & C_0 =< V,U > & \ra{\partial_0=0} & 0. \end{array} $$ First we compute the boundary operator: $$\partial _1 (e)=V+U.$$ Hence, $$\partial _1 =[1,1]^T.$$ Now the groups.
Dimension $0$ (no change in the first line): $$\begin{array}{llllll} Z_0:=&\ker \partial _0 &= < V,U >&=\{0,V,U,V+U\},\\ B_0:=&\operatorname{Im} \partial _1 &=< V+U >&=\{0,V+U\}. \end{array}$$ Notice the inexactness of our chain complex: $Z_0 \ne B_0$ (not every cycle is a boundary!).
Dimension $1$: $$\begin{array}{llllll} Z_1:=&\ker \partial _1 &= 0,\\ B_1:=&\operatorname{Im} \partial _2 &=0. \end{array}$$ $\square$
Two, slightly more complex, examples:
Example (hollow square). Let $K:=\{A, B,C,D, a,b,c,d\}$. Then we have (too many cells to list all elements): $$\begin{array}{llllll} C_0&=< A,B,C,D >,\\ C_1&=< a,b,c,d >,\\ C_i&=0, \quad\forall i>1. \end{array}$$ Note that we can think of these two lists of generators as ordered bases.
We have the chain complex below: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} 0 & \ra{\partial_2=0} & C_1=< a,b,c,d > & \ra{\partial_1=?} & C_0 =< A,B,C,D > & \ra{\partial_0=0} & 0. \end{array} $$ First we compute the boundary operator: $$\begin{array}{llllll} \partial _1 (a)&=A+B,\\ \partial _1 (b)&=B+C,\\ \partial _1 (c)&=C+D,\\ \partial _1 (d)&=D+A. \end{array}$$ Hence, $$\partial _1 =\left[ \begin{array}{ccccccccccccc} 1,&0,&0,&1\\ 1,&1,&0,&0\\ 0,&1,&1,&0\\ 0,&0,&1,&1 \end{array} \right].$$ The chain complex has been found, now the groups.
Dimension $0$: $$\begin{array}{llllll} Z_0:=&C_0 & &= < A,B,C,D >,\\ B_0:=&\operatorname{Im} \partial _1 &=< A+B,B+C,C+D,D+A > &=< A+B,B+C,C+D > \end{array}$$ (because $D+A$ is the sum of the rest of them).
Dimension $1$: $$\begin{array}{llllll} Z_1:=&\ker \partial _1 &= ?,\\ B_1:=&\operatorname{Im} \partial _2 &=0. \end{array}$$ To find the kernel, we need to find all $e=(x,y,z,u)\in C_1$ such that $\partial _1 e=0$. That's a (homogeneous) system of linear equations: $$\left\{ \begin{array}{ccccccccccccc} x & & &+u &=0,\\ x &+y & & &=0,\\ & y & +z& &=0,\\ & & z &+u &=0. \end{array} \right .$$ Solving from bottom to the top: $$z=-u \Longrightarrow y=u \Longrightarrow x=-u,$$ so $e=(-u,u,-u,u)^T$ for a scalar $u$. Then, as signs don't matter, $$Z_1=< e >=<(1,1,1,1)>=<a+b+c+d>.$$ $\square$
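For readers who prefer to let a computer do the linear algebra, here is a short sketch (assuming Python with sympy, not part of this text); since signs do not matter over ${\bf Z}_2$, the rational computation tells the same story.

from sympy import Matrix

d1 = Matrix([[1, 0, 0, 1],
             [1, 1, 0, 0],
             [0, 1, 1, 0],
             [0, 0, 1, 1]])   # rows A, B, C, D; columns a, b, c, d

print(d1.nullspace())         # one generator, (-1, 1, -1, 1)^T, i.e. a+b+c+d over Z_2
print(d1.rank())              # 3, so B_0 = Im d1 needs three generators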
Example (solid square). Let $K:=\{A, B,C,D, a,b,c,d, \tau \}$. We copy the last example and make some corrections. We have: $$\begin{array}{llllll} C_0&=< A,B,C,D >,\\ C_1&=< a,b,c,d >,\\ C_2&=<\tau>,\\ C_i&=0, \quad\forall i>2. \end{array}$$ We have the chain complex: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccc} 0 & \ra{\partial_3=0} & C_2=<\tau >&\ra{\partial_2=?} & C_1=< a,b,c,d > & \ra{\partial_1} & C_0 =< A,B,C,D >& \ra{\partial_0=0} & 0. \end{array} $$ First we compute the boundary operator: $$\partial _2 (\tau)=a+b+c+d,$$ therefore, $$\partial _2 =[1,1,1,1]^T.$$ As $\partial _1$ is already known, the chain complex has been found. Now the groups:
Dimension $0$. Since the changes in the chain complex don't affect these groups, we have answers ready: $$\begin{array}{llllll} Z_0:=&C_0 &= < A,B,C,D >,\\ B_0:=&\operatorname{Im} \partial _1 &=< A+B,B+C,C+D >.\\ \end{array}$$
Dimension $1$ (no change in the first line): $$\begin{array}{llllll} Z_1:=&\ker \partial _1 & &= <a+b+c+d>,\\ B_1:=&\operatorname{Im} \partial _2 &=< \partial _2(\tau) > &=<a+b+c+d>. \end{array}$$ $\square$
Exercise. Represent the sets below as realizations of cubical complexes. In order to demonstrate that you understand the algebra, for each of them:
(a) find the chain groups and find the boundary operator as a matrix;
(b) using only part (a) and algebra, find $Z_k,B_k$ for all $k$, including the generators.
Exercise. Compute the chain complex of a "train" with $n$ cars:
Cubical and simplicial domains
Can we have a discrete representation of a punctured plane, etc? Unfortunately, removing a vertex from ${\mathbb R}^N$ leaves us with a collection of cells that is not a cubical complex. Are there other collections of cells whose chains form chain complexes? Yes, the complements of the cubical complexes.
Definition. A cubical domain $K$ is a set of cells $K\subset {\mathbb R}^N$ that is a cubical complex or the complement of a cubical complex.
Considering that in the latter case some boundary faces are missing, is the boundary operator well defined?
Example. Below we start with the complex $K$ of the cube in the left-most column. We then remove three simple subcomplexes from this complex and visualize the boundary of the boundary of the $3$-cell $Q$, i.e., $\partial\partial Q$, in each case.
When a vertex is removed, there is no difference. When an edge -- with its end-points -- is removed, it does not appear in $\partial\partial Q$ while the rest of the edges are paired up and cancel. Finally, when a face -- with its boundary edges -- is removed, these four edges don't appear in $\partial\partial Q$, while the rest are still paired up. $\square$
The idea is explained algebraically as follows.
Suppose $K$ is a cubical complex in ${\mathbb R}^N$ and $U$ is its complement: $$U:={\mathbb R}^N \setminus K.$$ Then $$C_k({\mathbb R}^N)=C_k(K)\oplus C_k(U).$$ Therefore, the projection $$p_k:C_k({\mathbb R}^N) \to C_k(U)$$ is well-defined.
Definition. We define the boundary operator on the cubical domain $U$ $$\partial_k^U:C_k(U)\to C_{k-1}(U),\ k=0,1,...$$ by $$\partial_k^U :=p_k \partial_k.$$
Proposition. The boundary operator on a cubical domain is well-defined.
Proposition. The chain groups and the boundary operators of a cubical domain form a chain complex.
Exercise. Prove the proposition. Hint: $C(U) \cong C / C(K)$.
We have a similar development for the simplicial case.
Definition. A simplicial domain $K$ is a subset of a simplicial complex $M$ that is a simplicial complex or the complement of a simplicial complex.
Suppose $K$ is a subcomplex of a simplicial complex $M$ and $U$ is its complement: $$U:=M \setminus K.$$ Then $$C_k(M)=C_k(K)\oplus C_k(U).$$ Therefore, the projection $$p_k:C_k(M) \to C_k(U)$$ is well-defined.
Definition. We define the boundary operator on the simplicial domain $U$ $$\partial_k^U:C_k(U)\to C_{k-1}(U),\ k=0,1,...$$ by $$\partial_k^U :=p_k \partial_k.$$
Proposition. The boundary operator on a simplicial domain is well-defined.
Proposition. The chain groups and the boundary operators of a simplicial domain form a chain complex.
The rest of the definitions apply: cycles and boundaries.
Examples of chain complexes
We are already familiar with gradual and orderly building -- via skeleta -- of cubical and simplicial complexes:
The only difference is that now we have more flexibility about the cells. We will assume that they are balls of various dimensions:
Example (ladle). Let's consider a specific example. Suppose we want to build something that looks like a ladle, which is the same topologically as a Ping-Pong bat:
We want to build a simple topological space from the ground up, using nothing but cells attached to each other in the order of increasing dimensions. In our box, we have: the parts, the glue, the schematics, and a set of instructions of how to build it.
Here is the schematics:
Let's narrate the instructions. We start with the list $K$ of all cells arranged according to their dimensions:
dimension $0$: $A,B$;
dimension $1$: $a, b$;
dimension $2$: $\tau$. $\\$
These are the building blocks. At this point, they may be arranged in a number of ways.
Now, the two points $A,B$ are united into one set. That's the $0$-skeleton $K^{(0)}$ of $K$.
Next, we take this space $K^{(0)}$ and combine it, again as the disjoint union, with all $1$-cells in $K$. To put them together, we introduce an equivalence relation on this set. But, to keep this process orderly, we limit ourselves to an equivalence relation between the vertices (i.e., the elements of $K^{(0)}$) only, and the boundaries of the $1$-cells we are adding. In other words, we identify the endpoints of $a$ and $b$ to the points $A$ and $B$. This can happen in several ways. We make our choice by specifying the attaching map for each $1$-cell thought of as a segment $[0,1]$: $$f_a:\partial a\to K^{(0)},\ f_b:\partial b\to K^{(0)} ,$$ by specifying the values on the endpoints: $$f_a(0)=A,\ f_a(1)=B,\ f_b(0)=A,\ f_b(1)=A.$$ We use these maps following the attaching rule: $$x\sim y \Longleftrightarrow f_a(x)=y.$$ The result is the $1$-skeleton $K^{(1)}$.
The rule we have followed is to choose
an equivalence relation on the last skeleton combined with the boundaries of the new cells.
Next, we take this space $K^{(1)}$ and combine it, again as the disjoint union, with all $2$-cells in $K$. To put them together, we introduce an equivalence relation following the rule above. For this dimension, we identify the edge of $\tau$ with the $1$-cell $b$, point by point. This can happen in several ways and we make our choice by specifying the attaching map for the $2$-cell: $$f_{\tau}:\tau\to K^{(1)} .$$ We only need to specify the values on the boundary and we assume that $f_{\tau}:\partial\tau\to b$ is a bijection. We again use the attaching rule: $$x\sim y \Longleftrightarrow f_{\tau}(x)=y.$$ The result is the $2$-skeleton $K^{(2)}$, which happens to be the whole $K$. $\square$
A $1$-cell may be attached to $0$-cells as a rope or as a noose:
Meanwhile, a $2$-cell may be attached to $1$-cells as a soap film, a zip-lock bag, or an air-balloon.
We proceed to algebra. If the (topological) boundary of an $(n+1)$-cell $\tau$ is the union of several $n$-cells $a, b, c, ...$: $$\partial \tau = a \cup b \cup c \cup ...,$$ then the boundary operator evaluated at $\tau$ is some linear combination of these cells: $$\partial _{n+1}(\tau) = \pm a\pm b\pm c\pm...$$
What are the signs? They are determined by the orientation of the cell $\tau$ as it is attached to $n$-cells. Let's consider this matching in lower dimensions.
In dimension $1$, the meaning of orientation is simple. It is the direction of the $1$-cell as we think of it as a parametric curve. Then the boundary is the last vertex it is attached to minus the first vertex.
Here, the (topological) boundary of the $1$-cell $a$ is identified, by the attaching map, with the union of two $0$-cells $A, B$ (or just $A$), while the (algebraic) boundary of $a$ is the sum (or a linear combination) of $A, B$: $$\begin{array}{lllllll} f_a(\partial a) &= \{A,B\} &\leadsto &\partial _1 (a) = B-A;\\ f_a(\partial a) &= \{A\} &\leadsto &\partial _1 (a) = A-A=0. \end{array}$$
For a $2$-cell, a direction is chosen for its circular boundary, clockwise or counterclockwise. As we move along the boundary following this arrow, we match the direction to that of each $1$-cell we encounter:
Here, we have three cases: $$\begin{array}{lllllll} f_{\tau}(\partial \tau) &= a \cup b \cup c &\leadsto& \partial _2\tau &= a - b - c;\\ f_{\tau}(\partial \tau) &= a \cup b &\leadsto& \partial _2\tau &= -a - b +a+b=0;\\ f_{\tau}(\partial \tau) &= A &\leadsto & \partial _2\tau &= 0. \end{array}$$
Initially, we can understand the orientation of a cell as an ordering of its vertices, just as we did for simplicial complexes.
Example. Let's evaluate the boundary operator for this $2$-cell, with the orientations of $1$-cells randomly assigned.
We have: $$\partial \tau = -a_1 + a_2 + a_3 + a_4 - a_5 + a_6 + a_7 - a_8.$$ Further, $$\begin{array}{llll} \partial (\partial \tau) &=& \partial( -a_1 + a_2 + a_3 + a_4 - a_5 + a_6 + a_7 - a_8 ) \\ &=& -\partial a_1 + \partial a_2 + \partial a_3 + \partial a_4 - \partial a_5 + \partial a_6 + \partial a_7 - \partial a_8 \\ &=& -(A_1 - A_2) + (A_3 - A_2) + (A_4 - A_3) + (A_5 - A_4) \\ && - (A_5 - A_6) + (A_7 - A_6) + (A_8 - A_7) - (A_8 - A_1) \\ &=& 0. \end{array}$$$\square$
This is what we can construct from the square by gluing one or two pairs of its opposite edges:
Example (square). Here is the simplest cell representation of the square (even though the orientations can be arbitrarily reversed):
The complex $K$ of the square is:
$0$-cells: $A, B, C, D$;
$1$-cells: $a, b, c, d$;
$2$-cells: $\tau$;
boundary operator: $\partial \tau = a + b - c - d; \ \partial a = B-A, \partial b = C-B$, etc. $\square$
Example (cylinder). We can construct the cylinder $C$ by gluing two opposite edges with the following equivalence relation: $(0,y) \sim (1,y)$. The result of this equivalence relation of points can be seen as equivalence of cells: $$a \sim c;\ A \sim D,\ B \sim C.$$
We still have our collection of cells (with some of them identified as before) and only the boundary operator is different:
$\partial \tau = a + b + (-a) - d = b - d ;$
$\partial a = B-A,\ \partial b = B-B=0,\ \partial d = A-A = 0.$ $\\$
The chain complex is $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccc} & & C_2 & \ra{\partial} & C_1 & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b,d > & \ra{?} & < A,B > \\ & & \tau & \mapsto & b - d \\ & & & & a & \mapsto & B-A \\ & & & & b & \mapsto & 0 \\ & & & & d & \mapsto & 0 \\ {\rm kernels:} & & Z_2 = 0 && Z_1 = < b,d > & & Z_0 = < A,B > \\ {\rm images:} & & B_2 = 0 && B_1 = < b - d > & & B_0 = < B-A > \end{array}$$ Here, "kernels" are the kernels of the maps to their right, "images" are the images of the maps to their left. $\square$
Example (Möbius band). In order to build the Möbius band ${\bf M}^2$, we use the equivalence relation: $(0,y) \sim (1,1-y)$. Once again, we can interpret the gluing as equivalence of cells, $a$ and $c$. But this time they are attached to each other with $c$ upside down. It makes sense then to interpret this as equivalence of cells but with a flip of the sign: $$c \sim -a.$$ Here $-a$ represents the edge $a$ with the opposite orientation:
In other words, this is an equivalence of chains. Further, $$A \sim C, \ B \sim D.$$ The boundary operator is:
$\partial \tau = a + b - (-a) - d = 2a + b - d;$
$\partial a = B-A, \ \partial b = A-B, \ \partial d = B-A.$ $\\$
The chain complex is $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccc} & & C_2 & \ra{\partial} & C_1 & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b,d > & \ra{?} & < A,B > \\ & & \tau & \mapsto & 2a + b - d \\ & & & & a & \mapsto & B-A \\ & & & & b & \mapsto & A-B \\ & & & & d & \mapsto & B-A \\ {\rm kernels:} & & Z_2 = 0 && Z_1 = < a+b,a-d > & & Z_0 = < A,B > \\ {\rm images:} & & B_2 = 0 && B_1 = < 2a+b-d > & & B_0 = < B-A > \end{array}$$ $\square$
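A quick machine check of this chain complex (an illustrative sketch assuming Python with sympy): the boundary of the boundary vanishes, and the kernel of $\partial_1$ has two generators, matching $Z_1 = < a+b,a-d >$.

from sympy import Matrix

d2 = Matrix([2, 1, -1])            # boundary of tau: 2a + b - d, in the basis a, b, d
d1 = Matrix([[-1, 1, -1],          # row A: coefficient of A in the boundaries of a, b, d
             [ 1, -1, 1]])         # row B: coefficient of B in the boundaries of a, b, d

print(d1 * d2)                     # the zero vector, so the composition is 0
print(d1.nullspace())              # two generators, spanning the same subgroup as a+b and a-d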
Example (sphere). We can either build the sphere as a quotient of the square or, easier, we just combine these pairs of the consecutive edges:
Then we have only two edges left. Even better, the last option gives us a representation with no $1$-cells whatsoever! $\square$
Example (projective plane). The projective plane comes from a certain quotient of the disk, ${\bf B}^2/_{\sim}$, where $u \sim -u$ is limited to the boundary of the disk. It can also be seen as a quotient of the square:
As we see, the edge of the disk is glued to itself, with a twist. Its algebraic representation is almost the same as before: $p\sim -p$.
$2$-cells: $\tau$ with $\partial\tau = 2p$;
$1$-cells: $p$ with $\partial p = 0$;
$0$-cells: $A$ with $\partial A = 0$.
Example (torus). What if after creating the cylinder by identifying $a$ and $c$ we then identify $b$ and $d$? Like this: $$c \sim a,\ d \sim b.$$ The result is the torus ${\bf T}^2$:
Note how all the corners of the square come together in one. Then the chain complex has very few cells to deal with: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccc} & & C_2 & \ra{\partial} & C_1 & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b > & \ra{?} & < A > \\ & & \tau & \mapsto & 0 \\ & & & & a & \mapsto & 0 \\ & & & & b & \mapsto & 0 \\ {\rm kernels:} & & Z_2 = < \tau > && Z_1 = < a,b > & & Z_0 = < A > \\ {\rm images:} & & B_2 = 0 && B_1 = 0 & & B_0 = 0 \end{array}$$
Example (Klein bottle). What if we flip one of the edges before gluing? Like this: $$c \sim -a,\ d \sim b.$$
The corners once again come together in one and the chain complex has very few cells to deal with: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccc} & & C_2 & \ra{\partial} & C_1 & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b > & \ra{?} & < A > \\ & & \tau & \mapsto & 2a \\ & & & & a & \mapsto & 0 \\ & & & & b & \mapsto & 0 \\ {\rm kernels:} & & Z_2 = 0 && Z_1 = < a,b > & & Z_0 = < A > \\ {\rm images:} & & B_2 = 0 && B_1 = < 2a > & & B_0 = 0 \end{array}$$ $\square$
This is how we can build in dimension $3$. We start with a cube. First, we glue its top and bottom, with each point identified to the one directly underneath. Then we cut the cube into two by a horizontal square.
Exercise. Consider the relations listed above to identify the edges of this square, just as we did for the torus, etc. Use these relations to glue the opposite faces of the cube and compute the resulting chain complex.
Example (balls). We represent ${\bf B}^n$: Cells:
$n$-cells: ${\sigma}$,
$(n-1)$-cells: $a$,
$0$-cells: $A$. $\\$
The boundary operator:
${\partial}{\sigma} = a,$
${\partial}a = 0,$
${\partial}A = 0.$
The chain complex is below: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccccccccccccc} & C_n & \ra{\partial} & C_{n-1} & \ra{\partial} & C_{n-2} & \to ... \to & C_0 \\ & < \sigma > & \ra{\cong} & < a > & \ra{0} & 0 & \to ... \to & < A > \\ & {\sigma} & \mapsto & a & \mapsto & 0 & & A & \\ {\rm kernels:} & Z_n = 0 & & Z_{n-1} = < a > & & Z_{n-2} = 0 & ... & Z_0 = < A > \\ {\rm images:} & B_n = 0 & & B_{n-1} = < a > & & B_{n-2} = 0 & ... & B_0 = 0 &\square \end{array}$$
Example (spheres). The sphere ${\bf S}^{n-1}$ has the same representation as the ball except that the $n$-cell is missing. The cells are the $(n-1)$-cell $a$ and the $0$-cell $A$.
The chain complex is: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lcccccccccccccc} & C_n & \ra{\partial} & C_{n-1} & \ra{\partial} & C_{n-2} & \to ... \to & C_0 \\ & 0 & \ra{0} & < a > & \ra{0} & 0 & \to ... \to & < A > & \\ & & & a & \mapsto & 0 & & A & \\ {\rm kernels:} & Z_n = 0 & & Z_{n-1} = < a > & & Z_{n-2} = 0 & ... & Z_0 = < A > \\ {\rm images:} & B_n = 0 & & B_{n-1} = 0 & & B_{n-2} = 0 & ... & B_0 = 0 &\quad\square \end{array}$$
$\lim_{n\to\infty}\frac{n -\big\lfloor\frac{n}{2}\big\rfloor+\big\lfloor\frac{n}{3}\big\rfloor-\dots}{n}$, a Brilliant problem
I encountered a question while visiting Brilliant:
$\space\space\space\space\lim_{n\to\infty}s_n$
$=\lim_{n\to\infty}\frac{n - \big \lfloor \frac{n}{2} \big \rfloor+ \big \lfloor \frac{n}{3} \big \rfloor - \big \lfloor \frac{n}{4} \big \rfloor + \dots}{n}$
$=\lim_{n\to\infty}\frac{\sum_{k=1}^n(-1)^{k+1}\lfloor\frac{n}{k}\rfloor}{n}$
The answer on the website above doesn't really satisfy me, as it does not explain how the sequence converges, and I don't understand how taking the subsequence $n_k=k!$ solves the problem.
I had some idea but doesn't seems to work:
1) It is easy to show $$s_n=\frac{\sum_{k=1}^{\big\lceil\frac{n}{2}\big\rceil}(\big\lfloor\frac{n}{2k-1}\big\rfloor-\big\lfloor\frac{n}{2k}\big\rfloor)}{n}$$
2) On the other hand,
$$s_n\approx\sum_{k=1}^n(-1)^{k+1}\frac{1}{k}\to\ln2$$
So I am wondering how close $s_n$ is to the alternating harmonic series, given that $$\forall(n,k\in\mathbb N:n\ge k),\space\space\frac{n}{k}\in\Bigg[\bigg\lfloor\frac{n}{k}\bigg\rfloor,\bigg\lfloor\frac{n}{k}\bigg\rfloor+\bigg(\frac{k-1}{k}\bigg)\Bigg]$$
I tried to look at the graph: the sequence $s_n$ very likely converges to $\ln 2$, and the alternating harmonic series seems to be bounded by the graph of $s_n$ most of the time (see also the short numerical sketch after this list).
3) Also I observed that $s_8=\frac{8-4+2-2+1-1+1-1}{8}=\frac{1}{2}$, the terms cancelled nicely, but I am afraid that the analogue is not generally true for all $s_{2^k}$.
4) I have tried to use Stolz–Cesàro Theorem, but doesn't seems useful neither.
5) I know that $\forall x,y\in\mathbb R:x+y\in\mathbb Z, \lfloor x\rfloor+\lceil y\rceil=x+y$, which may be useful since we may thus write $s_n$ in a more beautiful manner?
6) If there is no $(-1)^{k+1}$, I think we can treat $s_n$ as a Riemann sum, but well, ... , seems useless.
7) I have tried to think about how many of the summands of $ns_n$ are integers.
8) I have tried to think of $\big\lfloor\frac{n}{k}\big\rfloor$ as the number of positive integer multiples of $k$ that are at most $n$, and I then considered the sets of numbers that are counted and uncounted, respectively, but the question doesn't seem that easy.
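Here is the quick numerical sketch mentioned above (plain Python, nothing clever), which is what convinced me the limit should be $\ln 2$:

from math import log

def s(n):
    return sum((-1) ** (k + 1) * (n // k) for k in range(1, n + 1)) / n

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, s(n), log(2))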
Does this help? (1) (2) (3)
Any help will be appreciated. Thank you!
Remarks: I was wondering whether there is a deeper subject studying this (if so, references please). Can this (or variants) be represented as a simpler function?
real-analysis sequences-and-series limits ceiling-and-floor-functions
Tony Ma
$\begingroup$ (+1), this question takes "shows research effort" to a new level! $\endgroup$ – John Doe Apr 27 '18 at 12:41
$\begingroup$ Why does the first link go to desmos instead of Brilliant? $\endgroup$ – Barry Cipra Apr 27 '18 at 13:18
$\begingroup$ Corrected. Thx. $\endgroup$ – Tony Ma Apr 28 '18 at 0:45
$\begingroup$ Related question: math.stackexchange.com/questions/115824/… $\endgroup$ – Aryabhata Apr 28 '18 at 1:36
Let $f : [0, 1] \to \mathbb{R}$ be defined by
$$f(x) = \mathbf{1}_{\{\text{$x > 0$ and $\lfloor 1/x \rfloor$ is odd}\}} = \sum_{i=1}^{\infty} \mathbf{1}_{\{ 2i-1 \leq \frac{1}{x} < 2i \}}. $$
Then by double counting, we find that
\begin{align*} s_n = \sum_{k=1}^{n} (-1)^{k-1} \bigg\lfloor \frac{n}{k} \bigg\rfloor &= \sum_{k=1}^{n} (-1)^{k-1} \sum_{j=1}^{n} \mathbf{1}_{\{kj \leq n\}} \\ &= \sum_{j=1}^{n} \sum_{k=1}^{n} (-1)^{k-1} \mathbf{1}_{\{kj \leq n\}} = \sum_{j=1}^{n} f\left(\frac{j}{n}\right). \end{align*}
Now we utilize the following lemma:
Lemma. Let $f : [0, 1] \to \mathbb{R}$ be Riemann integrable. Then $$ \lim_{n\to\infty} \sum_{j=1}^{n} f\left(\frac{j}{n}\right)\frac{1}{n} = \int_{0}^{1} f(x) \, dx. $$
From this, we know that $s_n/n$ converges and
$$ \lim_{n\to\infty} \frac{s_n}{n} = \int_{0}^{1} f(x) \, dx = \sum_{i=1}^{\infty} \left( \frac{1}{2i-1} - \frac{1}{2i} \right) = \log 2. $$
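As a small sanity check of the double-counting step (a sketch in plain Python), the identity $s_n=\sum_{j=1}^{n} f(j/n)$ can be verified exactly for a few $n$; note that $\lfloor 1/x\rfloor$ for $x=j/n$ is computed exactly as $\lfloor n/j\rfloor$ to avoid floating point.

for n in (7, 50, 123):
    lhs = sum((-1) ** (k - 1) * (n // k) for k in range(1, n + 1))
    rhs = sum(1 for j in range(1, n + 1) if (n // j) % 2 == 1)
    assert lhs == rhs
print("identity verified")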
Sangchul Lee
$\begingroup$ Sorry, I don't understand your notation in line 1: How can you sum from $i=1$ to $\infty$ but also requiring {$2i-1\le\frac{1}{x}\lt 2i$}? $\endgroup$ – Tony Ma Apr 28 '18 at 1:05
$\begingroup$ @TonyMa, The notation $\mathbf{1}_{\{\cdots\}}$ yields $1$ if $\cdots$ is true and $0$ otherwise. I just partitioned the set $$A = \{ x \in [0, 1] : \text{$x > 0$ and $\lfloor 1/x \rfloor$ is odd}\}$$ into $$A = \bigcup_{i=1}^{\infty} \{ x \in (0, 1] : 2i-1 \leq (1/x) < 2i \},$$ which yields the first line. In particular, at most one term in $\sum_{i=1}^{\infty} \mathbf{1}_{\{ 2i-1 \leq \frac{1}{x} < 2i \}}$ is non-zero. $\endgroup$ – Sangchul Lee Apr 28 '18 at 2:36
$\begingroup$ Oic, I used to use the notation $\sum_{i\in\mathbb N:2i-1\le\frac{1}{x}\lt2i}1$ $\endgroup$ – Tony Ma Apr 28 '18 at 11:30
$\begingroup$ @TonyMa, Since the discontinuities of $f$ occur at $\{\frac{1}{n}:n\geq 1\}\cup\{0\}$, it is not hard to directly establish the Riemann integrability. But of course we can count on a sledgehammer method; it can be proved that a bounded function is Riemann integrable if and only if the set of discontinuities is measure-zero. $\endgroup$ – Sangchul Lee Apr 28 '18 at 12:30
$\begingroup$ I think I understand it. How a wonderful solution! I have think up the idea of this approach but I haven't try to use $f(\frac{j}{n})$ to make it be a Riemann sum. Truly beautiful. Thank so much $\endgroup$ – Tony Ma Apr 28 '18 at 12:38
Note that $$\frac{1}{n}\sum_{k=1}^{n}(-1)^{k+1}\left\lfloor\frac{n}{k}\right\rfloor=\frac{1}{n}\sum_{k=1}^{n}\left\lfloor\frac{n}{k}\right\rfloor-\frac{1}{n/2}\sum_{k=1}^{\lfloor n/2\rfloor}\left\lfloor\frac{n/2}{k}\right\rfloor=\frac{D(n)}{n}-\frac{D(n/2)}{n/2},$$ where $D(x)= \sum_{k\geq 1}\left\lfloor\frac{x}{k}\right\rfloor$ is the divisor summatory function. It is known that $$D(x) = x\ln(x) + x(2\gamma -1) + O(\sqrt{x})\implies \frac{D(x)}{x} = \ln(x) + (2\gamma -1) + o(1)$$ (see also Dirichlet's Divisor Problem). Hence, as $n$ goes to $+\infty$, $$\frac{1}{n}\sum_{k=1}^{n}(-1)^{k+1}\left\lfloor\frac{n}{k}\right\rfloor= \ln(n)-\ln(n/2)+o(1)\to \ln(2).$$
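A quick numerical sketch of this identity and of the limit (plain Python; $n$ is taken even so that $D(n/2)$ has an integer argument):

from math import log

def D(m):
    return sum(m // k for k in range(1, m + 1))

for n in (1000, 100000):
    s = sum((-1) ** (k + 1) * (n // k) for k in range(1, n + 1))
    print(s / n, D(n) / n - D(n // 2) / (n // 2), log(2))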
Robert Z
$\begingroup$ For first sentence, do you assume $n$ is even? For second, do you mean $\ln$ instead of $\log$? $\endgroup$ – Tony Ma Apr 28 '18 at 0:51
$\begingroup$ @TonyMa usually $\log$ and $\ln$ are used interchangeably. $\endgroup$ – John Doe Apr 28 '18 at 3:48
$\begingroup$ @TonyMa No, $n$ is any integer number. I mean $\ln$. $\endgroup$ – Robert Z Apr 28 '18 at 5:48
Elementary high school approach
One can show immediately with the squeeze theorem that $$L_1=\lim_{n\to\infty}\left(\sum_{k=1}^{2\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)=\log(2),$$
using the simple fact that $\lim_{n\to\infty}(H_{2n}-H_n)=\log(2)$.
But guess what! Your limit is
$$\lim_{n\to\infty}\left(\sum_{k=1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{n}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)=L_1,$$ and therefore you can use the first limit to calculate your initial limit and then show that the sum of the remaining terms tends to $0$, which is straightforward.
Adding some steps requested by OP
WLOG, for the comfort of calculations, we can replace in the initial limit $n$ by $2n$, and using the double inequality with floor function $x\ge\left\lfloor x\right\rfloor\ge x-1$, we have
$$H_{2\left\lfloor\sqrt{n}\right\rfloor}\ge\sum_{k=1}^{2\left\lfloor\sqrt{n}\right\rfloor}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor\ge H_{2\left\lfloor\sqrt{n}\right\rfloor}-\frac{\left\lfloor\sqrt{n}\right\rfloor}{n}$$
$$-H_{\left\lfloor\sqrt{n}\right\rfloor}+\frac{\left\lfloor\sqrt{n}\right\rfloor}{n}\ge-\sum_{k=1}^{\left\lfloor\sqrt{n}\right\rfloor}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\ge-H_{\left\lfloor\sqrt{n}\right\rfloor},$$ that give $$H_{2\left\lfloor\sqrt{n}\right\rfloor}-H_{\left\lfloor\sqrt{n}\right\rfloor}+\frac{\left\lfloor\sqrt{n}\right\rfloor}{n}\ge\sum_{k=1}^{2\left\lfloor\sqrt{n}\right\rfloor}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{\left\lfloor\sqrt{n}\right\rfloor}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\ge H_{2\left\lfloor\sqrt{n}\right\rfloor}-H_{\left\lfloor\sqrt{n}\right\rfloor}-\frac{\left\lfloor\sqrt{n}\right\rfloor}{n}.$$ Letting $n\to\infty$ you get the value of the limit $L_1$. If denoting $\left\lfloor\sqrt{n}\right\rfloor=m$, we have the type of limit mentioned above, $\lim_{m\to\infty}(H_{2m}-H_m)=\log(2)$.
Your limit, after replacing $n$ by $2n$, is
$$\lim_{n\to\infty}\sum_{k=1}^{2n}(-1)^{k-1}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor=\lim_{n\to\infty}\left(\sum_{k=1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-2\sum_{k=1}^{n}\frac{1}{2n}\left\lfloor\frac{2n}{2k}\right\rfloor\right)$$ $$=\lim_{n\to\infty}\left(\sum_{k=1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{n}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right).$$
What's left to do? Using in the last limit that limit $L_1$ calculated above. $$\lim_{n\to\infty}\left(\sum_{k=1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{n}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)$$ $$=\lim_{n\to\infty}\left(\sum_{k=1}^{2\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor+\sum_{k=2\left\lfloor \sqrt{n} \right\rfloor+1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor-\sum_{k=\left\lfloor \sqrt{n} \right\rfloor+1}^n\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)$$ $$=\underbrace{\lim_{n\to\infty}\left(\sum_{k=1}^{2\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=1}^{\left\lfloor \sqrt{n} \right\rfloor}\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)}_{\displaystyle \log(2)}+\underbrace{\lim_{n\to\infty}\left(\sum_{k=2\left\lfloor \sqrt{n} \right\rfloor+1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=\left\lfloor \sqrt{n} \right\rfloor+1}^n\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor\right)}_{\displaystyle 0}$$ $$=\log(2),$$ and the limit tending to $0$ can be done by arranging the sums under the limit under the form of an alternating sum and then squeezing the sum.
Further explanations on the limit tending to $0$
It's clear that $$0\le\sum_{k=2\left\lfloor \sqrt{n} \right\rfloor+1}^{2n}\frac{1}{2n}\left\lfloor\frac{2n}{k}\right\rfloor-\sum_{k=\left\lfloor \sqrt{n} \right\rfloor+1}^n\frac{1}{n}\left\lfloor\frac{n}{k}\right\rfloor=\frac{1}{2n}\sum_{k=\underbrace{2\left\lfloor \sqrt{n} \right\rfloor+1}_{m}}^{2n}(-1)^{k-1}\left\lfloor\frac{2n}{k}\right\rfloor$$ $$=\frac{1}{2n}\left(\left\lfloor\frac{2n}{m}\right\rfloor-\left\lfloor\frac{2n}{m+1}\right\rfloor+\left\lfloor\frac{2n}{m+2}\right\rfloor-\left\lfloor\frac{2n}{m+3}\right\rfloor+\left\lfloor\frac{2n}{m+4}\right\rfloor-\left\lfloor\frac{2n}{m+5}\right\rfloor+\left\lfloor\frac{2n}{m+6}\right\rfloor-\cdots\right)$$ $$\le\frac{1}{2n}\left(\left\lfloor\frac{2n}{m}\right\rfloor-\left\lfloor\frac{2n}{m+1}\right\rfloor+\left\lfloor\frac{2n}{m+1}\right\rfloor-\left\lfloor\frac{2n}{m+3}\right\rfloor+\left\lfloor\frac{2n}{m+3}\right\rfloor-\left\lfloor\frac{2n}{m+5}\right\rfloor+\left\lfloor\frac{2n}{m+5}\right\rfloor-\cdots\right)$$ $$=\frac{1}{2n}\left(\left\lfloor\frac{2n}{m}\right\rfloor-1\right)=\frac{1}{2n}\left(\left\lfloor\frac{2n}{2\left\lfloor \sqrt{n} \right\rfloor+1}\right\rfloor-1\right),$$ where letting $n\to\infty$, we obviously get $0$ which accounts for the limit tending to $0$ in my solution.
A final remark
One should have dealt from the very beginning with this part of the limit tending to $0$ since we had it in the form with the alternating sum as expected.
user 1591719
$\begingroup$ Maybe you can add some step? such as how to use squeeze theorem and how my limit is equal to $L_1$? Thank you $\endgroup$ – Tony Ma Apr 29 '18 at 2:21
$\begingroup$ @TonyMa Is everything crystal clear now? $\endgroup$ – user 1591719 Apr 29 '18 at 9:55
$\begingroup$ Well...Sounds pretty cool. I get your idea and I think I might study it carefully later. But do you just proof one of the subsequences $s_{2k}\to\ln2$? How about $s_{2k-1}$? Or do u know it also converges to $\ln 2$ for sure? Does it have a similar proof? $\endgroup$ – Tony Ma Apr 29 '18 at 11:43
$\begingroup$ @TonyMa That's why I started with writing WLOG (without loss of generality). $s_{2k-1}$ follows the same line of proving the limit. $\endgroup$ – user 1591719 Apr 29 '18 at 11:48
Ulam matrix
In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number with certain properties. Ulam matrices were introduced by Stanislaw Ulam in his 1930 work on measurable cardinals: they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.[1]
Definition
Suppose that κ and λ are cardinal numbers, and let F be a λ-complete filter on λ. An Ulam matrix is a collection of subsets Aαβ of λ indexed by α in κ, β in λ such that
• If β is not γ then Aαβ and Aαγ are disjoint.
• For each β, the union of the sets Aαβ over α in κ is in the filter F.
References
1. Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7, Zbl 1007.03002
• Ulam, Stanisław (1930), "Zur Masstheorie in der allgemeinen Mengenlehre", Fundamenta Mathematicae, 16 (1): 140–150
Compact semigroup
In mathematics, a compact semigroup is a semigroup in which the sets of solutions to equations can be described by finite sets of equations. The term "compact" here does not refer to any topology on the semigroup.
This article is about equations over semigroups. For semigroups compact with respect to a topology, see Topological semigroup. For semigroups of compact operators, see C0-semigroup § Compact semigroups.
Let S be a semigroup and X a finite set of letters. A system of equations is a subset E of the Cartesian product X∗ × X∗ of the free monoid (finite strings) over X with itself. The system E is satisfiable in S if there is a map f from X to S, which extends to a semigroup morphism f from X+ to S, such that for all (u,v) in E we have f(u) = f(v) in S. Such an f is a solution, or satisfying assignment, for the system E.[1]
Two systems of equations are equivalent if they have the same set of satisfying assignments. A system of equations is independent if it is not equivalent to a proper subset of itself.[1] A semigroup is compact if every independent system of equations is finite.[2]
Examples
• A free monoid on a finite alphabet is compact.[3]
• A free monoid on a countable alphabet is compact.[4]
• A finitely generated free group is compact.[5]
• A trace monoid on a finite set of generators is compact.[4]
• The bicyclic monoid is not compact.[6]
Properties
• The class of compact semigroups is closed under taking subsemigroups and finite direct products.[7]
• The class of compact semigroups is not closed under taking morphic images or infinite direct products.[7]
Varieties
The class of compact semigroups does not form an equational variety. However, a variety of monoids has the property that all its members are compact if and only if all finitely generated members satisfy the maximal condition on congruences (any family of congruences, ordered by inclusion, has a maximal element).[8]
References
1. Lothaire (2011) p. 444
2. Lothaire (2011) p. 458
3. Lothaire (2011) p. 447
4. Lothaire (2011) p. 461
5. Lothaire (2011) p. 462
6. Lothaire (2011) p. 459
7. Lothaire (2011) p. 460
8. Lothaire (2011) p. 466
• Lothaire, M. (2011). Algebraic combinatorics on words. Encyclopedia of Mathematics and Its Applications. Vol. 90. With preface by Jean Berstel and Dominique Perrin (Reprint of the 2002 hardback ed.). Cambridge University Press. ISBN 978-0-521-18071-9. Zbl 1221.68183.
\begin{document}
\title[FPP on a hyperbolic graph]{First Passage percolation on a hyperbolic graph admits bi-infinite geodesics}
\author{Itai Benjamini} \address{The Weizmann Institute, Rehovot, Israel}
\email{[email protected]}
\author[Tessera]{Romain Tessera*} \address{Laboratoire de Mathématiques d'Orsay, Univ. Paris-Sud, CNRS, Université Paris-Saclay\\ 91405 Orsay\\ FRANCE}\email{[email protected]} \thanks{{$*$} Supported in part by ANR project ANR-14-CE25-0004 ``GAMME"} \email{[email protected]} \date{\today} \subjclass[2010]{82B43, 51F99, 97K50} \keywords{First passage percolation, two-sided geodesics, hyperbolic graph, Morse geodesics.}
\baselineskip=16pt
\begin{abstract} Given an infinite connected graph, a way to randomly perturb its metric is to assign random i.i.d.\ lengths to the edges. An open question attributed to Furstenberg (\cite{Ke}) is whether there exists a bi-infinite geodesic in first passage percolation on $\mathbb{Z}^2$, and more generally on $\mathbb{Z}^n$ for $n\geq 2$. Although the answer is generally conjectured to be negative, we give a positive answer for graphs satisfying some negative curvature assumption. Assuming only strict positivity and finite expectation of the random lengths, we prove that if a graph $X$ has bounded degree and contains a Morse geodesic (e.g.\ is non-elementary Gromov hyperbolic), then almost surely, there exists a bi-infinite geodesic in first passage percolation on $X$. \end{abstract}
\maketitle \tableofcontents
\section{Introduction}
First passage percolation is a model of random perturbation of a given geometry. In this paper, we shall restrict to the simplest model, where random i.i.d lengths are assigned to the edges of a fixed graph. We refer to \cite{ADH,GK,Ke} for background and references.
Let us briefly recall how FPP is defined. We consider a connected non-oriented graph $X$, whose set of vertices (resp.\ edges) is denoted by $V$ (resp.\ $E$). For every function $\omega:E\to (0,\infty)$, we equip $V$ with the weighted graph metric $d_{\omega}$, where each edge $e$ has weight $\omega(e)$. In other words, for every $v_1,v_2\in V$, $d_{\omega}(v_1,v_2)$ is defined as the infimum over all path $\gamma=(e_1,\ldots, e_m)$ joining $v_1$ to $v_2$ of $|\gamma|_{\omega}:=\sum_{i=1}^m\omega(e_i)$. Observe that the simplicial metric on $V$ corresponds to the case where $\omega$ is constant equal to $1$, we shall simply denote it by $d$. We will now consider a probability measure on the set of all weight functions $\omega$. We let $\nu$ be a probability measure supported on $[0,\infty)$. Our model consists in choosing independently at random the weights $\omega(e)$ according to $\nu$. More formally, we equip the space $\Omega=[0,\infty)^E$ with the product probability that we denote by $P$.
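To illustrate the model (the following numerical sketch plays no role in the proofs), the random metric $d_{\omega}$ on a finite box of $\mathbb{Z}^2$ with i.i.d.\ exponential edge lengths can be sampled with any standard shortest-path routine; a minimal Python sketch, assuming the \texttt{networkx} and \texttt{numpy} libraries, reads as follows.
\begin{verbatim}
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
G = nx.grid_2d_graph(50, 50)             # finite box of Z^2
for e in G.edges:
    G.edges[e]["w"] = rng.exponential()  # i.i.d. edge length omega(e)

# d_omega between two vertices = weighted shortest-path distance
print(nx.shortest_path_length(G, (0, 0), (49, 49), weight="w"))
\end{verbatim}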
A famous open problem in percolation theory is whether, with positive probability, first passage percolation on $\mathbb{Z}^2$ admits a bi-infinite geodesic. In his Saint-Flour course from 1984, Kesten attributes this question to Furstenberg (see \cite{Ke}). Licea and Newman \cite{LN} made partial progress on this problem, which is still open, and mentioned that the conjecture that there are no such geodesics arose independently in the physics community studying spin glasses. Wehr and Woo \cite{WW} proved the absence of two-sided infinite geodesics in a half-plane, assuming the edge lengths are continuously distributed with finite mean.
For Riemannian manifolds, the existence of bi-infinite geodesics is influenced by the curvature of the space. It is well-known that complete simply connected non-positively curved Riemannian manifolds (such as the Euclidean space $\mathbb{R}^n$ or the hyperbolic space $\mathbb{H}^n$) admit bi-infinite geodesics. Therefore, simply connected manifolds without two-sided geodesics must have positively curved regions. It is easy to come up with examples of complete Riemannian surfaces with bubble-like structures that create short cuts avoiding larger and larger balls around some origin. See \cite{Ba} for background.
To help the reader's intuition, let us roughly describe a similar example in the graph setting. Starting with the standard Cayley graph of $\mathbb{Z}^2$, it is not difficult to choose edge lengths among the two possible values $1/10$ and $1$, such that the resulting weighted graph has no bi-infinite geodesics. To do so, consider a sequence of squares $C_n$ centered at the origin, whose size grows faster than any exponential sequence (e.g.\ like $n^n$). Then attribute length $1/10$ to the edges along $C_n$ for all $n$, and $1$ to all other edges. This creates large ``bubbles'' with relatively small necks in the graph (which in a sense can be interpreted as large positively curved regions). One easily checks that for all $n$, for every pair of points at large enough distance from the origin, any geodesic between them never enters $C_n$ (as it is more efficient to go around along the shorter edges of its boundary than to travel inside it). Random triangulations admit bubbles and indeed have no two-sided infinite geodesics. Geodesics go via ``mountain passes''; these are random analogues of the Morse geodesics defined below. See \cite{CL} for the study of FPP on random triangulations.
The Euclidean plane being flat, its discrete counterpart $\mathbb{Z}^2$ (and more generally $\mathbb{Z}^d$ for $d\geq 2$) is in some sense at criticality for the question of existence of bi-infinite geodesics in FPP. Therefore, one should expect that in presence of negative curvature, FPP a.s.\ exhibits bi-infinite geodesics. For instance, this should apply to FPP on Cayley graphs of groups acting properly cocompactly by isometries on the hyperbolic space $\mathbb{H}^d$. We will see that this is indeed the case.
Let us first introduce some notation. Let $X$ be a simple graph, with no double edges. Recall that a path $\gamma=(e_1,\ldots, e_n)$ between two vertices $x,y$ is a sequence of consecutive edges joining $x$ to $y$. We denote $(x=\gamma(0),\ldots,\gamma(n)=y)$ the set of vertices such that for all $0\leq i<n$, $\gamma(i)$ and $\gamma(i+1)$ are joined by the edge $e_{i+1}$. For all $i<j$, we shall also denote by $\gamma([i,j])$ the subpath $(e_{i+1},\ldots,e_j)$ joining $\gamma(i)$ to $\gamma(j)$. Similarly, we define infinite paths indexed by $\mathbb{N}$ (resp.\ bi-infinite paths indexed by $\mathbb{Z}$).
\begin{defn}
Let $X$ be an infinite connected graph, and let $C\geq 1$ and $K\geq 0$. A path $\gamma$ of length $n$ between two vertices $x$ and $y$ is called a $(C,K)$-quasi-geodesic finite path if for all $0<i<j\leq n$, $$j-i\left(=\left|\gamma[i,j]\right|\right)\leq Cd(\gamma(i),\gamma(j))+K.$$ Similarly, we define $(C,K)$-quasi-geodesic infinite (or bi-infinite) paths. An infinite (or a bi-infinite) path will simply be called a quasi-geodesic if it is $(C,K)$-quasi-geodesic for some constants $C$ and $K$. \end{defn} \begin{defn} A bi-infinite path $\gamma$ in $X$ is called a {\em Morse quasi-geodesic} (resp.\ Morse geodesic) if it is a quasi-geodesic (resp.\ a geodesic) and if it satisfies the so-called Morse property: for all $C\geq 1$ and $K>0$, there exists $R$ such that every $(C,K)$-quasi-geodesic joining two points of $\gamma$ remains inside the $R$-neighborhood of $\gamma$. \end{defn} It is well-known and easy to deduce from its definition that in a weighted graph with bounded degree, whose weights are bounded away from 0, a Morse geodesic always lies at bounded distance from a bi-infinite geodesic.
\begin{thm}\label{thm:Main} Let $X$ be an infinite connected graph with bounded degree, that contains a Morse quasi-geodesic $\gamma$. Assume $\mathbb{E}\omega_e<\infty$ and $\nu(\{0\})=0$. Then for a.e.\ $\omega$, $X_{\omega}$ admits a bi-infinite geodesic. Moreover for a.e.\ $\omega$ there exists a finite subset $A\subset X$ such that for every sequence of pairs of vertices $(x_n,y_n)$ going to infinity on opposite sides of $\gamma$, the $\omega$-geodesic between $x_n$ and $y_n$ crosses $A$.
\end{thm}
That is, if the underlying graph admits a two-sided infinite Morse geodesic, then the random FPP metric a.s.\ has a two-sided infinite geodesic.
Very recently, Ahlberg and Hoffman \cite{AH} made substantial progress on the structure of geodesic rays for FPP on $\mathbb{Z}^2$, solving the midpoint problem (from \cite{BKS}), which is related to the non-existence of bi-infinite geodesics. They showed that the probability that a shortest path between $(-n,0)$ and $(n, 0)$ goes via $(0,0)$ tends to $0$ with $n$. Their upper bound on this probability goes to $0$ very slowly, and is likely far from the truth. The theorem above shows that, along a Morse geodesic, the probability of the midpoint event does not go to $0$ with the distance. This can be useful in proving linear variance.
We briefly recall the definition of a hyperbolic graph. A geodesic triangle in a graph $X$ consists of a triplet of vertices $x_0,x_1,x_2\in V$, and of geodesic paths $\gamma_0,\gamma_1,\gamma_2$ such that $\gamma_i$ joins $x_{i+1}$ to $x_{i+2}$ where $i\in \mathbb{Z}/3\mathbb{Z}$. Given $\delta\geq 0$, a geodesic triangle is called $\delta$-thin if for every $i\in \mathbb{Z}/3\mathbb{Z}$, every vertex $v_i$ on $\gamma_i$ lies at distance at most $\delta$ from either $\gamma_{i+1}$ or $\gamma_{i+2}$. Said informally, a geodesic triangle is $\delta$-thin if every side is contained in the $\delta$-neighborhood of the other two sides. A graph is hyperbolic if there is $\delta < \infty$ so that all geodesic triangles are $\delta$-thin. It is well-known \cite{Gr} that in a hyperbolic graph, any bi-infinite quasi-geodesic is Morse. In particular, we deduce the following
\begin{cor} Let $X$ be a hyperbolic graph with bounded degree containing at least one bi-infinite geodesic. Assume $\mathbb{E}\omega_e<\infty$ and $\nu(\{0\})=0$. Then for a.e.\ $\omega$, $X_{\omega}$ admits a bi-infinite geodesic. Moreover for every bi-infinite quasi-geodesic $\gamma$, for a.e.\ $\omega$ there exists a finite subset $A\subset X$ such that for every sequence of pairs of vertices $(x_n,y_n)$ going to infinity on opposite sides of $\gamma$, the $\omega$-geodesic between $x_n$ and $y_n$ crosses $A$. \end{cor}
Note that the case where $\nu$ is supported in an interval $[a,b]\subset (0,\infty)$ is essentially obvious. Indeed, for all $\omega$, the weighted graph $X_{\omega}$ is bi-Lipschitz equivalent to $X$. We deduce that a Morse quasi-geodesic in $X$ remains a Morse quasi-geodesic in $X_{\omega}$ (adapting the definition to weighted graphs), and therefore lies at bounded distance from an actual bi-infinite geodesic.
We finish this introduction mentioning that Theorem \ref{thm:Main} applies to a wide class of Cayley graphs, including Cayley graphs of relatively hyperbolic groups, Mapping Class groups, and so on. \section{Preliminary lemmas} We start with a useful characterization of Morse quasi-geodesics, which one may take as a definition. \begin{prop}\label{prop:DMS}\cite[Proposition 3.24 (3)]{DMS} A bi-infinite quasi-geodesic $\gamma_0$ is Morse if and only if the following holds. For every $C\geq 1$, there exists $D\geq 0$ such that every path of length $\leq Cn$ connecting two points $x,y$ on $\gamma_0$ at distance $\geq n$ crosses the $D$-neighborhood of the middle third of the segment of $\gamma_0$ joining $x$ to $y$. \end{prop} We deduce the following criterion, which we shall use in the sequel. \begin{lem} \label{lem:MQG} Let $X$ be an infinite connected graph with bounded degree. Assume $\gamma_0$ is a Morse quasi-geodesic. Then there exists an increasing function $\phi:\mathbb{R}_+\to \mathbb{R}_+$ such that $\lim_{t\to \infty}\phi(t)=\infty$, and with the following property. Assume \begin{itemize} \item $x,y$ belong to $\gamma_0$; \item $x'$ and $y'$ are vertices such that $d(x,x')=d(y,y')=R$, and $d(x',y')\geq 10R$; \item $\gamma$ is a path joining $x'$ to $y'$, and remains outside of the $R$-neighborhood of $\gamma_0$.
\end{itemize}
Then
$$|\gamma|\geq \phi(R)d(x,y).$$ \end{lem} \begin{proof} Assume by contradiction that there exists a constant $C>0$, and for every $n$, an integer $R\geq n$, vertices $x,x',y,y'$ and a path $\gamma$ of length $\leq Cd(x,y)$ such that $d(x,x')=d(y,y')=R$, $d(x',y')\geq 10R$, and $\gamma$ avoids the $R$-neighborhood of $\gamma_0$. By choosing $n$ large enough, we can assume that $R>D$. Applying Proposition \ref{prop:DMS} to the path obtained by concatenating $\gamma$ with geodesics from $x'$ to $x$ and from $y'$ to $y$ yields the desired contradiction. \end{proof}
The hypothesis $\mathbb{E}\omega_e<\infty$ is used (only) in the following trivial lemma.
\begin{lem}\label{lem:LLN} Let $X$ be a connected graph, and let $\gamma$ be a self avoiding path. Assume $0<b=\mathbb{E}\omega_e<\infty$. Then for a.e.\ $\omega$, there exists $r_0=r_0(\omega)$ such that for all $i\leq 0\leq j$,
$$|\gamma([i,j])|_{\omega}\leq 2b(j-i)+r_0.$$ \end{lem} \begin{proof} This immediately follows from the law of large number, using that the edges length distributions are i.i.d. \end{proof}
Our assumption $\nu(\{0\})=0$ is used to prove the following two lemmas.
\begin{lem} Let $X$ be an infinite connected graph with bounded degree and assume that $\nu(\{0\})=0$. There exists an increasing function $\alpha:(0,\infty)\to (0,1]$ such that $\lim_{t\to 0}\alpha(t)=0$, and such that for all finite path $\gamma$ and all $\varepsilon>0$, $$
P\left(|\gamma|_{\omega}\leq \varepsilon|\gamma|\right)\leq \alpha(\varepsilon)^{|\gamma|}. $$ \end{lem} \begin{proof}
The assumption implies that for all $\lambda>0$, there exists $\delta>0$ such that $\nu([0,\delta])<\lambda$. Let $\gamma$ be a path of length $n$. Assume that $|\gamma|_{\omega}\leq \varepsilon |\gamma|$, and let $N$ be the number of edges of $\gamma$ with $\omega$-length $\geq \delta$. It follows that $$\delta N \leq \varepsilon n,$$ so we deduce that $N\leq \varepsilon n/\delta$. This imposes that at least $(1-\varepsilon/\delta)n$ edges of $\gamma$ have $\omega$-length $\leq \delta$. Recall that by Stirling's formula, given some $0<\alpha<1$, the number of ways to choose $\alpha n$ edges in a path of length $n$ is $$\sim \frac{n^n}{(\alpha n)^{\alpha n}((1-\alpha) n)^{(1-\alpha) n}}= (1/\alpha)^{\alpha n}\left(1/(1-\alpha)\right)^{(1-\alpha)n}.$$ Thus the probability that $\gamma$ has $\omega$-length at most $\varepsilon n$ is less than a universal constant times $$\frac{\lambda^{(1-\varepsilon/\delta)n}}{(\varepsilon/\delta)^{(\varepsilon/\delta)n}(1-\varepsilon/\delta)^{(1-\varepsilon/\delta)n}}=\left(\frac{\lambda^{1-\varepsilon/\delta}}{(\varepsilon/\delta)^{\varepsilon/\delta}(1-\varepsilon/\delta)^{1-\varepsilon/\delta}}\right)^{n}.$$ Note that $$\lim_{\varepsilon\to 0}\frac{\lambda^{1-\varepsilon/\delta}}{(\varepsilon/\delta)^{\varepsilon/\delta}(1-\varepsilon/\delta)^{1-\varepsilon/\delta}}=\lambda.$$ In other words, we have proved that for all $\lambda>0$, there exists $\varepsilon>0$ such that
$$P\left(|\gamma|_{\omega}\leq \varepsilon|\gamma|\right)\leq (2\lambda)^{|\gamma|}$$ which is equivalent to the statement of the lemma. \end{proof} \begin{lem}\label{lem:upperboundpath}
Let $X$ be an infinite connected graph with bounded degree, and let $o$ be some vertex of $X$. Assume $\nu(\{0\})=0$. Then there exists $c>0$ such that for a.e.\ $\omega$, there exists $r_1=r_1(\omega)$ such that for all finite path $\gamma$ such that\footnote{Here $d(\gamma,o)$ denotes the distance between $o$ and the set of vertices $\{\gamma(0),\gamma(1),\ldots \}$.} $d(\gamma,o)\leq |\gamma|$, one has
$$|\gamma|_{\omega}\geq c|\gamma|-r_1.$$ \end{lem} \begin{proof} Let $q$ be an upper bound on the degree of $X$, and let $n\geq 1$ be some integer. Every path of length $n$ lying at distance at most $n$ from $o$ is such that $d(o,\gamma(0))\leq 2n$, hence such a path is determined by a vertex in the ball $B(o,2n)$, whose size is at most $q^{2n}+1$, and a path of length $n$ originated from this vertex. Therefore the number of such paths is at most $(q+1)^{3n}$.
On the other hand, we deduce from the previous lemma that for $c>0$ small enough, the probability that there exists some path $\gamma$ of length $n$, and at distance at most $n$ from $o$, and satisfying $|\gamma|_{\omega}\leq c|\gamma|$ is less than $1/(q+1)^{4n}$. Hence the lemma follows by Borel-Cantelli. \end{proof}
\section{Proof of Theorem \ref{thm:Main}} We let $\gamma_0$ be some Morse quasi-geodesic. First of all, we lose no generality in assuming that our Morse quasi-geodesic $\gamma_0$ is a bi-infinite geodesic of the graph $X$. We let $o= \gamma_0(0)$ be some vertex. We consider two sequences of vertices $(x_n)$ and $(y_n)$ on $\gamma_0$ which go to infinity in opposite directions.
We let $\Omega'\subset \Omega$ be a measurable subset of full measure such that the conclusions of Lemmas \ref{lem:LLN} and \ref{lem:upperboundpath} hold. For all $n$ and for all $\omega$, we pick measurably an $\omega$-geodesic $\gamma_{\omega}^n$ between $x_n$ and $y_n$. Note that Lemmas \ref{lem:LLN} and \ref{lem:upperboundpath} imply that such a geodesic exists: by Lemma \ref{lem:LLN}, we have that $d_{\omega}(x_n,y_n)$ is finite, and by Lemma \ref{lem:upperboundpath}, paths of length $\geq M$ have $\omega$-length going to infinity as $M\to \infty$.
If we can prove the second part of Theorem \ref{thm:Main}, i.e.\ that for all $\omega\in \Omega'$, there exists a constant $R_{\omega}>0$ such that for all $n$, $d(\gamma_{\omega}^n,o)\leq R_{\omega}$, then the first part of Theorem \ref{thm:Main} follows by a straightforward compactness argument\footnote{Indeed, observe that Lemma \ref{lem:upperboundpath} implies that $X_{\omega}$ is a.e.\ locally finite in the sense that $d_{\omega}$-bounded subsets of vertices are finite.}. So we shall assume by contradiction that for some $\omega\in \Omega'$, there exists a sequence $R_n$ going to infinity such that $\gamma_{\omega}^n$ avoids $B(o,100R_n)$.
\begin{lem} Assuming the above, there exist integers $p<q$ such that $$d(\gamma_{\omega}^n(p),\gamma_0)=d(\gamma_{\omega}^n(q),\gamma_0)=R_n,$$ and such that for all $p\leq k\leq q$, $$d(\gamma_{\omega}^n(k),\gamma_0)\geq R_n,$$ and $$d(\gamma_{\omega}^n(p),\gamma_{\omega}^n(q))\geq 10R_n.$$ \end{lem} \begin{proof} (Note that this is obvious from a picture.) Let $\gamma_0(i)$ and $\gamma_0(j)$ with $i<0<j$ be the two points at distance $100R_n$ from $o=\gamma_0(0)$. Since $\gamma_0$ is a geodesic, $\gamma_0((-\infty,i])$ and $\gamma_0([j,\infty))$ are at distance $200R_n$ from one another. Let $r$ be the first integer such that $d(\gamma_{\omega}^n(r), \gamma_0([j,\infty)))=100R_n$. By triangular inequality, $d(\gamma_{\omega}^n(r), \gamma_0((-\infty,i]))\geq100R_n$, and since we also have $d(\gamma_{\omega}^n(r), o)\geq 100R_n$, we deduce that $$d(\gamma_{\omega}^n(r), \gamma_0)\geq 50R_n.$$
We let $p$ and $q$ be respectively the largest integer $\leq r$ and the smallest integer $\geq r$ such that $$d(\gamma_{\omega}^n(p),\gamma_0)=d(\gamma_{\omega}^n(q),\gamma_0)=R_n.$$ Clearly, for all $p\leq k\leq q$, $$d(\gamma_{\omega}^n(k),\gamma_0)\geq R_n.$$ Moreover, recall that $\gamma_{\omega}^n$ avoids $B(o,100R_n)$. Hence if $x$ and $y$ are points on $\gamma_0$ such that $d(\gamma_{\omega}^n(p),x)=d(\gamma_{\omega}^n(q),y)=R_n$, we deduce by triangular inequality that $d(x,o)\geq 99R_n$ and $d(y,o)\geq 99R_n$. But since $x$ and $y$ lie on both sides of $o$, this implies that $d(x,y)\geq 198R_n$ (because $\gamma_0$ is a geodesic). Now by triangular inequality, we conclude that $$d(\gamma_{\omega}^n(p),\gamma_{\omega}^n(q))\geq d(x,y)-2R_n \geq 196R_n\geq 10R_n.$$ So the lemma follows. \end{proof}
\
\noindent{\bf End of the proof of Theorem \ref{thm:Main}}
We now let $i$ and $j$ be integers such that $$d(\gamma_{\omega}^n(p),\gamma_0(i))=d(\gamma_{\omega}^n(q),\gamma_0(j))=R_n.$$
Note that by triangular inequality, $j-i=|\gamma_0([i,j])|\geq R_n$.
By Lemmas \ref{lem:upperboundpath} and \ref{lem:MQG}, we have \begin{eqnarray*}
|\gamma_{\omega}^n([p,q])|_{\omega} & \geq & c|\gamma_{\omega}^n([p,q])|-r_1\\
& \geq & c\phi(R_n)|\gamma_0([i,j])|-r_1\\
& = & c\phi(R_n)(j-i)-r_1. \end{eqnarray*} On the other hand, since $\gamma_{\omega}^n$ is an $\omega$-geodesic between $x_n$ and $y_n$, we have \begin{eqnarray*}
|\gamma_{\omega}^n([p,q])|_{\omega} & \leq & 2R_n+ |\gamma_0([i,j])|_{\omega}\\
& \leq & 2R_n +2b|\gamma_0([i,j])|+r_0\\
& = & 2R_n +2b(j-i)+r_0,
\end{eqnarray*} where the second inequality follows from Lemma \ref{lem:LLN}.
Gathering these inequalities, we obtain (for $n$ large enough) $$(c\phi(R_n)-2b)R_n\leq (c\phi(R_n)-2b)(j-i)\leq 2R_n+r_1+r_0,$$ which yields a contradiction since $\phi(R_n)\to \infty$ as $n\to \infty$. This ends the proof of Theorem \ref{thm:Main}.
\section{Remarks and questions}
\begin{itemize}
\item (Cayley graphs) We do not know a single example of an infinite Cayley graph for which FPP a.s.\ admits no bi-infinite geodesics (to fix the ideas, assume the edge length distribution is supported on the interval $[1,2]$). \item (Adding dependence) Given a hyperbolic Cayley graph, rather than considering independent edge lengths it is natural to consider other group-invariant distributions. Under which natural conditions (mixing?) on this distribution do bi-infinite geodesics a.s.\ exist?
\item (Poisson Voronoi and other random models) A variant of random metric perturbation is obtained via Poisson Voronoi tiling of a measure metric space. It seems likely that our method of proof applies to the hyperbolic Poisson Voronoi tiling, see \cite{BPP}. Recently, other versions of random hyperbolic triangulations were constructed \cite{AR,C}. Since those are not obtained by perturbing an underlying hyperbolic space, our proof does not apply to this setting.
\item(Variance along Morse geodesics) We {\em conjecture} that under a suitable moment condition on the edge-length distribution, the variance of the random distance grows linearly along the Morse quasi-geodesic, unlike in Euclidean lattices \cite{BKS}. For lengths which are bounded away from zero and infinity, Morse's property ensures that geodesics remain at uniformly bounded distance from $\gamma$, hence reducing the problem to "filiform graphs", i.e.\ graphs quasi-isometric to $\mathbb{Z}$. A (very) special class of filiform graphs is dealt with in \cite{A} where a linear variance growth is proven.
\end{itemize}
\footnotesize
\end{document}
\begin{document}
{\begin{flushleft}\baselineskip9pt\scriptsize {\bf SCIENTIA}\newline Series A: {\it Mathematical Sciences}, Vol. ?? (2009), ?? \newline Universidad T\'ecnica Federico Santa Mar{\'\i}a \newline Valpara{\'\i}so, Chile \newline ISSN 0716-8446 \newline {\copyright\space Universidad T\'ecnica Federico Santa Mar{\'\i}a\space 2009} \end{flushleft}}
\setcounter{page}{1} \thispagestyle{empty}
\begin{abstract} The table of Gradshteyn and Ryzhik contains some integrals that can be expressed in terms of the incomplete beta function. We describe some elementary properties of this function and use them to check some formulas in the mentioned table. \end{abstract}
\maketitle
\newcommand{\nonumber}{\nonumber} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\int_{0}^{\infty}}{\int_{0}^{\infty}} \newcommand{\int_{0}^{1}}{\int_{0}^{1}} \newcommand{\int_{- \infty}^{\infty}}{\int_{- \infty}^{\infty}} \newcommand{\noindent}{\noindent} \newcommand{\mathop{\rm Re}\nolimits}{\mathop{\rm Re}\nolimits} \newcommand{\mathop{\rm Im}\nolimits}{\mathop{\rm Im}\nolimits}
\newtheorem{Definition}{\bf Definition}[section] \newtheorem{Thm}[Definition]{\bf Theorem} \newtheorem{Example}[Definition]{\bf Example} \newtheorem{Lem}[Definition]{\bf Lemma} \newtheorem{Note}[Definition]{\bf Note} \newtheorem{Cor}[Definition]{\bf Corollary} \newtheorem{Prop}[Definition]{\bf Proposition} \newtheorem{Problem}[Definition]{\bf Problem} \newtheorem{rem}[Definition]{\bf Remark} \numberwithin{equation}{section}
\maketitle
\section{Introduction} \label{intro} \setcounter{equation}{0}
The table of integrals \cite{gr} contains a large variety of definite integrals that involve the {\em incomplete beta } function defined here by the integral \begin{equation} \beta(a) = \int_{0}^{1} \frac{x^{a-1} \, dx }{1+x}. \label{beta-def} \end{equation} \noindent The convergence of the integral requires $a > 0$. Nielsen, who used this function extensively, attributed it to Stirling \cite{nielsen2}, page 17. The table \cite{gr} prefers to introduce first the {\em digamma function} \begin{equation} \psi(x) = \frac{d}{dx} \log \Gamma(x) = \frac{\Gamma'(x)}{\Gamma(x)}, \end{equation} \noindent and define $\beta(x)$ by the identity \begin{equation} \beta(x) = \frac{1}{2} \left( \psi \left( \tfrac{x+1}{2} \right) - \psi \left( \tfrac{x}{2} \right) \right). \label{alt-def} \end{equation} \noindent This definition appears as $\mathbf{8.370}$ and (\ref{beta-def}) appears as $\mathbf{3.222.1}$. Here \begin{equation} \Gamma(x) = \int_{0}^{\infty} t^{x-1} e^{-t} \, dt \end{equation} \noindent is the classical gamma function. Naturally, both starting points for $\beta$ are equivalent, and Corollary \ref{coro-1} proves (\ref{alt-def}). The value \begin{equation} \gamma := -\psi(1) = -\Gamma'(1) \end{equation} \noindent is the well-known {\em Euler's constant}. \\
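As a quick numerical sanity check (this plays no role in the proofs), the two expressions (\ref{beta-def}) and (\ref{alt-def}) are easy to compare in Python; the short sketch below assumes the \texttt{scipy} library.
\begin{verbatim}
from scipy.integrate import quad
from scipy.special import digamma

a = 1.7  # any a > 0
integral, _ = quad(lambda x: x**(a - 1) / (1 + x), 0, 1)
digamma_form = 0.5 * (digamma((a + 1) / 2) - digamma(a / 2))
print(integral, digamma_form)  # the two printed values agree
\end{verbatim}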
In this paper we will prove elementary properties of this function and use them to evaluate some definite integrals in \cite{gr}.
\section{Some elementary properties} \label{sec-elem} \setcounter{equation}{0}
The incomplete beta function admits a representation by series.
\begin{Prop} Let $a \in \mathbb{R}^{+}$. Then \begin{equation} \beta(a) = \sum_{k=0}^{\infty} \frac{(-1)^{k}}{a+k}. \end{equation} \end{Prop} \begin{proof} The result follows from the expansion of $1/(1+x)$ in (\ref{beta-def}) as a geometric series. \end{proof}
\begin{Cor} \label{coro-1} The incomplete beta function is given by \begin{equation} \beta(a) = \frac{1}{2} \left[ \psi \left( \frac{a+1}{2} \right) - \psi \left( \frac{a}{2} \right) \right]. \label{rela-11} \end{equation} \noindent This is $\mathbf{8.370}$ in \cite{gr}. \end{Cor} \begin{proof} The expansion for the digamma function $\psi$ \begin{equation} \psi(t) = -\gamma - \sum_{k=0}^{\infty} \left( \frac{1}{t+k} - \frac{1}{k+1} \right) \end{equation} \noindent has been discussed in \cite{moll-gr10}. Then \begin{equation} \psi \left( \frac{a}{2} \right) = -\gamma - \sum_{k=0}^{\infty} \left( \frac{2}{a + 2k} - \frac{1}{k+1} \right) \end{equation} \noindent and \begin{equation} \psi \left( \frac{a+1}{2} \right) = -\gamma - \sum_{k=0}^{\infty} \left( \frac{2}{a + 2k+1} - \frac{1}{k+1} \right). \end{equation} \noindent The identity (\ref{rela-11}) comes from adding these two expressions. \end{proof}
These properties are now employed to prove some functional relations of the incomplete beta function. The proofs will employ the identities \begin{eqnarray} \psi(x+1) & = & \frac{1}{x} + \psi(x) \label{psi-1} \\ \psi(x) - \psi(1-x) & = & - \pi \cot( \pi x) \label{psi-2} \\ \psi(x + \tfrac{1}{2} ) - \psi(\tfrac{1}{2} -x) & = & \pi \tan( \pi x) \label{psi-3} \end{eqnarray} \noindent that were established in \cite{moll-gr10}. \\
\begin{rem} Several of the evaluations presented here will employ the special values \begin{equation} \psi(n+1) = -\gamma + \sum_{k=1}^{n} \frac{1}{k}, \end{equation} \noindent that appears as $\mathbf{8.365.4}$, and \begin{equation} \psi \left( \tfrac{1}{2} \pm n \right) = -\gamma + 2 \left( \sum_{k=1}^{n} \frac{1}{2k-1} - \ln 2 \right), \end{equation} \noindent that appears as $\mathbf{8.366.3}$. \\
Many of the formulas in Section $\mathbf{4.271}$ employ the values \begin{equation} \psi'(n) = \frac{\pi^{2}}{6} - \sum_{k=1}^{n-1} \frac{1}{k^{2}}, \label{formula-211} \end{equation} \noindent that appear as $\mathbf{8.366.11}$ and also $\mathbf{8.366.12/13}$: \begin{equation} \psi'( \tfrac{1}{2} \pm n ) = \frac{\pi^{2}}{2} \mp 4 \sum_{k=1}^{n} \frac{1}{(2k-1)^{2}}. \label{formula-212} \end{equation}
Higher order derivatives are given by \begin{eqnarray} \psi^{(n)}(1) & = & (-1)^{n+1} n! \zeta(n+1) \text{ and } \nonumber \\ \psi^{(n)}(\tfrac{1}{2}) & = & (-1)^{n+1} n! (2^{n+1}-1)\zeta(n+1). \nonumber \end{eqnarray} \end{rem}
\begin{Prop} The incomplete beta function satisfies \begin{eqnarray} \beta(x+1) & = & \frac{1}{x} - \beta(x), \label{beta-1} \\ \beta(1-x) & = & \frac{\pi}{\sin \pi x} - \beta(x), \label{beta-2} \\ \beta(x+1) & = & \frac{1}{x} - \frac{\pi}{\sin \pi x} + \beta(1-x). \label{beta-3} \end{eqnarray} \end{Prop} \begin{proof} Using (\ref{rela-11}) we have \begin{eqnarray} \beta(x+1) & = & \frac{1}{2} \left[ \psi \left( \frac{x+2}{2} \right) - \psi \left( \frac{x+1}{2} \right) \right] = \frac{1}{2} \left[ \psi \left( \frac{x}{2} + 1 \right) - \psi \left( \frac{x+1}{2} \right) \right] \nonumber \\
& = & \frac{1}{2} \left[ \frac{2}{x} + \psi \left( \frac{x}{2} \right) -
\psi \left( \frac{x+1}{2} \right) \right] \nonumber \\
& = & \frac{1}{x} - \beta(x). \nonumber \end{eqnarray} \noindent This establishes (\ref{beta-1}). To prove (\ref{beta-2}) we start with \begin{eqnarray} \beta(x) + \beta(1-x) & = & \frac{1}{2} \left[ \psi \left( \frac{1}{2} + \frac{x}{2} \right) - \psi \left( \frac{x}{2} \right) + \psi \left( 1 -\frac{x}{2} \right) - \psi \left( \frac{1}{2} - \frac{x}{2} \right) \right]. \nonumber \end{eqnarray} \noindent The formula (\ref{beta-2}) now follows from (\ref{psi-2}) and (\ref{psi-3}). \end{proof}
\section{Some elementary changes of variables} \label{sec-elemchan} \setcounter{equation}{0}
The class of integrals evaluated here are obtained from (\ref{beta-def}) by some elementary manipulations.
\begin{Example} \label{ex21} The change $x = t^{p}$ in (\ref{beta-def}) yields \begin{equation} \beta(a) = p \int_{0}^{1} \frac{t^{ap-1} \, dt}{1+t^{p}}. \end{equation} \noindent Replace $a$ by $\frac{a}{p}$ to obtain $\mathbf{3.241.1}$: \begin{equation} \int_{0}^{1} \frac{t^{a-1} \, dt}{1+t^{p}} = \frac{1}{p} \beta \left( \frac{a}{p} \right). \label{32411} \end{equation} \end{Example}
\begin{Example} The special case $p=2$ in Example \ref{ex21} gives \begin{equation} \beta(a) = 2 \int_{0}^{1} \frac{t^{2a-1} \, dt}{1+t^{2}}. \end{equation} \noindent Choose $a = \frac{b+1}{2}$, and relabel the variable of integration as $x$, to obtain $\mathbf{3.249.4}$: \begin{equation} \int_{0}^{1} \frac{ x^{b} \, dx}{1+x^{2}} = \frac{1}{2} \beta \left( \frac{b+1}{2} \right). \end{equation} \end{Example}
\begin{Example} The evaluation of $\mathbf{3.251.7}$: \begin{equation} \int_{0}^{1} \frac{x^{a} \, dx}{(1+x^{2})^{2}} = -\frac{1}{4} + \frac{a-1}{4} \beta \left( \frac{a-1}{2} \right) \label{32517} \end{equation} \noindent comes from the change of variables $t = x^{2}$ and integration by parts. Indeed, \begin{eqnarray} \int_{0}^{1} \frac{x^{a} \, dx}{(1+x^{2})^{2}} & = & -\frac{1}{2} \int_{0}^{1} t^{(a-1)/2} \, \frac{d}{dt} \frac{1}{1+t} \, dt \nonumber \\ & = & -\frac{1}{4} + \frac{a-1}{4} \int_{0}^{1} \frac{t^{(a-3)/2} \, dt} {1+t}, \nonumber \end{eqnarray} \noindent and (\ref{32517}) has been established. \end{Example}
\begin{Example} Formula $\mathbf{3.231.2}$ states that \begin{equation} \int_{0}^{1} \frac{x^{p-1} + x^{-p}}{1+x} \, dx = \frac{\pi}{\sin \pi p}. \end{equation} \noindent The integrals is recognized as $\beta(p) + \beta(1-p)$ and its value follows from (\ref{beta-2}). Similarly, $\mathbf{3.231.4}$ is \begin{equation} \int_{0}^{1} \frac{x^{p} - x^{-p}}{1+x} \, dx = \frac{1}{p} - \frac{\pi} {\sin \pi p}. \end{equation} \noindent The integral is now recognized as $\beta(1+p) - \beta(1-p)$, and the result follows from (\ref{beta-3}). \end{Example}
\begin{Example} The evaluation of $\mathbf{3.244.1}$: \begin{equation} \int_{0}^{1} \frac{x^{p-1} + x^{q-p-1}}{1+x^{q}} \, dx = \frac{\pi}{q} \text{cosec} \frac{p \pi}{q} \end{equation} \noindent is \begin{equation} I = \frac{1}{q} \left( \beta(p/q) + \beta(1 - p/q) \right) \end{equation} \noindent according to (\ref{32411}). The result now follows from (\ref{beta-2}). \end{Example}
\begin{Example} The evaluation of $\mathbf{3.269.2}$: \begin{equation} \int_{0}^{1} x \, \frac{x^{p} - x^{-p}}{1+x^{2}} \, dx = \frac{1}{p} - \frac{\pi}{2 \sin( \pi p/2)} \end{equation} \noindent is obtained by the change of variables $t = x^{2}$, that produces \begin{equation} I = \frac{1}{2} \int_{0}^{1} \frac{t^{p/2} - t^{-p/2}}{1+t} \, dt = \frac{1}{2} \left[ \beta \left( \frac{p}{2} + 1 \right) -
\beta \left( 1 -\frac{p}{2} \right) \right]. \end{equation} \noindent The result now follows from (\ref{beta-3}). \end{Example}
\section{Some exponential integrals} \label{sec-expo} \setcounter{equation}{0}
In this section we present some exponential integrals that may be evaluated in terms of the $\beta$-function.
\begin{Example} The change of variables $x = e^{-t}$ in (\ref{beta-def}) gives \begin{equation} \beta(a) = \int_{0}^{\infty} \frac{e^{-at} \, dt}{1+e^{-t}}. \label{expo-1} \end{equation} \noindent This appears as $\mathbf{3.311.2}$ in \cite{gr}. \end{Example}
\begin{Example} The evaluation of $\mathbf{3.311.13}$: \begin{equation} \int_{0}^{\infty} \frac{e^{-px} + e^{-qx}}{1+e^{-(p+q)x}} \, dx = \frac{\pi}{p+q} \text{cosec}\left( \frac{\pi p }{p+q} \right) \end{equation} \noindent is achieved by the change of variables $t = (p+q)x$ that produces \begin{eqnarray} I & = & \frac{1}{p+q} \int_{0}^{\infty} \frac{e^{-pt/(p+q)}}{1+e^{-t}} \, dt +
\frac{1}{p+q} \int_{0}^{\infty} \frac{e^{-qt/(p+q)}}{1+e^{-t}} \, dt \nonumber \\ & = & \frac{1}{p+q} \left[ \beta \left( \frac{p}{p+q} \right) + \beta \left( 1 - \frac{p}{p+q} \right) \right]. \nonumber \end{eqnarray} \noindent The result now comes from (\ref{beta-3}). \end{Example}
\section{Some trigonometrical integrals} \label{sec-trigo} \setcounter{equation}{0}
In this section we present the evaluation of some trigonometric integrals using the $\beta$-function.
\begin{Example} The change of variables $x = \tan^{2}t$ in (\ref{beta-def}) gives \begin{equation} \beta(a) = 2 \int_{0}^{\pi/4} \tan^{2a-1}t \, dt. \end{equation} \noindent Introduce the new parameter $b = 2a-1$ to obtain $\mathbf{3.622.2}$: \begin{equation} \int_{0}^{\pi/4} \tan^{b}t \, dt = \frac{1}{2} \beta \left( \frac{b+1}{2} \right). \label{36222} \end{equation} \end{Example}
\begin{Example} The change of variables $x = \tan t$ in (\ref{32517}) gives \begin{equation} \int_{0}^{\pi/4} \tan^{a}t \, \cos^{2}t \, dt = -\frac{1}{4} + \frac{a-1}{4} \beta \left( \frac{a-1}{2} \right). \label{form-1} \end{equation} \noindent Now use (\ref{beta-1}) to obtain \begin{equation} \beta \left( \frac{a-1}{2} \right) = \frac{2}{a-1} - \beta \left( \frac{a+1}{2} \right), \end{equation} \noindent that converts (\ref{form-1}) to \begin{equation} \int_{0}^{\pi/4} \tan^{a}t \, \cos^{2}t \, dt = \frac{1}{4} + \frac{1-a}{4} \beta \left( \frac{a+1}{2} \right). \label{form-2} \end{equation} \noindent This is the form in which $\mathbf{3.623.3}$ appears in \cite{gr}. Using this form and (\ref{36222}) we obtain $\mathbf{3.623.2}$: \begin{equation} \int_{0}^{\pi/4} \tan^{a}t \, \sin^{2}t \, dt = -\frac{1}{4} + \frac{1+a}{4} \beta \left( \frac{a+1}{2} \right). \label{form-3} \end{equation} \end{Example}
\begin{Example} The evaluation of $\mathbf{3.624.1}$: \begin{equation} \int_{0}^{\pi/4} \frac{\sin^{p}x \, dx}{\cos^{p+2}x} = \frac{1}{p+1} \end{equation} \noindent can be done by writing the integral as \begin{equation} I = \int_{0}^{\pi/4} \tan^{p+2}x \, dx + \int_{0}^{\pi/4} \tan^{p}x \, dx. \end{equation} \noindent These are evaluated using (\ref{36222}) to obtain \begin{equation} I = \frac{1}{2} \beta \left( \frac{p+3}{2} \right) + \frac{1}{2}
\beta \left( \frac{p+1}{2} \right). \end{equation} \noindent The rule (\ref{beta-1}) completes the proof. \end{Example}
\begin{Example} The integral $\mathbf{3.651.2}$ \begin{equation} \int_{0}^{\pi/4} \frac{\tan^{\mu}x \, dx}{1- \sin x \cos x} = \frac{1}{3} \left( \beta \left( \frac{\mu+2}{3} \right) + \beta \left( \frac{\mu+1}{3} \right) \right) \label{36512} \end{equation} \noindent can be established directly using the integral definition of $\beta$ given in (\ref{beta-def}). Simply observe that dividing the numerator and denominator of the integrand by $\cos^{2}x$ yields, after the change of variables $t = \tan x$, the identity \begin{eqnarray} \int_{0}^{\pi/4} \frac{\tan^{\mu}x \, dx}{1- \sin x \cos x} & = & \int_{0}^{\pi/4} \frac{\tan^{\mu}x} {(\sec^{2}x- \tan x)} \frac{dx}{\cos^{2}x} \nonumber \\ & = & \int_{0}^{1} \frac{t^{\mu} \, dt}{t^{2}-t+1} \nonumber \\ & = & \int_{0}^{1} \frac{t^{\mu+1} + t^{\mu}}{t^{3}+1} \, dt. \nonumber \end{eqnarray} \noindent The change of variables $t = s^{1/3}$ gives the result. \\
The evaluation of $\mathbf{3.651.1}$ \begin{equation} \int_{0}^{\pi/4} \frac{\tan^{\mu}x \, dx}{1+ \sin x \cos x} = \frac{1}{3} \left( \psi \left( \frac{\mu+2}{3} \right) - \psi \left( \frac{\mu+1}{3} \right) \right) \label{36511} \end{equation} \noindent can be established along the same lines. This part employs the representation $\mathbf{8.361.7}$: \begin{equation} \psi(z) = \int_{0}^{1} \frac{x^{z-1}-1}{x-1} \, dx - \gamma \end{equation} \noindent established in \cite{moll-gr10}. \end{Example}
\begin{Example} The elementary identity \begin{equation} \frac{1}{1 - \sin^{2}x \cos^{2}x} = \frac{1}{2} \left( \frac{1}{1+ \sin x \cos x} + \frac{1}{1 - \sin x \cos x} \right) \end{equation} \noindent and the evaluations given in (\ref{36511}) and (\ref{36512}) give a proof of $\mathbf{3.656.1}$: \begin{equation} \int_{0}^{\pi/4} \frac{\tan^{\mu}x \, dx}{1 - \sin^{2}x \cos^{2}x} = \tfrac{1}{12} \left( - \psi \left( \tfrac{\mu+1}{6} \right) - \psi \left( \tfrac{\mu+2}{6} \right) + \psi \left( \tfrac{\mu+4}{6} \right) + \psi \left( \tfrac{\mu+5}{6} \right) + 2 \psi \left( \tfrac{\mu+2}{3} \right) - 2 \psi \left( \tfrac{\mu+1}{3} \right) \right). \end{equation} \end{Example}
\begin{Example} The final integral in this section is $\mathbf{3.635.1}$: \begin{equation} \int_{0}^{\pi/4} \cos^{\mu-1}(2x) \, \tan x \, dx = \frac{1}{2} \beta(\mu). \end{equation} \noindent This is easy: start with \begin{equation} \tan x = \frac{\sin x}{\cos x} = \frac{2 \sin x \cos x}{2 \cos^{2}x} = \frac{\sin 2x}{1+ \cos 2x}, \end{equation} \noindent and use the change of variables $t = \cos 2x$ to produce the result. \end{Example}
\section{Some hyperbolic integrals} \label{sec-hyper} \setcounter{equation}{0}
This section contains the evaluation of some hyperbolic integrals using the $\beta$-function.
\begin{Example} The integral (\ref{expo-1}) can be written as \begin{equation} \beta(a) = \int_{0}^{\infty} \frac{e^{t(1/2-a)} \, dt}{e^{t/2} + e^{-t/2}}, \end{equation} \noindent and with $t = 2y$ and $b = 2a-1$, we obtain $\mathbf{3.541.6}$: \begin{equation} \int_{0}^{\infty} \frac{e^{-by} \, dy }{\cosh y} = \beta \left( \frac{b+1}{2} \right). \label{35416} \end{equation} \end{Example}
\begin{Example} Integration by parts produces \begin{eqnarray} \int_{0}^{\infty} \frac{e^{-ax} \, dx}{\cosh^{2}x} & = & 2 \int_{0}^{\infty} e^{-ax} \frac{d}{dx} \frac{1}{1+e^{-2x}} \, dx \nonumber \\ & = & -1 + 2a \int_{0}^{\infty} \frac{e^{-ax} \, dx}{1+ e^{-2x}}. \nonumber \end{eqnarray} \noindent The change of variables $t = 2x$ now gives the evaluation of $\mathbf{3.541.8}$: \begin{equation} \int_{0}^{\infty} \frac{e^{-ax} \, dx}{\cosh^{2}x} = a \beta \left( \frac{a}{2} \right) -1. \end{equation} \end{Example}
\begin{Example} The change of variables $t = e^{-x}$ gives \begin{equation} \int_{0}^{\infty} e^{-ax} \tanh x \, dx = \int_{0}^{1} \frac{t^{a-1}-t^{a}} {1+t^{2}} \, dt, \end{equation} \noindent and with $s = t^{2}$ we get \begin{eqnarray} I & = & \frac{1}{2} \int_{0}^{1} \frac{s^{a/2-1} - s^{(a-1)/2}}{1+s} \, ds \nonumber \\ & = & \frac{1}{2} \left[ \beta \left( \frac{a}{2} \right) - \beta \left( \frac{a}{2} + 1 \right) \right]. \nonumber \end{eqnarray} \noindent The transformation rule (\ref{beta-1}) gives the evaluation of $\mathbf{3.541.7}$: \begin{equation} \int_{0}^{\infty} e^{-ax} \tanh x \, dx = \beta \left( \frac{a}{2} \right) - \frac{1}{a}. \end{equation} \end{Example}
\section{Differentiation formulas} \label{sec-diff} \setcounter{equation}{0}
\begin{Example} Differentiating (\ref{beta-def}) with respect to the parameter $a$ yields \begin{equation} \int_{0}^{1} \frac{x^{a-1} \, \ln x}{1+x} \, dx = \beta'(a), \end{equation} \noindent that appears as $\mathbf{4.251.3}$ in \cite{gr}. \end{Example}
\begin{Example} Differentiating (\ref{32411}) $n$ times with respect to the parameter $a$ produces $\mathbf{4.271.16}$ written in the form \begin{equation} \int_{0}^{1} \frac{x^{a-1} \, \ln^{n}x}{1+x^{p}} \, dx = \frac{1}{p^{n+1}} \beta^{(n)}\left( \frac{a}{p} \right). \label{427116} \end{equation} \noindent The choice $n=1$ now gives formula $\mathbf{4.254.4}$ in \cite{gr}: \begin{equation} \int_{0}^{1} \frac{x^{a-1} \, \ln x}{1+x^{p}} \, dx = \frac{1}{p^{2}} \beta'\left( \frac{a}{p} \right). \label{42544} \end{equation} \end{Example}
\begin{Example} The special case $n=1, \, a=1$ and $p=1$ in (\ref{427116}) produces the elementary integral $\mathbf{4.231.1}$: \begin{equation} \int_{0}^{1} \frac{\ln x \, dx}{1+x} = - \frac{\pi^{2}}{12}. \label{42311} \end{equation} \noindent In this evaluation we have employed the values \begin{equation} \psi'(1) = \zeta(2) = \frac{\pi^{2}}{6}, \text{ and } \psi'(1/2) = \frac{\pi^{2}}{2}, \end{equation} \noindent that appear in (\ref{formula-211}) and (\ref{formula-212}). \end{Example}
\begin{Example} Formula $\mathbf{4.231.14}$: \begin{equation} \int_{0}^{1} \frac{x \, \ln x }{1+x^{2}} \, dx = - \frac{\pi^{2}}{48} \end{equation} \noindent comes from (\ref{42544})
by choosing the parameters $n=1, \, a=2$ and $p=2$. The values of $\psi'(1)$ and $\psi'(1/2)$ are employed again. Naturally, this evaluation also comes from (\ref{42311}) via the change of variables $x^{2} \mapsto x$. \end{Example}
\begin{Example} The choice $n=a=1$ and $p=2$ in (\ref{42544}) and the values \begin{equation} \psi' \left( \tfrac{1}{4} \right) = \pi^{2} + 8G \text{ and } \psi' \left( \tfrac{3}{4} \right) = \pi^{2} - 8G, \end{equation} \noindent where $G$ is the {\em Catalan constant} defined by \begin{equation} G = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)^{2}} \end{equation} \noindent yield the evaluation of $\mathbf{4.231.12}$: \begin{equation} \int_{0}^{1} \frac{\ln x \, dx}{1+x^{2}} = -G. \end{equation} \noindent The change of variables $x = t/a$, with $a>0$, and the elementary integral \begin{equation} \int_{0}^{a} \frac{dt}{t^{2}+a^{2}} = \frac{\pi}{4a}, \end{equation} \noindent give the evaluation of $\mathbf{4.231.11}$: \begin{equation} \int_{0}^{a} \frac{\ln x \, dx}{x^{2}+a^{2}} = \frac{\pi \ln a - 4G}{4a}. \end{equation} \end{Example}
\begin{Example} Now choose $n=1, \, a=2$ and $p=1$ in (\ref{42544}) and use the value $\psi'(3/2) = \pi^{2}/2-4$ given in (\ref{formula-212}) to obtain $\mathbf{4.231.19}$: \begin{equation} \int_{0}^{1} \frac{x \, \ln x }{1+x} \, dx = \frac{\pi^{2}}{12} -1. \end{equation} \noindent Combining this with (\ref{42311}) gives $\mathbf{4.231.20}$: \begin{equation} \int_{0}^{1} \frac{1-x}{1+x} \, \ln x \, dx = 1 - \frac{\pi^{2}}{6}. \end{equation} \end{Example}
\begin{Example} The values \begin{equation} \psi^{(2)} \left( \tfrac{1}{4} \right) = -2 \pi^{3} - 56 \zeta(3) \text{ and } \psi^{(2)} \left( \tfrac{3}{4} \right) = 2 \pi^{3} - 56 \zeta(3), \end{equation} \noindent given in \cite{srichoi}, are now used to produce the evaluation of $\mathbf{4.261.6}$: \begin{equation} \int_{0}^{1} \frac{\ln^{2}x \, dx}{1+x^{2}} = \frac{\pi^{3}}{16}. \end{equation} \end{Example}
\begin{Example} The relation \begin{equation} \psi^{(n)}(1-z) + (-1)^{n+1} \psi^{(n)}(z) = (-1)^{n} \pi \frac{d^{n}}{dz^{n}} \cot \pi z, \end{equation} \noindent and the choice $n=4, \, a=1$ and $p=2$ in (\ref{42544}) produces \begin{eqnarray} \int_{0}^{1} \frac{\ln^{4}x \, dx}{1+x^{2}} & = & \frac{1}{2^{5}} \beta^{(4)} \left( \tfrac{1}{2} \right) \nonumber \\ & = & \frac{1}{1024} \left( \psi^{(4)} \left( \tfrac{3}{4} \right) - \psi^{(4)} \left( \tfrac{1}{4} \right) \right) \nonumber \\ & = & \frac{1}{1024} \left( - \pi \frac{d^{4}}{dz^{4}} \cot \pi z
\Big|_{z=3/4} \right). \nonumber \end{eqnarray} \noindent This yields the evaluation of $\mathbf{4.263.2}$: \begin{equation} \int_{0}^{1} \frac{\ln^{4}x \, dx }{1+x^{2}} = \frac{5 \pi^{5}}{64}. \end{equation}
The evaluation of $\mathbf{4.265}$: \begin{equation} \int_{0}^{1} \frac{\ln^{6}x \, dx }{1+x^{2}} = \frac{61 \pi^{7}}{256}, \end{equation} \noindent can be checked by the same method. \end{Example}
\begin{Example} Now choose $n \in \mathbb{N}$ and apply (\ref{427116}), with $\ln^{2}x$, $a= n +1$ and $p=1$, to obtain the expression \begin{equation} I := \int_{0}^{1} \frac{x^{n} \, \ln^{2}x}{1+x} \, dx = \beta^{(2)}(n+1). \end{equation} \noindent This is now expressed in terms of the $\psi$-function and then simplified employing the relation \begin{equation} \psi^{(m)}(z) = (-1)^{m+1} m! \zeta(m+1,z), \end{equation} \noindent with the Hurwitz zeta function \begin{equation} \zeta(s,z) := \sum_{k=0}^{\infty} \frac{1}{(z+k)^{s}}. \end{equation} \noindent We conclude that \begin{equation} I = \frac{1}{4} \left( \zeta \left( 3, \frac{n+1}{2} \right) -
\zeta \left( 3, \frac{n+2}{2} \right) \right). \end{equation} \noindent The elementary identity \begin{equation} \zeta \left( s, \tfrac{a}{2} \right) - \zeta \left( s, \tfrac{a+1}{2} \right) = 2^{s} \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(k+a)^{s}}, \end{equation} \noindent is now used with $s=3$ and $a=n+1$ to obtain \begin{equation} \int_{0}^{1} \frac{x^{n} \, \ln^{2}x \, dx }{1+x} = 2 \sum_{k=0}^{\infty} \frac{(-1)^{k}}{(k+n+1)^{3}}. \end{equation} \noindent This is finally transformed to the form \begin{equation} \int_{0}^{1} \frac{x^{n} \, \ln^{2}x \, dx }{1+x} = (-1)^{n} \left( \frac{3}{2} \zeta(3) + 2 \sum_{k=1}^{n} \frac{(-1)^{k}}{k^{3}} \right). \end{equation} \noindent This is $\mathbf{4.261.11}$ of \cite{gr}. \\
The same method produces $\mathbf{4.262.4}$: \begin{equation} \int_{0}^{1} \frac{x^{n} \, \ln^{3}x \, dx }{1+x} = (-1)^{n+1} \left( \frac{7 \pi^{4}}{120} - 6 \sum_{k=0}^{n-1} \frac{(-1)^{k}}{(k+1)^{4}} \right). \end{equation} \end{Example}
\begin{Example} The method of the previous example yields the value of $\mathbf{4.262.1}$: \begin{equation} \int_{0}^{1} \frac{\ln^{3}x \, dx}{1+x} = - \frac{7 \pi^{4}}{120}. \end{equation} \noindent Here we use $\psi^{(3)}(1) = \pi^{4}/15$ and $\psi^{(3)}(1/2) = \pi^{4}$. \\
Similarly, $\psi^{(5)}(1) = 8 \pi^{6}/63$ and $\psi^{(5)}(1/2) = 8 \pi^{6}$ yields $\mathbf{4.264.1}$: \begin{equation} \int_{0}^{1} \frac{\ln^{5}x \, dx}{1+x} = - \frac{31 \pi^{6}}{252}, \end{equation} \noindent and $\psi^{(7)}(1) = 8 \pi^{8}/15$ and $\psi^{(7)}(1/2) = 136 \pi^{8}$ yields $\mathbf{4.266.1}$: \begin{equation} \int_{0}^{1} \frac{\ln^{7}x \, dx}{1+x} = - \frac{127 \pi^{8}}{240}. \end{equation} \end{Example}
\begin{Example} A combination of the evaluations given above produces $\mathbf{4.261.2}$: \begin{equation} \int_{0}^{1} \frac{\ln^{2}x \, dx}{1-x+x^{2}} = \frac{10 \pi^{3}}{81 \sqrt{3}}. \end{equation} \noindent Indeed, \begin{eqnarray} \int_{0}^{1} \frac{\ln^{2}x \, dx}{1-x+x^{2}} & = & \int_{0}^{1} \frac{1+x}{1+x^{3}} \ln^{2}x \, dx \nonumber \\ & = & \int_{0}^{1} \frac{\ln^{2}x \, dx}{1+x^{3}} + \int_{0}^{1} \frac{x \, \ln^{2}x \, dx}{1+x^{3}} \nonumber \\ & = & \frac{1}{27} \left( \beta^{(2)}(\tfrac{1}{3} ) + \beta^{(2)}(\tfrac{2}{3} ) \right) \nonumber \\ & = & \frac{1}{216} \left( \psi^{(2)} \left( \tfrac{2}{3} \right) - \psi^{(2)} \left( \tfrac{1}{3} \right) + \psi^{(2)} \left( \tfrac{5}{6} \right) - \psi^{(2)} \left( \tfrac{1}{6} \right) \right) \nonumber \\ & = & \frac{\pi}{216} \left(\frac{d^{2}}{dz^{2}} \cot \pi z
\Big|_{z=1/3} + \frac{d^{2}}{dz^{2}} \cot \pi z
\Big|_{z=1/6} \right) \nonumber \\ & = & \frac{\pi}{216} \left( \frac{8 \pi^{2}}{3 \sqrt{3}} + 8 \sqrt{3} \pi^{2} \right) = \frac{10 \pi^{3}}{81 \sqrt{3}}. \nonumber \end{eqnarray} \end{Example}
\begin{Example} Replace $n$ by $2n$ in (\ref{427116}) and set $a=p=1$ to produce \begin{eqnarray} \int_{0}^{1} \frac{\ln^{2n}x \, dx}{1+x} & = & \beta^{(2n)}(1) \nonumber \\
& = & \frac{1}{2^{2n+1}} \left( \psi^{(2n)}(1) - \psi^{(2n)} \left( \tfrac{1}{2} \right) \right) \nonumber \\ & = & \frac{2^{2n}-1}{2^{2n}} (2n)! \zeta(2n+1). \nonumber \end{eqnarray} \noindent This appears as $\mathbf{4.271.1}$. \end{Example}
\begin{Example} The change of variables $t = bx$ in (\ref{427116}) produces \begin{eqnarray} \int_{0}^{b} \frac{t^{a-1} \, \ln t }{b^{p} + t^{p}} \, dt & = & \frac{b^{a-p}}{p^{2}} \beta' \left( \tfrac{a}{p} \right) + \ln b \int_{0}^{b} \frac{t^{a-1} \, dt}{b^{p} + t^{p}} \nonumber \\
& = & \frac{b^{a-p}}{p^{2}} \beta' \left( \tfrac{a}{p} \right) + \ln b \frac{b^{a-p}}{p} \beta \left( \tfrac{a}{p} \right). \nonumber \end{eqnarray} \noindent The last integral was evaluated using (\ref{32411}). \\
Differentiate this identity with respect to the parameter $b$ to obtain \begin{eqnarray} \int_{0}^{b} \frac{t^{a-1} \, \ln t}{(b^{p}+t^{p})^{2}} \, dt & = & \frac{b^{a-2p} \ln b}{2p} + \frac{p-a}{p^{3}}b^{a-2p} \beta' \left( \tfrac{a}{p} \right) \label{mess-1} \\ & - & \frac{b^{a-2p}}{p^{2}} ( 1 + (a-p) \ln b ) \beta \left( \tfrac{a}{p} \right). \nonumber \end{eqnarray} \noindent The special case $a=b=p=1$ yields $\mathbf{4.231.6}$: \begin{equation} \int_{0}^{1} \frac{\ln x \, dx}{(1+x)^{2}} = -\beta(1) = - \ln 2. \end{equation} \noindent Similarly, the choice $a=2, \, b=1$ and $p=2$ yields $\mathbf{4.234.2}$: \begin{equation} \int_{0}^{1} \frac{x \ln x \, dx}{(1+x^{2})^{2}} = -\frac{1}{4} \beta(1) = - \frac{\ln 2}{4}. \end{equation} \end{Example}
\begin{Example} In this last example of this section we present an evaluation of $\mathbf{4.234.1}$: \begin{equation} \int_{1}^{\infty} \frac{\ln x \, dx}{(1+x^{2})^{2}} = \frac{G}{2} - \frac{\pi}{8}, \label{42341} \end{equation} \noindent using the methods developed here. We begin with the change of variables $x \mapsto 1/x$ to transform the problem to the interval $[0,1]$. We have \begin{equation} \int_{1}^{\infty} \frac{\ln x \, dx}{(1+x^{2})^{2}} = - \int_{0}^{1} \frac{x^{2} \ln x \, dx}{(1+x^{2})^{2}}. \end{equation} \noindent Now choose $a=3, \, b=1$ and $p=2$ in (\ref{mess-1}) to obtain \begin{equation} \int_{0}^{1} \frac{x^{2} \ln x \, dx}{(1+x^{2})^{2}} = - \frac{1}{8} \beta' \left( \frac{3}{2} \right) - \frac{1}{4} \beta \left( \frac{3}{2} \right). \end{equation} \noindent The value of (\ref{42341}) now follows from \begin{eqnarray} \tfrac{1}{4} \beta \left( \tfrac{3}{2} \right) & = & \tfrac{1}{8} \left( \psi \left( \tfrac{5}{4} \right) - \psi \left( \tfrac{3}{4} \right) \right) \nonumber \\ & = & \tfrac{1}{8} \left( 4 + \psi \left( \tfrac{1}{4} \right) - \psi \left( \tfrac{3}{4} \right) \right) \nonumber \\ & = & \tfrac{1}{2} - \tfrac{\pi}{8}, \nonumber \end{eqnarray} \noindent and \begin{eqnarray} \tfrac{1}{8} \beta' \left( \tfrac{3}{2} \right) & = & \tfrac{1}{32} \left( \psi' \left( \tfrac{5}{4} \right) - \psi' \left( \tfrac{3}{4} \right) \right) \nonumber \\ & = & \tfrac{1}{32} \left( \psi' \left( \tfrac{1}{4} \right) - \psi' \left( \tfrac{3}{4} \right) - 16 \right) \nonumber \\ & = & \tfrac{1}{32} \left( \zeta(2, \tfrac{1}{4}) - \zeta(2, \tfrac{3}{4}) - 16 \right) \nonumber \\ & = & \tfrac{G}{2} - \tfrac{1}{2}. \nonumber \end{eqnarray} \end{Example}
\section{One last example} \label{sec-last} \setcounter{equation}{0}
In this section we discuss the evaluation of $\mathbf{3.522.4}$: \begin{equation} \int_{0}^{\infty} \frac{dx}{(b^{2}+x^{2}) \cosh \pi x} = \frac{1}{b} \beta \left( b + \frac{1}{2} \right). \label{35224} \end{equation} \noindent The technique illustrated here will be employed in a future publication to discuss many other evaluations.
To establish (\ref{35224}), introduce the function \begin{equation} h(b,y) := \int_{0}^{\infty} e^{-bt} \frac{\cos y t}{\cosh t} \, dt. \end{equation} \noindent This function is harmonic and bounded for $\mathop{\rm Re}\nolimits{b} >0$. Therefore it admits a Poisson representation \begin{equation} h(b,y) = \frac{1}{\pi} \int_{-\infty}^{\infty} h(0,u) \frac{b}{b^{2} +(y-u)^{2}} \, du. \end{equation} \noindent The value $h(0,u)$ is a well-known Fourier transform \begin{equation} h(0,u) = \int_{0}^{\infty} \frac{\cos yt}{\cosh t} \, dt = \frac{\pi}{2 \, \cosh(\pi y/2)}, \end{equation} \noindent that appears as $\mathbf{3.981.3}$ in \cite{gr}. Therefore we have \begin{equation} h(b,y) = \frac{b}{2} \int_{-\infty}^{\infty} \frac{du}{\cosh(\pi u/2) \, \left[ b^{2} + (y-u)^{2} \right]}. \end{equation} \noindent The special value $y=0$ and (\ref{35416}) give the result (after replacing $b$ by $2b$ and $u$ by $2u$). \\
\begin{Note} Formula (\ref{35224}) can also be obtained by a direct contour integration. Details will be provided in a future publication. \end{Note}
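\noindent The identity (\ref{35224}) is also easy to test numerically; the short Python sketch below (again assuming the \texttt{scipy} library; the value of $b$ is arbitrary) compares the two sides.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

b = 0.8  # any b > 0
lhs, _ = quad(lambda x: 1 / ((b**2 + x**2) * np.cosh(np.pi * x)), 0, np.inf)
rhs = 0.5 * (digamma((b + 1.5) / 2) - digamma((b + 0.5) / 2)) / b
print(lhs, rhs)  # the two values agree
\end{verbatim}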
We conclude with an interpretation of (\ref{35224}) in terms of the sine Fourier transform of a function related to $\beta(x)$. The proof is a simple application of the elementary identity \begin{equation} \int_{0}^{\infty} e^{xt} \sin bt \, dt = \frac{b}{b^{2}+x^{2}}. \end{equation} \noindent The details are left to the reader.
\begin{Thm} Let \begin{equation} \mu(x) := \int_{0}^{\infty} \frac{e^{-xt} \, dt}{\cosh t} = \beta \left( \frac{x+1}{2} \right). \end{equation} \noindent Then $\mathbf{3.522.4}$ in (\ref{35224}) is equivalent to the identity \begin{equation} \int_{0}^{\infty} \mu(t) \sin xt \, dt = \mu \left( \frac{2x}{\pi} \right). \end{equation} \end{Thm}
\end{document}
Quantized refrigerator for an atomic cloud
Wolfgang Niedenzu (1), Igor Mazets (2,3), Gershon Kurizki (4), and Fred Jendrzejewski (5)
1. Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 21a, A-6020 Innsbruck, Austria
2. Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, TU Wien, 1020 Vienna, Austria
3. Wolfgang Pauli Institute, Universität Wien, 1090 Vienna, Austria
4. Department of Chemical Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
5. Heidelberg University, Kirchhoff-Institut für Physik, Im Neuenheimer Feld 227, D-69120 Heidelberg, Germany
Published: 2019-06-28, volume 3, page 155
Eprint: arXiv:1812.08474v3
Scirate: https://scirate.com/arxiv/1812.08474v3
Doi: https://doi.org/10.22331/q-2019-06-28-155
Citation: Quantum 3, 155 (2019).
We propose to implement a quantized thermal machine based on a mixture of two atomic species. One atomic species implements the working medium and the other implements two (cold and hot) baths. We show that such a setup can be employed for the refrigeration of a large bosonic cloud starting above and ending below the condensation threshold. We analyze its operation in a regime conforming to the quantized Otto cycle and discuss the prospects for continuous-cycle operation, addressing the experimental as well as theoretical limitations. Beyond its applicative significance, this setup has a potential for the study of fundamental questions of quantum thermodynamics.
BibTeX data
@article{Niedenzu2019quantized,
  doi = {10.22331/q-2019-06-28-155},
  url = {https://doi.org/10.22331/q-2019-06-28-155},
  title = {Quantized refrigerator for an atomic cloud},
  author = {Niedenzu, Wolfgang and Mazets, Igor and Kurizki, Gershon and Jendrzejewski, Fred},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {3},
  pages = {155},
  month = jun,
  year = {2019}
}
This Paper is published in Quantum under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.
| CommonCrawl |
Jun-Muk Hwang
Jun-Muk Hwang (황준묵; born 27 October 1963) is a South Korean mathematician, specializing in algebraic geometry and complex differential geometry.[1]
Jun-Muk Hwang
Born: 1963, Seoul, South Korea
Alma mater: Seoul National University; Harvard University
Awards: Ho-Am Prize in Science (2009), Korea Science Award (2001)
Scientific career
Fields: Algebraic geometry, complex differential geometry, complex analysis
Institutions: Institute for Basic Science, Korea Institute for Advanced Study, Seoul National University, Mathematical Sciences Research Institute, University of Notre Dame
Thesis: Global Nondeformability of the Complex Hyperquadric (1993)
Doctoral advisor: Yum-Tong Siu
Korean name
Hangul: 황준묵
Hanja: 黃準默
Revised Romanization: Hwang Jun-muk
McCune–Reischauer: Hwang Chun-muk
Website: Center for Complex Geometry
Personal life
Hwang is the eldest son of gayageum musician Hwang Byungki and novelist Han Malsook.[2]
Education and career
Hwang studied physics at Seoul National University for his bachelor's degree before moving to Harvard University. In 1993, he completed his PhD there under the direction of Yum-Tong Siu with the thesis Global nondeformability of the complex hyperquadric.[3][4] In the following years he held positions at the University of Notre Dame, the Mathematical Sciences Research Institute, and Seoul National University. From 1999 he was a professor at the Korea Institute for Advanced Study.[1] He was an invited speaker at the 2006 International Congress of Mathematicians (ICM) in Madrid, with the talk Rigidity of rational homogeneous spaces,[5] and a plenary speaker at the 2014 ICM in Seoul, with the talk Mori geometry meets Cartan geometry: Varieties of minimal rational tangents.[6]
With his collaborator Ngaiming Mok, he has developed the theory of varieties of minimal rational tangents, which combines methods of algebraic geometry and differential geometry in the study of rational curves on algebraic varieties. He has applied this theory to settle a number of problems on algebraic varieties covered by rational curves.[1]
In 2020, he became the founding director of the Center for Complex Geometry at the Institute for Basic Science.[7] In 2023, he was selected to serve on the Abel Prize committee.[8][9]
Awards and honors
• 2021: National Academy of Sciences Award, National Academy of Sciences, South Korea[10]
• 2012: Fellow, American Mathematical Society
• 2009: Ho-Am Prize in Science, The Ho-Am Foundation
• 2007: Fellow, Korean Academy of Science and Technology
• 2006: Best Scientist-Engineer of Korea, Ministry of Science and Technology
• 2006: Scientist of the Year Award, Korean National Assembly
• 2001: Korea Science Award, Ministry of Science and Technology
• 2000: Award for Excellent Article, Korean Mathematical Society[11]
Selected publications
• Hwang, Jun-Muk (1995). "Nondeformability of the complex hyperquadric". Inventiones Mathematicae. Springer Science and Business Media LLC. 120 (1): 317–338. Bibcode:1995InMat.120..317H. doi:10.1007/bf01241131. ISSN 0020-9910. S2CID 120966973.
• "Uniruled projective manifolds with irreducible reductive G-structures". Journal für die reine und angewandte Mathematik (Crelle's Journal). Walter de Gruyter GmbH. 1997 (491): 55–64. 1 September 1997. doi:10.1515/crll.1997.490.55. hdl:10722/75184. ISSN 0075-4102. S2CID 118051384.
• Hwang, Jun-Muk; Mok, Ngaiming (18 February 1998). "Rigidity of irreducible Hermitian symmetric spaces of the compact type under Kähler deformation". Inventiones Mathematicae. Springer Science and Business Media LLC. 131 (2): 393–418. Bibcode:1998InMat.131..393H. doi:10.1007/s002220050209. ISSN 0020-9910. S2CID 17138677.
• Hwang, Jun-Muk; Mok, Ngaiming (18 March 1999). "Holomorphic maps from rational homogeneous spaces of Picard number 1 onto projective manifolds". Inventiones Mathematicae. Springer Science and Business Media LLC. 136 (1): 209–231. Bibcode:1999InMat.136..209H. doi:10.1007/s002220050308. hdl:10722/48602. ISSN 0020-9910. S2CID 122937743.
• Hwang, Jun-Muk; Mok, Ngaiming (2003). "Finite morphisms onto Fano manifolds of Picard number 1 which have rational curves with trivial normal bundles". Journal of Algebraic Geometry. American Mathematical Society (AMS). 12 (4): 627–651. doi:10.1090/s1056-3911-03-00319-9. hdl:10722/42125. ISSN 1056-3911.
• HWANG, JUN-MUK; MOK, NGAIMING (2004). "Birationality of the Tangent Map for Minimal Rational Curves". Asian Journal of Mathematics. International Press of Boston. 8 (1): 51–64. doi:10.4310/ajm.2004.v8.n1.a6. ISSN 1093-6106. S2CID 17584597.
• Hwang, Jun-Muk; Mok, Ngaiming (25 February 2005). "Prolongations of infinitesimal linear automorphisms of projective varieties and rigidity of rational homogeneous spaces of Picard number 1 under Kähler deformation". Inventiones Mathematicae. Springer Science and Business Media LLC. 160 (3): 591–645. Bibcode:2005InMat.160..591H. doi:10.1007/s00222-004-0417-9. hdl:10722/48613. ISSN 0020-9910. S2CID 52237844.
• Hwang, Jun-Muk (12 August 2008). "Base manifolds for fibrations of projective irreducible symplectic manifolds". Inventiones Mathematicae. Springer Science and Business Media LLC. 174 (3): 625–644. arXiv:0711.3224. Bibcode:2008InMat.174..625H. doi:10.1007/s00222-008-0143-9. ISSN 0020-9910. S2CID 17694524.
• Fu, Baohua; Hwang, Jun-Muk (8 December 2011). "Classification of non-degenerate projective varieties with non-zero prolongation and application to target rigidity". Inventiones Mathematicae. Springer Science and Business Media LLC. 189 (2): 457–513. doi:10.1007/s00222-011-0369-9. ISSN 0020-9910. S2CID 253736967.
• Hwang, Jun-Muk; Weiss, Richard M. (30 May 2012). "Webs of Lagrangian tori in projective symplectic manifolds". Inventiones Mathematicae. Springer Science and Business Media LLC. 192 (1): 83–109. arXiv:1201.2369. doi:10.1007/s00222-012-0407-2. ISSN 0020-9910. S2CID 253745697.
References
1. "Hwang, Jun-Muk / School of Mathematics". Korea Institute for Advanced Study. Archived from the original on 2021-12-05. Retrieved 2018-09-11.
2. 임아영 [Im A-yeong] (15 November 2014). "[우리시대의멘토]국악인 황병기". Kyunghyang Shinmun. Retrieved 12 September 2018.
3. Jun-Muk Hwang at the Mathematics Genealogy Project
4. Global nondeformability of the complex hyperquadric. ACM Digital Library (phd). Association for Computing Machinery. 1993. Retrieved 5 January 2021.
5. "Rigidity of rational homogeneous spaces" (PDF). International Congress of Mathematicians, Madrid, 2006. Vol. II. Zurich: Eur. Math. Soc. 2006. pp. 613–626.
6. Hwang, Jun-Muk (2015). "Mori geometry meets Cartan geometry: Varieties of minimal rational tangents". arXiv:1501.04720 [math.AG].
7. "IBS launches the IBS Center for Complex Geometry". Institute for Basic Science. 31 August 2020. Retrieved 21 January 2021.
8. "The Abel Committee". The Abel Prize. Retrieved 3 April 2023.
9. 홍아름 (22 March 2023). "'수학계 노벨상' 아벨상 수상자에 루이스 카파렐리 미국 오스틴 텍사스대 교수". Chosun Biz (in Korean). Retrieved 3 April 2023.
10. 권예슬 (27 September 2021). "황준묵 단장, '제66회 대한민국학술원상' 수상". Institute for Basic Science (in Korean). Retrieved 17 January 2023.
11. "Hwang, Jun-Muk / School of Mathematics". Korea Institute for Advanced Study. Archived from the original on 21 January 2021. Retrieved 20 January 2021.
External links
• "Jun-Muk Hwang (KIAS) / Tangential nondegeneracy of projective varieties I / 2013-11-01". YouTube. 17 January 2018.
• "Jun-Muk Hwang (KIAS) / Tangential nondegeneracy of projective varieties II / 2013-11-01". YouTube. 17 January 2018.
• "ICM2014 VideoSeries PL2: Jun-Muk Hwang on Aug14Thu". YouTube. 17 August 2014. (Mori geometry meets Cartan geometry: Varieties of minimal rational tangents)
• Hwang Jun-muk, university professor – Naver people search (in Korean)
Authority control
International
• VIAF
National
• Germany
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
| Wikipedia |
\begin{definition}[Definition:Golay Ternary Code]
The '''Golay ternary code''' is the linear $\tuple {11, 6}$ code over $\Z_3$ whose standard generator matrix $G$ is given by:
:$G := \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 & 2 \\
0 & 0 & 1 & 0 & 0 & 0 & 2 & 1 & 0 & 1 & 2 \\
0 & 0 & 0 & 1 & 0 & 0 & 2 & 2 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 & 2 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1
\end{pmatrix}$
\end{definition} | ProofWiki |
\begin{document}
\maketitle \noindent {\bf Mathematics subject classification 2020}: Primary 11F33, Secondary 11F46\\ \noindent {\bf Key words}: Siegel Eisenstein series, $p$-adic modular forms
\begin{abstract} A generalization of Serre's $p$-adic Eisenstein series in the case of Siegel modular forms is studied and a coincidence between a $p$-adic Siegel Eisenstein series and a genus theta series associated with a quaternary quadratic form is proved. \end{abstract}
\section{Introduction}
In \cite{Serre}, Serre defined the concept of a $p$-adic modular form as the $p$-adic limit of $q$-expansions of modular forms with rational Fourier coefficients. The $p$-adic Eisenstein series was introduced as a typical example of a $p$-adic modular form, and its relation with modular forms on $\Gamma_0(p)$ was also studied. Later, Serre's $p$-adic Eisenstein series was extended to the case of Siegel modular groups, revealing various interesting properties. For example, it was shown that certain $p$-adic Siegel Eisenstein series become usual Siegel modular forms of level $p$ (\cite{Nagaoka1}, \cite{Kat-Nag}).
Let $p$ be a prime number and $\{k_m\}$ the sequence defined by $$ k_m:=2+(p-1)p^{m-1}. $$ For sequence $\{k_m\}$, the $p$-adic Siegel Eisenstein series $$ \widetilde{E}_2^{(n)}:=\lim_{m\to\infty}E_{k_m}^{(n)} $$ is defined, where $E_k^{(n)}$ is the ordinary Siegel Eisenstein series of degree $n$ and weight $k$.
Let $S^{(p)}$ be a positive definite, half-integral symmetric matrix of degree $4$ with level $p$ and determinant $p^2/16$. We denote by genus\,$\Theta^{(n)}(S^{(p)})$ the genus theta series of degree $n$ associated with $S^{(p)}$ (for the precise definition, see $\S$ \ref{genusT}). The genus theta series genus\,$\Theta^{(n)}(S^{(p)})$ is a Siegel modular form of weight 2 on the level $p$ modular group $\Gamma_0^{(n)}(p)$.
In \cite{KN}, Kikuta and the second author showed the coincidence between the two objects $\widetilde{E}_2^{(2)}$ and genus\,$\Theta^{(2)}(S^{(p)})$. This result asserts that there is a correspondence between some $p$-adic Siegel Eisenstein series and some genus theta series when the degree is 2. The main purpose of this paper is to show that this coincidence still exists for any $n$. Namely we prove the following theorem.
\\
\textbf{Theorem}\quad {\it We assume that $p$ is an odd prime number. Then the degree $n$ $p$-adic Siegel Eisenstein series $\widetilde{E}_2^{(n)}$ coincides with the degree $n$ genus theta series {\rm genus}\,$\Theta^{(n)}(S^{(p)})$: $$ \widetilde{E}_2^{(n)}={\rm genus}\,\Theta^{(n)}(S^{(p)}). $$ In particular, the $p$-adic Siegel Eisenstein series $\widetilde{E}_2^{(n)}$ is a Siegel modular form of weight 2 on $\Gamma_0^{(n)}(p)$. } \\
The equality of this theorem is proved by showing that the Fourier coefficients on both sides are equal. In particular, as can be seen from the discussion in the text, the computation of local densities for $n=3$ or $4$ is essential (cf. \S\; \ref{LC34}).
By considering the $p$-adic first and second approximation of $\widetilde{E}_2^{(n)}$, we obtain the following results.
\\
\textbf{Corollary}\quad (1)\;\; {\it Assume that $p>n$. Then the modular form $\widetilde{E}_2^{(n)}$ of level $p$ and weight 2 is congruent to the weight $p+1$ Siegel Eisenstein series $E_{p+1}^{(n)}$ mod $p$\;{\rm :} $$
\widetilde{E}_2^{(n)} \equiv E_{p+1}^{(n)} \pmod{p}. $$ } (2)\;\;{\it Assume that $p\geq 3$. Then we have $$ \varTheta (E_{p+1}^{(3)}) \equiv 0 \pmod{p}\quad \text{and} \quad \varTheta (E_{p^2-p+2}^{(4)}) \equiv 0 \pmod{p^2}, $$ where $\varTheta$ is the theta operator {\rm (cf. $\S$ \ref{ThetaOp})}. }
\\ Statement (1) is motivated by Serre's result in the case of elliptic modular forms: For any modular form $f$ of weight 2 on $\Gamma_0(p)$, there is a modular form $g$ of level one and weight $p+1$ satisfying $$ f \equiv g \pmod{p}. $$ The result described in (2) is related to the theory of the mod $p$ kernels of theta operators. The second congruence provides an example of a Siegel modular form contained in the mod $p^2$ kernel of a theta operator.
\\
{\sc Notation.} Let $R$ be a commutative ring. We denote by $R^{\times}$ the unit group of $R$. We denote by $M_{mn}(R)$ the set of
$m \times n$-matrices with entries in $R.$ In particular put $M_n(R)=M_{nn}(R).$ Put $GL_m(R) = \{A \in M_m(R) \ | \ \det A \in R^\times \},$ where $\det A$ denotes the determinant of a square matrix $A$. For an $m \times n$-matrix $X$ and an $m \times m$-matrix $A$, we write $A[X] = {}^t X A X,$ where $^t X$ denotes the transpose of $X$. Let $\text{Sym}_n(R)$ denote the set of symmetric matrices of degree $n$ with entries in $R.$ Furthermore, if $R$ is an integral domain of characteristic different from $2,$ let ${\mathcal H}_n(R)$ denote the set of half-integral matrices of degree $n$ over $R$, that is, ${\mathcal H}_n(R)$ is the subset of symmetric matrices of degree $n$ with entries in the field of fractions of $R$ whose $(i,j)$-component belongs to $R$ or ${1 \over 2}R$ according as $i=j$ or not.
We say that an element $A$ of $M_n(R)$ is non-degenerate if $\det A \not=0$. For a subset ${\mathcal S}$ of $M_n(R)$ we denote by ${\mathcal S}^{\mathrm{nd}}$ the subset of ${\mathcal S}$ consisting of non-degenerate matrices. If ${\mathcal S}$ is a subset of $\mathrm{Sym}_n({\mathbb R})$ with ${\mathbb R}$ the field of real numbers, we denote by ${\mathcal S}_{>0}$ (resp. ${\mathcal S}_{\ge 0}$) the subset of ${\mathcal S}$ consisting of positive definite (resp. semi-positive definite) matrices. We sometimes write $\Lambda_n$ (resp. $\Lambda_n^+$) instead of ${\mathcal H}_n({\mathbb Z})$ (resp. ${\mathcal H}_n({\mathbb Z})_{>0}$). The group $GL_n(R)$ acts on the set $\mathrm{Sym}_n(R)$ by $$ GL_n(R) \times \mathrm{Sym}_n(R) \ni (g,A) \longmapsto A[g] \in \mathrm{Sym}_n(R). $$ \noindent Let $G$ be a subgroup of $GL_n(R).$ For a $G$-stable subset ${\mathcal B}$ of $\mathrm{Sym}_n(R)$ we denote by ${\mathcal B}/G$ the set of equivalence classes of ${\mathcal B}$ under the action of $G.$ We sometimes use the same symbol ${\mathcal B}/G$ to denote a complete set of representatives of ${\mathcal B}/G.$ We abbreviate ${\mathcal B}/GL_n(R)$ as ${\mathcal B}/\!\!\sim$ if there is no fear of confusion. Let $G$ be a subgroup of $GL_n(R)$. Then two symmetric matrices $A$ and $A'$ with entries in $R$ are said to be $G$-equivalent with each other and write $A \sim_{G} A'$ if there is an element $X$ of $G$ such that $A'=A[X].$ We also write $A \sim A'$ if there is no fear of confusion. For square matrices $X$ and $Y$ we write $X \bot Y = \begin{pmatrix} X &O \\ O & Y \end{pmatrix}$.
We put ${\bf e}(x)=\exp(2 \pi \sqrt{-1} x)$ for $x \in {\mathbb C},$ and for a prime number $q$ we denote by ${\bf e}_q(*)$ the continuous additive character of ${\mathbb Q}_q$ such that ${\bf e}_q(x)= {\bf e}(x)$ for $x \in {\mathbb Z}[q^{-1}].$
For a prime number $q$ we denote by $\mathrm{ord}_q(*)$ the additive valuation of ${\mathbb Q}_q$ normalized so that $\mathrm{ord}_q(q)=1$. Moreover for any element $a, b \in {\mathbb Z}_q$ we write $b \equiv a \pmod {q}$ if $\mathrm{ord}_q(a-b) >0$.
\section{Siegel Eisenstein series and genus theta series}
\subsection{Siegel modular forms} Let $\mathbb{H}_n$ be the Siegel upper-half space of degree $n$; then the Siegel modular group $\Gamma^{(n)}:=Sp_n(\mathbb{R})\cap M_{2n}(\mathbb{Z})$ acts discontinuously on $\mathbb{H}_n$. For a congruence subgroup $\Gamma'$ of $\Gamma^{(n)}$, we denote by $M_k(\Gamma')$ the corresponding space of Siegel modular forms of weight $k$ for $\Gamma'$. Later we mainly deal with the case $\Gamma'=\Gamma^{(n)}$ or $\Gamma_0^{(n)}(N)$ where $$
\Gamma_0^{(n)}(N)=\left\{ \binom{A\;B}{C\;D}\in\Gamma^{(n)}\;\big{|}\; C \equiv O_n \pmod{N}\;\right\}. $$ In both cases, $F\in M_k(\Gamma')$ has a Fourier expansion of the form $$ F(Z)=\sum_{0\leq T\in\Lambda_n}a(F,T)\,{\bf e}(\text{tr}(TZ)). $$
Taking $q_{ij}:={\bf e}(z_{ij})$ with $Z=(z_{ij})\in\mathbb{H}_n$, we write $$ q^T:={\bf e}(\text{tr}(TZ)) =\prod_{1\leq i<j\leq n}q_{ij}^{2t_{ij}}\prod_{i=1}^nq_i^{t_i}, $$ where $T=(t_{ij})$ and $q_i=q_{ii}$,\,$t_i=t_{ii}$\,$(i=1,\cdots,n)$. Using this notation, we obtain \begin{align*} F=\sum_{0\leq T\in\Lambda_n}a(F,T)\,q^T & =\sum_{t_i}\left(\sum_{t_{ij}}a(F,T)\prod_{i<j}q_{ij}^{2t_{ij}}\right)
\prod_{i=1}^nq_i^{t_{i}} \\
& \in\mathbb{C}[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!]. \end{align*}
For a subring $R\subset\mathbb{C}$, we denote by $M_k(\Gamma')_R$ the set consisting of modular forms $F$ all of whose Fourier coefficients $a(F,T)$ lie in $R$. Therefore, an element $F\in M_k(\Gamma')_R$ may be regarded as an element of $$ R[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!]. $$
\subsection{Siegel Eisenstein series} Define $$
\Gamma_\infty^{(n)}:=\left\{\;\binom{A\;B}{C\;D}\in\Gamma^{(n)}\;\big{|}\; C=O_n\; \right\}. $$ For an even integer $k>n+1$, define a series by $$ E_k^{(n)}(Z)=\sum_{\binom{*\;*}{C\;D}\in\Gamma_\infty^{(n)}\backslash \Gamma^{(n)}}
\text{det}(CZ+D)^{-k},\quad Z\in\mathbb{H}_n. $$ This series is an element of $M_k(\Gamma^{(n)})_{\mathbb{Q}}$ called the {\it Siegel Eisenstein series} of weight $k$ for $\Gamma^{(n)}$.
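For orientation, we recall that for $n=1$ this is the classical Eisenstein series, whose $q$-expansion in the present normalization is $$ E_k^{(1)}=1-\frac{2k}{B_k}\sum_{m=1}^{\infty}\sigma_{k-1}(m)\,q^m, $$ where $B_k$ denotes the $k$-th Bernoulli number and $\sigma_{k-1}(m)=\sum_{0<d\mid m}d^{k-1}$; in particular the constant term is $a(E_k^{(n)},O_n)=1$ for every $n$.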
\subsection{Genus theta series} \label{genusT} Fix $S\in\Lambda_{m}^{+}$ and define $$ \theta^{(n)}(S;Z):=\sum_{X\in M_{mn}(\mathbb{Z})}\,{\bf e}(\text{tr}(S[X]Z)),\quad Z\in\mathbb{H}_n. $$
Let $\{S_1,\ldots, S_d\}$ be a set of representatives of $GL_m(\mathbb{Z})$-equivalence classes in $\text{genus}\,(S)$. The {\it genus theta series} associated with $S$ is defined by $$ \text{genus}\,\Theta^{(n)}(S)(Z):= \left(\sum_{i=1}^d\frac{\theta^{(n)}(S_i;Z)}{a(S_i,S_i)}\right)\Big{/} \left( \sum_{i=1}^d\frac{1}{a(S_i,S_i)}\right), \quad Z\in\mathbb{H}_n $$ where $$
a(S_i,S_i):=\sharp\{\;X\in M_m(\mathbb{Z})\;|\;S_i[X]=S_i\;\}\quad (\text{cf. \S\;\ref{LC34})}. $$
\subsection{$p$-adic Siegel Eisenstein series} Let $\{k_m\}_{m=1}^{\infty}$ be an increasing sequence of even positive integers which is $p$-adically convergent.
If the corresponding sequence of Siegel Eisenstein series $$ \{E_{k_m}^{(n)}\}\subset \mathbb{Q}[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!] $$ converges $p$-adically to an element of $\mathbb{Q}_p[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!]$, then we call the limit $$ \lim_{m\to\infty}E_{k_m}^{(n)}\in \mathbb{Q}_p[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!] $$ a {\it $p$-adic Siegel Eisenstein series}.
\section{Main result}
\subsection{Statement of the main theorem}
As stated in the Introduction, the main result of this paper asserts a coincidence between some $p$-adic Siegel Eisenstein series and some genus theta series.
First we consider an element $S^{(p)}$ of $\Lambda_4^+$ with level $p$ and determinant $p^2/16$. (The existence of such a matrix will be proved in Lemma \ref{lem.existence-of-S}.)
Next we consider a special $p$-adic Siegel Eisenstein series.
Let $\{k_m\}_{m=1}^\infty\subset \mathbb{Z}_{>0}$ be a sequence defined by $$ k_m=k_m(p):=2+(p-1)p^{m-1}\qquad (p:\;\text{prime}). $$ This sequence converges $p$-adically to 2. We associate this sequence with the sequence of Siegel Eisenstein series $$ \{E_{k_m}^{(n)}\}_{m=1}^\infty\subset \mathbb{Q}[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!]. $$ As we prove in the following, this sequence defines a $p$-adic Siegel Eisenstein series. We set $$ \widetilde{E}_2^{(n)}:=\lim_{m\to\infty}E_{k_m}^{(n)}. $$ Our main theorem can be stated as follows.
\begin{theorem} \label{statementmain} Let $p$ be an odd prime number and $S^{(p)}$ be as above. Then the following identity holds: $$ \widetilde{E}_2^{(n)}={\rm genus}\,\Theta^{(n)}(S^{(p)}). $$ \end{theorem}
If $n=2$, the above identity has already been proved in \cite{KN}, and the proof for general $n$ reduces essentially to the cases $n=3$ and $4$, which will be presented in the next section.
\subsection{Case $\boldsymbol{n=3}$ or $\boldsymbol{4}$} \label{n=3-4}
We prove our main result in the case that $n$ is $3$ or $4$.
\begin{theorem} \label{th.main-result} \begin{itemize} \item[{\rm (1)}] For any $T \in \Lambda_3^+$, we have
\begin{align*} a(\widetilde E_2^{(3)},T)=a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T). \end{align*} \item[{\rm (2)}] For any $T \in \Lambda_4^+$, we have
\begin{align*} a(\widetilde E_2^{(4)},T)=a(\mathrm{genus} \ \Theta^{(4)} (S^{(p)}),T). \end{align*} \end{itemize} \end{theorem} By the above theorem and \cite{KN}. we have the following result.
\begin{corollary} \label{cor.main-result2} Let $n=3$ or $4$. Then \begin{align*} \widetilde E_2^{(n)}=\mathrm{genus} \ \Theta^{(n)}(S^{(p)}). \end{align*} \end{corollary}
To prove Theorem \ref{th.main-result}, we explicitly compute both sides of the equality in the theorem. We also use the notation in \cite{IK22}. Let $q$ be a prime number.
Let $\langle \ {} \ , \ {} \ \rangle=\langle \ {} \ , \ {} \ \rangle_Q$ be the Hilbert symbol on ${\mathbb Q}_q$. Let $T$ be a non-degenerate symmetric matrix with entries in ${\mathbb Q}_q$ of degree $n$. Then $T$ is $GL_n({\mathbb Q}_q)$-equivalent to $b_1 \bot \cdots \bot b_n$ with $b_1,\ldots,b_n \in {\mathbb Q}_q^{\times}$. Then we define the Hasse invariant $h_q(T)$ as \[h_q(T)=\prod_{1 \le i \le j \le n} \langle b_i,b_j \rangle.\] We also define $\varepsilon_q(T)$ as \[\varepsilon_q(T)=\prod_{1 \le i < j \le n} \langle b_i,b_j \rangle.\] These do not depend on the choice of $b_1,\ldots,b_n$. We also denote by $\eta_q(T)$ the Clifford invariant of $T$ (cf. \cite{IK22}). Then we have \[\eta_q(T)= \begin{cases} \langle -1,-1 \rangle^{m(m+1)/2} \langle (-1)^m,\det (T) \rangle \varepsilon_q(T) & \ \text{if $n=2m+1$,}\\ \langle -1,-1 \rangle^{m(m-1)/2} \langle (-1)^{m+1},\det (T) \rangle \varepsilon_q(T) & \ \text{if $n=2m$,} \end{cases}\] and hence, \[\eta_q(T)= \begin{cases} \langle -1,-1 \rangle^{m(m+1)/2} \langle (-1)^m \det (T),\det (T) \rangle \varepsilon_q(T) & \ \text{if $n=2m+1$,}\\ \langle -1,-1 \rangle^{m(m-1)/2} \langle (-1)^{m+1} \det (T),\det (T) \rangle \varepsilon_q(T) & \ \text{if $n=2m$.} \end{cases}\] Let $T \in \mathrm{Sym}_n({\mathbb Q}_q)$. Then, by the product formula for the Hilbert symbol, we have $\prod_q h_q(T)=1$ and \[\prod_q \eta_q(T)=\begin{cases} (-1)^{(n^2-1)/8} & \text{ if } n \text{ is odd} ,\\ (-1)^{n(n-2)/8} & \text{ if } n \text{ is even}. \end{cases}\] For $a \in {\mathbb Z}_q, a \not=0$, define $\chi_q(a)$ as \[ \chi_q(a)= \begin{cases} 1 & \text{ if } {\mathbb Q}_q(\sqrt{a})={\mathbb Q}_q,\\ -1 & \text{ if } {\mathbb Q}_q(\sqrt{a})/{\mathbb Q}_q \text { is unramified quadratic}, \\ 0 & \text{ if } {\mathbb Q}_q(\sqrt{a})/{\mathbb Q}_q \text{ is ramified quadratic}. \end{cases} \]
We now prove the existence of the matrix $S^{(p)}$ that appears in the main theorem.
\\ Let $H_k = \overbrace{H \bot ...\bot H}^k$ with $H =\left(\begin{matrix}
0 & 1/2 \\
1/2 & 0
\end{matrix}\right)$. We take an element $\epsilon \in {\mathbb Z}_p^\times$ such that $\chi_p(-\epsilon)=-1$, and put $U_0=1 \,\bot\, \epsilon$.
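One checks easily that $H \sim_{GL_2({\mathbb Z}_q)} 1 \bot (-1)$ for every odd prime $q$, and that $\eta_q(H_k)=1$ for every prime $q$ and every $k \ge 1$. On the other hand, the choice of $\epsilon$ means that $\chi_p(-\det U_0)=\chi_p(-\epsilon)=-1$, that is, $U_0$ is anisotropic modulo $p$. These facts will be used repeatedly below.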
\begin{lemma}\label{lem.existence-of-S} For each prime number $q$, let $S_q$ be an element of ${\mathcal H}_4({\mathbb Z}_q)$ such that
\begin{align*} S_q =\begin{cases} U_0 \bot pU_0 & \text{ if } q=p, \\ H_2 & \text{ otherwise}. \end{cases} \end{align*} Then there exists an element $S$ of $\Lambda_4^+$ with level $p$ and determinant $16^{-1}p^2$ such that \begin{align*} S \sim_{GL_4({\mathbb Z}_q)} S_q \text{ for any } q. \tag{E} \end{align*} Conversely, if $S$ is an element of $\Lambda_4^+$ with level $p$ and determinant $16^{-1}p^2$ , then $S$ satisfies condition {\rm (E)}. \end{lemma}
\begin{proof} Let $S_q$ be as above. By assumption, we have \[ (16^{-1}p^2)^{-1}\det (S_q) \in ({\mathbb Z}_q^\times)^2 \text{ for any } q,\] and \begin{align*} h_q(S_q)=\begin{cases} -1 & \text{ if } q=p \text{ or } 2, \\ 1 & \text{ otherwise}. \end{cases} \end{align*}
Hence, by \cite[Theorem 4.1.2]{Ki2}, there exists an element $S$ of $\text{Sym}_4({\mathbb Q})_{>0}$ satisfying condition (E). By construction, we easily see that $S$ belongs to ${\mathcal H}_4({\mathbb Z})_{>0}$ with level $p$ and determinant $16^{-1}p^2$.
Conversely, let $S$ be an element of $\Lambda_4^+$ with level $p$ and determinant $16^{-1}p^2$. Then, we have \[\det (2S) \in ({\mathbb Z}_q^\times)^2 \text{ for any } q\not=p.\] Hence, we have $S \sim_{GL_4({\mathbb Z}_q)} H_2$, so we have $h_q(S)=1$ if $q \not=p,\,2$ and $h_2(S)=-1$. Thus, we have $h_p(S)=-1$. By definition, we have $p^{-2}\det (S) \in ({\mathbb Z}_p^\times)^2$ and $pS^{-1} \in {\mathcal H}_4({\mathbb Z}_p)$. Then, we easily see that $S \sim_{GL_4({\mathbb Z}_p)} U_0 \bot pU_0$. This proves the assertion. \end{proof}
\subsubsection{Computation of $\boldsymbol{a(\widetilde{E}_2^{(n)},T)}$ for $\boldsymbol{n=3}$ or $\boldsymbol{4}$}
In the case $n$ is even, for $T \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$ we put $\xi_q(T)=\chi_q((-1)^{n/2} \det (2T))$. For $T \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$, let $b_q(T,s)$ be the Siegel series of $T$. Then $b_q(T,s)$ is a polynomial in $q^{-s}$. More precisely, we define a polynomial $\gamma_q(T,X)$ in $X$ by $$\gamma_q(T,X)= \begin{cases} (1-X)\prod_{i=1}^{n/2}(1-q^{2i}X^2)(1-q^{n/2}\xi_q(T) X)^{-1} & \text{ if $n$ is even,}
\\ (1-X)\prod_{i=1}^{(n-1)/2}(1-q^{2i}X^2) & \text{ if $n$ is odd.} \end{cases}$$
Then there exists a polynomial $F_q(T,X)$ in $X$ with coefficients in ${\mathbb Z}$ such that $$F_q(T,q^{-s})={b_q(T,s) \over \gamma_q(T,q^{-s})}.$$ The properties of the polynomial $F_q(T,X)$ are studied in detail by the first author in \cite{K99}. (In particular, see \cite[Theorem 3.2]{K99} for the properties used below.)
We put \[\widetilde b_q(T,X)=\gamma_q(T,X)F_q(T,X).\]
\begin{proposition}\label{prop.FC-Siegel} For any $T \in \Lambda_n^+$, we have \begin{align*} a(E_k^{(n)},T)&= (-1)^{[(n+1)/2]}2^{n-[n/2]} \frac{k}{B_k} \prod_{i=1}^{[n/2]}\frac{2k-2i}{B_{2k-2i}}
\prod_q F_q(T,q^{k-n-1})\\ &\times \begin{cases} \frac{B_{k-n/2,\chi_T}}{k-n/2} & \text{ if } n \text{ is even,} \\ 1 & \text{ if } n \text{ is odd,} \end{cases} \end{align*} where $\chi_T$ denotes the primitive Dirichlet character corresponding to the extension $\mathbb{Q}(\sqrt{(-1)^{n/2}{\rm det}(2T)})/\mathbb{Q}$. \end{proposition}
\begin{remark}\label{rem.comariosn-KN} Let $F_q^{(n)}(T,X)$ be the polynomial in \cite[Theorem 2.2]{Kat-Nag}. Then it coincides with $F_q(T,X)$ if $n$ is even. But it is $\eta_q(T)F_q(T,X)$ if $n$ is odd. \end{remark}
\begin{proposition} \label{prop.limit-of-Kitaoka-polynomial} Let $T \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$. \begin{itemize} \item[{\rm (1)}] Let $q \not=p$. Then we have \begin{align*} &\lim_{m\to\infty} F_q(T,q^{k_m-n-1})=F_q(T, q^{2-n-1}).\\ \end{align*} \item[{\rm (2)}] We have \begin{align*} &\lim_{m \to\infty} F_p(T,p^{k_m-n-1})=1 \end{align*} \end{itemize} \end{proposition}
\begin{proof} By construction, $F_q(T,X)$ belongs to ${\mathbb Z}[X]$ and its constant term is $1$. For $q \not= p$ we have $q^{k_m-n-1}=q^{1-n}\,q^{(p-1)p^{m-1}}$ and $q^{(p-1)p^{m-1}} \to 1$ $p$-adically as $m \to \infty$, which gives (1). For $q=p$ we have $p^{k_m-n-1} \to 0$ $p$-adically, so only the constant term survives in the limit, which gives (2). \end{proof}
\begin{lemma} \label{lem.vanishing-of-F} Let $T \in {\mathcal H}_3({\mathbb Z}_q)^{\rm nd}$. Suppose that $\eta_q(T)=-1$. Then \[F_q(T,q^{-2})=0.\] \end{lemma}
\begin{proof} By the functional equation of $F_q(T,X)$ (cf. \cite[Theorem 3.2]{K99}), we have \begin{align*} F_q(T,q^{-4}X^{-1}) & =\eta_q(T)(q^2X)^{\mathrm{ord}_q(2^{2}\det T)}F_q(T,X)\\ & =-(q^2X)^{\mathrm{ord}_q(2^{2}\det T)}F_q(T,X). \end{align*} Hence, we have \begin{align*} F_q(T,q^{-2})=-F_q(T,q^{-2}). \end{align*} Therefore, the assertion holds. \end{proof}
Here we prepare some result on Bernoulli numbers.
\begin{lemma}\;{\rm (Carlitz \cite[{\sc Theorem 3}]{Carlitz1})} \label{lem.limit-of-Bernoulli-number} Assume that $p$ is an odd prime number. For $t=rp^k(p-1)$\,{\rm (}$r\in\mathbb{Z}_{>0},\,k\in\mathbb{Z}_{\geq 0}${\rm )}, the numerator of $\displaystyle B_t+\frac{1}{\,p\,}-1$ is divisible by $p^k$. In particular $$ \lim_{k\to\infty}B_{r(p-1)p^k}=1-\frac{1}{\,p\,} $$ for any $r\in\mathbb{Z}_{>0}$. \end{lemma}
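For instance, for $p=5$, $r=1$, $k=1$ one has $t=20$ and $B_{20}=-174611/330$, so that $$ B_{20}+\frac{1}{\,5\,}-1=-\frac{34975}{66}, $$ whose numerator is divisible by $5$ (indeed by $5^2$), in accordance with the lemma.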
Now we obtain the following theorem.
\begin{theorem} \label{th.limit-of-FC-of-Eisenstein} \begin{itemize} \item[{\rm (1)}] Let $T \in \Lambda_3^+$. Then, we have \begin{align*} a(\widetilde E_2^{(3)},T)=\frac{576}{(1-p)^2} \prod_{q \not=p} F_q(T,q^{-2}), \end{align*} and in particular if $\eta_p(T)=1$, we have \begin{align*} a(\widetilde E_2^{(3)},T)=0. \end{align*} \item[{\rm (2)}] Let $T \in \Lambda_4^+$. \begin{itemize} \item[{\rm (2.1)}] Suppose that $\det (2T)$ is not square. Then \[a(\widetilde E_2^{(4)},T)=0.\] \item[{\rm (2.2)}] Suppose that $\det (2T)$ is square. Then \begin{align*} a(\widetilde E_2^{(4)},T)=\frac{1152}{(1-p)^2} \prod_{q \not=p} F_q(T,q^{-3}). \end{align*} In particular if $\eta_p(T)=1$ or $\mathrm{ord}_p(\det(2T)) =0$, then \[a(\widetilde E_2^{(4)},T)=0.\] \end{itemize} \end{itemize} \end{theorem}
\begin{proof} (1) We have \begin{align*} (\star)\qquad\qquad \lim_{m\to\infty} \frac{B_{k_m}}{k_m}= \lim_{m\to\infty} \frac{B_{2k_m-2}}{2k_m-2}= \frac{(1-p)B_2}{2}=\frac{1-p}{12}. \end{align*} These identities can be proved by Kummer's congruence when $p>3$. For $p=3$, these identities are shown using the extended Kummer's congruence (cf. \cite[COROLLAIRE 3]{Fr}). This proves the first part of the assertion. Suppose that $\eta_p(T)=1$. We note that \[\prod_q \eta_q(T)=-1.\] Therefore, if $\eta_p(T)=1$, there is a prime number $q \not=p$ such that $\eta_q(T)=-1$. Therefore, by Lemma \ref{lem.vanishing-of-F}, we have $F_q(T,q^{-2})=0$. This proves the second part of the assertion.
(2) Put \[{\mathcal B}_{T,m}=\Big(\frac{2k_m-4}{B_{2k_m-4}} \Big)\Big(\frac{B_{k_m-2,\chi_T}}{k_m-2}\Bigr).\] Then \[{\mathcal B}_{T,m} =\Big(\frac{2(p-1)p^{m-1}}{B_{2(p-1)p^{m-1}}} \Big)
\Big(\frac{B_{(p-1)p^{m-1},\chi_T}}{(p-1)p^{m-1}}\Bigr).\] First suppose that $\det (2T)$ is not square. Then, $\chi_T$ is non-trivial, and $\frac{B_{(p-1)p^{m-1},\chi_T}}{(p-1)p^{m-1}}$ is $p$-integral for any $m \ge 1$ (cf. \cite{Carlitz2}). Hence, by the theorem of von Staudt-Clausen, \[\lim_{m\to\infty} {\mathcal B}_{T,m}=0.\] This proves the assertion (2.1). Next suppose that $\det (2T)$ is square. Then, $B_{(p-1)p^{m-1},\chi_T}=B_{(p-1)p^{m-1}}$, and \[{\mathcal B}_{T,m} =2\frac{B_{(p-1)p^{m-1}}}{B_{2(p-1)p^{m-1}}}.\] By Lemma \ref{lem.limit-of-Bernoulli-number}, \[\lim_{m\to\infty} {\mathcal B}_{T,m}=2\cdot\lim_{m\to\infty}\frac{B_{(p-1)p^{m-1}}}{B_{2(p-1)p^{m-1}}}=2.\] This fact, together with $(\star)$, proves the first part of the assertion (2.2). To prove the remaining part, first suppose that $\eta_p(T)=1$. We note that \[\prod_q \eta_q(T)=-1.\] Therefore, there is a prime number $q \not=p$ such that $\eta_q(T)=-1$.
We have $F_q(T,q^{-3})=0$ as will be proved in Corollary \ref{add-cor2-2}.
Next suppose that $\mathrm{ord}_p(\det (2T)) =0$. Then, $\eta_p(T)=1$, and the assertion follows from the above. \end{proof}
\subsubsection{Computation of
$\boldsymbol{a(\text{genus}\,\Theta^{(n)}(S^{(p)}),T)}$ for $\boldsymbol{n=3}$ or $\boldsymbol{4}$}
\label{LC34}
For $S \in {\mathcal H}_m({\mathbb Z}_q)^{\mathrm{nd}}$ and $T \in {\mathcal H}_n({\mathbb Z}_q)^{\mathrm{nd}}$ with $m \ge n$, we define the local density $\alpha_q(S,T)$ as \[\alpha_q(S,T)=2^{-\delta_{m,n}}\lim_{e\to\infty} q^{e(-mn+n(n+1)/2)}\sharp{\mathcal A}_e(S,T), \] where \begin{align*}
{\mathcal A}_e(S,T)=\{X \in M_{mn}({\mathbb Z}_q)/q^eM_{mn}({\mathbb Z}_q) \ | \ S[X] \equiv T\!\pmod{ q^e {\mathcal H}_n({\mathbb Z}_q)}\,\}, \end{align*} and $\delta_{m,n}$ is the Kronecker delta.
We say that an element $X$ of $M_{mn}({\mathbb Z}_q)$ with $m \ge n$ is {\it primitive} if $\mathrm{rank}_{{\mathbb Z}_q/q{\mathbb Z}_q} X=n$. We also say that an element $X \mod q^e$ of $M_{mn}({\mathbb Z}_q)/q^eM_{mn}({\mathbb Z}_q)$ is {\it primitive} if $X$ is primitive. This definition does not depend on the choice of $X$. From now on, for $X \in M_{mn}({\mathbb Z}_q)$, we often use the same symbol $X$ to denote the class of $X$ mod $q^e$. For $S \in {\mathcal H}_m({\mathbb Z}_q)^{\mathrm{nd}}$ and $T \in {\mathcal H}_n({\mathbb Z}_q)$ with $m \ge n$, we define the primitive local density $\beta_q(S,T)$ as \[\beta_q(S,T)=2^{-\delta_{m,n}}\lim_{e\to\infty} q^{e(-mn+n(n+1)/2)}\sharp{\mathcal B}_e(S,T),\] where
\[{\mathcal B}_e(S,T)=\{X \in {\mathcal A}_e(S,T) \ | \ X \text{ is primitive}\}. \]
The following is due to \cite{Ki1}.
\begin{lemma} \label{lem.local-density-via-primitive-local-density} Let $S \in {\mathcal H}_m({\mathbb Z}_q)^{\rm nd}$ and $T \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$ with $m \ge n$. Then \[\alpha_q(S,T)=\sum_{g \in GL_n({\mathbb Z}_q) \backslash M_n({\mathbb Z}_q)^{\rm nd}} q^{(-m+n+1)\,\mathrm{ord}_q(\det g)}\beta_q(S,T[g^{-1}]).\] \end{lemma} For $S \in \mathrm{Sym}_m({\mathbb Z}_q)^{\mathrm{nd}}$ and $T \in \mathrm{Sym}_n({\mathbb Z}_q)^{\mathrm{nd}}$ we also define another local density $\widetilde \alpha_q(T,S)$ as \begin{align*} \widetilde \alpha_q(T,S)=2^{-\delta_{m,n}}\lim_{e\to\infty} q^{e(-mn+n(n+1)/2)}\sharp\widetilde {\mathcal A}_e(T,S), \end{align*} where \begin{align*}
\widetilde {\mathcal A}_e(T,S)=\{X \in M_{mn}({\mathbb Z}_q)/q^eM_{mn}({\mathbb Z}_q) \ |
\ S[X] \equiv T \!\pmod{q^e \mathrm{Sym}_n({\mathbb Z}_q)}\,\}. \end{align*}
\begin{remark} \label{rem.comparison-densities} We note that \begin{align*} \widetilde \alpha_q(2T,2S)=2^{n\delta_{2,q}}\alpha_q(S,T) \end{align*} for $S \in {\mathcal H}_m({\mathbb Z}_q)$ and $T \in {\mathcal H}_n({\mathbb Z}_q)$. \end{remark}
For $S \in \Lambda_m^+$ and $T \in \Lambda_n^+$, put
\[a(S,T)=\#\{X \in M_{mn}({\mathbb Z}) \ | \ S[X]=T\}.\] Moreover put \[M(S)=\sum_{i=1}^{d} {1 \over a(S_i,S_i)} \] and \[\widetilde a(S,T)=M(S)^{-1}\sum_{i=1}^d {a(S_i,T) \over a(S_i,S_i)},\] where $S_i$ runs over a complete set of $GL_m({\mathbb Z})$-equivalence classes in $\text{genus}(S)$. Then we have the following formula.
\begin{proposition} \label{prop.Siegel-main-theorem} Under the above notation, we have \begin{align*} &\widetilde a(S,T)=\\ &2^n \varepsilon_{n,m}\pi^{n(2m-n+1)/4}\prod_{i=1}^{n-1} \Gamma((m-i)/2)^{-1}(\det (2S))^{-n/2}(\det (2T))^{(m-n-1)/2}\\ &\times \prod_q \alpha_q(S,T), \end{align*} where \[\varepsilon_{n,m}=\begin{cases} 1/2 & \text{ if either } m=n+1 \text{ or } m=n>1, \\ 1 & \text{otherwise}. \end{cases}\] \end{proposition}
\begin{proof} We note that we have $\widetilde a(S,T)=\widetilde a(2S,2T)$. By \cite[Theorem 6.8.1]{Ki2} we have \begin{align*} &\widetilde a(2S,2T)\\ &=\varepsilon_{n,m}\pi^{n(2m-n+1)/4}\prod_{i=1}^{n-1} \Gamma((m-i)/2)^{-1}(\det (2S))^{-n/2}(\det (2T))^{(m-n-1)/2}\\ &\times \prod_q \widetilde \alpha_q(2T,2S). \end{align*} Then, the assertion follows from Remark \ref{rem.comparison-densities}.
\end{proof}
\begin{proposition} \label{prop.mass-formula} \begin{itemize} \item[{\rm (1)}] Let $T \in \Lambda_3^+$. Then we have \[a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T)=8p^{-3}\pi^{4} \alpha_p(U_0 \bot pU_0,T)\prod_{q \not=p} \alpha_q(H_2,T).\] \item[{\rm (2)}] Let $T \in \Lambda_4^+$. Then we have \[a(\mathrm{genus}\ \Theta^{(4)}(S^{(p)}),T)=16p^{-4}\pi^{4} \det(2T)^{-1/2}\alpha_p(U_0 \bot pU_0,T)\prod_{q \not=p} \alpha_q(H_2,T).\] \end{itemize} \end{proposition}
\begin{proof} (1) By Proposition \ref{prop.Siegel-main-theorem}, we have \[a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T)=8p^{-3}\pi^{4} \prod_{q } \alpha_q(S^{(p)},T).\] The assertion then follows from Lemma \ref{lem.existence-of-S}.

(2) Again by Proposition \ref{prop.Siegel-main-theorem}, we have \[a(\mathrm{genus} \ \Theta^{(4)}(S^{(p)}),T)=16p^{-4}\pi^{4} \det(2T)^{-1/2}\prod_{q } \alpha_q(S^{(p)},T).\] The assertion again follows from Lemma \ref{lem.existence-of-S}. \end{proof}
The following lemma has been essentially proved in \cite[Lemma 14.8]{Sh1}.
\begin{lemma} \label{lem.local-density-and-Siegel-series} Let $k$ be a positive integer with $k \ge n/2$ and let $T \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$. Then, we have \[\alpha_q(H_k,T)=2^{-\delta_{2k,n}}\widetilde b_q(T,q^{-k}).\] \end{lemma}
\begin{proposition}\label{prop.local-density-at-q} \label{prop2-5} Let $q \not=p$. \begin{itemize} \item[{\rm (1)}] Let $T \in {\mathcal H}_3({\mathbb Z}_q)^{\rm nd}$. Then, \[\alpha_q(H_2,T)=(1-q^{-2})^2 F_q(T,q^{-2}).\] \item[{\rm (2)}] Let $T \in {\mathcal H}_4({\mathbb Z}_q)^{\rm nd}$ and suppose that $\det(2T)$ is square in ${\mathbb Z}_q$. Then, \[\alpha_q(H_2,T)=(1-q^{-2})^2 q^{\mathrm{ord}_q(\det (2T))/2}F_q(T,q^{-3}).\] \end{itemize} \end{proposition}
\begin{proof} (1) By definition, $\gamma_q(T,X)=(1-X)(1-q^{2}X^2)$. Thus the assertion is easy to prove using Lemma \ref{lem.local-density-and-Siegel-series}.
(2) Again by definition, $\gamma_q(T,X)=(1-X)(1-q^{2}X^2)(1+q^2X)$. Hence, by Lemma \ref{lem.local-density-and-Siegel-series}, we have \[\alpha_q(H_2,T)=(1-q^{-2})^2 F_q(T,q^{-2}).\] Thus the assertion follows from the functional equation of $F_q(T,X)$. \end{proof}
\begin{corollary} \label{add-cor2-2} Let $T \in \mathcal{H}_4(\mathbb{Z}_q)^{\mathrm{nd}}$. Assume that $\det (2T)$ is a square in $\mathbb{Z}_q$ and that $\eta_q(T)=-1$. Then $F_q(T,q^{-3})=0$. \end{corollary}
\begin{proof} Since $\eta_q(T)=-1$ whereas $\eta_q(H_2)=1$, the form $T$ is not represented by $H_2$ over ${\mathbb Z}_q$, so $\alpha_q(H_2,T)=0$. Thus the assertion follows from Proposition \ref{prop2-5} (2). \end{proof}
The following proposition is one of key ingredients in the proof of our main result.
\begin{proposition}\label{prop.local-density-at-p} \begin{itemize} \item[{\rm (1)}] Let $T \in {\mathcal H}_3({\mathbb Z}_p)^{\mathrm{nd}}$. Then \[\alpha_p(U_0 \bot pU_0,T)=(1+p)(1+p^{-1})(1-\eta_p(T)).\] \item[{\rm (2)}] Let $T \in {\mathcal H}_4({\mathbb Z}_p)^{\mathrm{nd}}$. If $\alpha_p(U_0 \bot pU_0,T) \not=0$, then $\det (2T) =p^{2m}\xi^2$ with $m \in {\mathbb Z}_{>0}, \xi \in {\mathbb Z}_p^\times$ and $\eta_p(T)=-1$. Conversely, if $T$ satisfies this condition, then \begin{align*} \alpha_p(U_0 \bot pU_0, T)=2(1+p^{-1})^2 p^m.
\end{align*} \end{itemize} \end{proposition}
The above proposition may be proved by the explicit formula in \cite[THEOREM]{SH}. But to apply that formula we would have to compute a large number of quantities associated with $U_0 \bot pU_0$ and $T$. Therefore, we use a different method here.
We say that $S \in {\mathcal H}_n({\mathbb Z}_q)^{\rm nd}$ is {\it maximal} if there is no element $g \in M_n({\mathbb Z}_q)^{\mathrm{nd}}$ such that $\det g \in q{\mathbb Z}_q$ and $S[g^{-1}] \in {\mathcal H}_n({\mathbb Z}_q)$.
First we prove Proposition \ref{prop.local-density-at-p} (1). The following lemma is easy to prove.
\begin{lemma} \label{lem.maximal-element3} Let $q$ be an odd prime number. Suppose that $T \in {\mathcal H}_3({\mathbb Z}_q)^{\rm nd}$ is maximal. Then $T$ is one of the following matrices. \begin{itemize} \item[{\rm (1)}] $T \sim_{GL_3({\mathbb Z}_q)} \epsilon_1 \bot \epsilon_2 \bot \epsilon_3$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3)$. \item[{\rm (2)}] $T \sim_{GL_3({\mathbb Z}_q)} \epsilon_1 \bot \epsilon_2 \bot q\epsilon_3$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3)$. \item[{\rm (3)}] $T \sim_{GL_3({\mathbb Z}_q)} \epsilon_1 \bot q\epsilon_2 \bot q\epsilon_3$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3)$ such that $\chi_q(-\epsilon_2\epsilon_3)=-1$. \end{itemize} Moreover, we have \[ \eta_q(T)=\begin{cases}1 & \text{ in the case {\rm (1)},} \\ \chi_q(-\epsilon_1\epsilon_2) & \text{ in the case {\rm (2)},} \\ -1 & \text{ in the case {\rm (3)}}. \end{cases}\] \end{lemma}
\begin{lemma} \label{lem.explicit-primitive-locla-density3} Let $T \in {\mathcal H}_3({\mathbb Z}_p)^{\rm nd}$. \begin{itemize} \item[{\rm (1)}] Suppose that $T$ is non-maximal. Then $\beta_p(U_0 \bot pU_0,T)=0$. \item[{\rm (2)}] Suppose that $T$ is maximal and $\eta_p(T)=1$. Then $\beta_p(U_0 \bot pU_0,T)=0$. \item[{\rm (3)}] Suppose that $T$ is maximal and $\eta_p(T)=-1$. Then \[\beta_p(U_0 \bot pU_0,T)=2(1+p)(1+p^{-1}).\] \end{itemize} \end{lemma}
\begin{proof} We may suppose that $T=\epsilon_1 p^{a_1} \bot \epsilon_2 p^{a_2} \bot \epsilon_3 p^{a_3}$ with $a_1 \le a_2 \le a_3$ and $\epsilon_i \in {\mathbb Z}_p^\times \ (i=1,2,3)$.
Suppose that $T$ is non-maximal. Then, by Lemma \ref{lem.maximal-element3}, we have $a_1 \ge 1$, or $a_3 \ge 2$, or $a_1=0,a_2=a_3=1$ and $\chi_p(-\epsilon_2\epsilon_3)=1$.
Suppose that $\beta_p(U_0 \bot pU_0,T) \not=0$. Then, there is a primitive matrix $X=(x_{ij})_{1 \le i \le 4, 1 \le j \le 3} \in M_{4,3}({\mathbb Z}_p)$ such that \begin{align*} (U_0 \bot pU_0)[X] \equiv T \pmod{p^e {\mathcal H}_3({\mathbb Z}_p)} \tag{a} \end{align*} for an integer $e \ge 2$.
In the case $a_1 \ge 1$, (a) implies that ${\bf x}_j=(x_{i,j})_{1 \le i \le 2}$ is primitive for some $j=1,2,3$ and that \[U_0 [{\bf x}_j]\equiv 0 \pmod{p}.\] This is impossible because $\chi_p(\det U_0)=-1$.
In the case $a_3 \ge 2$, (a) implies that ${\bf y}=(x_{i,3})_{1 \le i \le 4}$ is primitive, and that \[(U_0 \bot pU_0) [{\bf y}]\equiv 0 \pmod{p^2}.\] This is also impossible for the same reason as above.
In the case $a_1=0,a_2=a_3=1$ and $\chi_p(-\epsilon_2\epsilon_3)=1$, (a) implies that ${\bf z}=(x_{ij})_{3 \le i \le 4, 2 \le j \le 3}$ belongs to $GL_2({\mathbb Z}_p)$ and \[pU_0[{\bf z}] \equiv \epsilon_2 p \bot \epsilon_3 p \pmod{p^2 {\mathcal H}_2({\mathbb Z}_p)}.\] This is also impossible because $\chi_p(-\det (pU_0)) \not=\chi_p(-\det (\epsilon_2 p \bot \epsilon_3 p))$. This proves the assertion (1).
Next suppose that $T$ is maximal and $\eta_p(T)=1$. Then, again by Lemma \ref{lem.maximal-element3}, $a_1=a_2=a_3=0$, or $a_1=a_2=0,a_3=1$ and $\chi_p(-\epsilon_1\epsilon_2)=1$. In the first case, since the ${\mathbb Z}_p/p{\mathbb Z}_p$-rank of $U_0 \bot pU_0$ is smaller than that of $T$, clearly we have $\beta_p(U_0 \bot pU_0,T)=0$. In the second case, since $\chi_p(-\det U_0) \not=\chi_p(-\det (\epsilon_1 \bot \epsilon_2))$, we also have $\beta_p(U_0 \bot pU_0,T)=0$. Finally suppose that $T$ is maximal and $\eta_p(T)=-1$. Let $X$ be a primitive matrix satisfying the condition (a).
First let $T=\epsilon_1 \bot \epsilon_2 \bot \epsilon_3 p$ with $\chi_p(-\epsilon_1\epsilon_2)=-1$.
Write $X=\begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}$ with $X_{11} , X_{21} \in M_{2}({\mathbb Z}_p)$ and $X_{12},X_{22} \in M_{21}({\mathbb Z}_p)$. Then we have \begin{align*} U_0[X_{11}] +pU_0[X_{21}] \equiv \epsilon_1 \bot \epsilon_2 \pmod{p^e}, \tag{b11} \end{align*} \begin{align*} {}^tX_{11}U_0X_{12}+{}^tX_{21}pU_0X_{22} \equiv 0 \pmod{p^e}, \tag{b12} \end{align*} \begin{align*} U_0[X_{12}]+pU_0[X_{22}] \equiv \epsilon_3 p \pmod{p^e}.\tag{b22} \end{align*}
Since $U_0$ and $\epsilon_1 \bot \epsilon_2$ are invertible in $M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$, so is $X_{11}$ by (b11). Hence, by (b12), we have \[X_{12}\equiv -p({}^tX_{11}U_0)^* \ {}^t\!X_{21}U_0X_{22} \pmod{p^e},\] and by (b22), we have \begin{align*} p(pU_0[({}^tX_{11}U_0)^*\ {}^t\!X_{21}U_0]+U_0)[X_{22}] \equiv \epsilon_3 p \pmod{p^e}, \tag{b$22'$} \end{align*} where $({}^tX_{11}U_0)^*$ is the inverse of ${}^tX_{11}U_0$ in $M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$.
For $X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$ put \[{\mathcal C}_e(X_{21})={\mathcal B}_e(U_0,-pU_0[X_{21}]+(\epsilon_1 +\epsilon_2)),\] and for $X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$ and $X_{11} \in {\mathcal C}_e(X_{21})$ put \[{\mathcal C}_e(X_{21},X_{11})={\mathcal B}_e(p(pU_0[({}^tX_{11}U_0)^*\ {}^t\!X_{21}U_0]+U_0),\epsilon_3 p).\] Then, by (b11) and (b$22'$), we have \begin{align*} &\#{\mathcal B}_e(U_0 \bot pU_0,T) \\ &=\sum_{X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)} \sum_{X_{11} \in {\mathcal C}_e(X_{21})} \# {\mathcal C}_e(X_{21},X_{11}). \end{align*} For non-negative integers $e' \le e$, let $$\pi_{e'}^e:M_{rs}({\mathbb Z}_p)/p^eM_{rs}({\mathbb Z}_p) \longrightarrow M_{rs}({\mathbb Z}_p)/p^{e'}M_{rs}({\mathbb Z}_p) $$
be the natural projection. For $Y \in M_{rs}({\mathbb Z}_p)/p^eM_{rs}({\mathbb Z}_p)$, we use the same symbol $Y$ to denote the element $\pi_{e'}^e(Y) \in M_{rs}({\mathbb Z}_p)/p^{e'}M_{rs}({\mathbb Z}_p)$. Then $\pi_{e'}^e$ induces a mapping from ${\mathcal C}_e(X_{21})$ to ${\mathcal C}_{e'}(X_{21})$, which will also be denoted by $\pi_{e'}^e$. We have \[{\mathcal C}_{1}(X_{21})={\mathcal B}_1(U_0,\epsilon_1 \bot \epsilon_2)\] for any integer $e \ge 1$ and $X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$. Hence, by the $p$-adic Newton approximation method, we easily see that \[\#{\mathcal C}_e(X_{21})=p^{e-1}\#{\mathcal C}_1(X_{21})=p^{e-1}\#{\mathcal B}_1(U_0,\epsilon_1 \bot \epsilon_2),\] and in particular \[\#{\mathcal B}_e(U_0,\epsilon_1 \bot \epsilon_2)=p^{e-1}\#{\mathcal B}_1(U_0,\epsilon_1 \bot \epsilon_2).\] This implies that \[p^{-1}\#{\mathcal B}_1(U_0,\epsilon_1 \bot \epsilon_2)=\beta_p(U_0,\epsilon_1 \bot \epsilon_2),\] and \[\#{\mathcal C}_e(X_{21})=p^e\beta_p(U_0,\epsilon_1 \bot \epsilon_2).\] Moreover, we have \[{\mathcal C}_2(X_{21},X_{11})={\mathcal B}_2(pU_0,\epsilon_3 p)\] for any integer $e \ge 2$, $X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)$ and $X_{11} \in {\mathcal C}_e(X_{21})$. Therefore, in a way similar to above, we have \[\#{\mathcal C}_e(X_{21},X_{11})=p^e\beta_p(pU_0,\epsilon_3 p).\] Hence, we have \begin{align*} &\#{\mathcal B}_e(U_0 \bot pU_0,T)\\ & =p^{2e}\beta_p(U_0,\epsilon_1 \bot \epsilon_2)\beta_p(pU_0,\epsilon_3p)\sum_{X_{21} \in M_2({\mathbb Z}_p)/p^eM_2({\mathbb Z}_p)} 1\\ &=p^{6e} \beta_p(U_0,\epsilon_1 \bot \epsilon_2)\beta_p(pU_0,\epsilon_3 p). \end{align*} Therefore,
we have \[\beta_p(U_0 \bot pU_0,T)=\beta_p(U_0,\epsilon_1 \bot \epsilon_2)\beta_p(pU_0,\epsilon_3 p).\] We note that $\beta_p(pU_0,\epsilon_3 p)=p\beta_p(U_0,\epsilon_3 ) =p(1+p^{-1})$. Thus the assertion follows from \cite[Theorem 5.6.3]{Ki2}. Next let $T=\epsilon_1 \bot \epsilon_2p \bot \epsilon_3 p$ with $\chi_p(-\epsilon_2\epsilon_3)=-1$. Then, in the same way as above, we have \[\beta_p(U_0 \bot pU_0,T)=\beta_p(U_0,\epsilon_1) \beta_p(pU_0,\epsilon_2 p \bot \epsilon_3 p).\] Thus the assertion follows from \cite[Theorem 5.6.3]{Ki2}. \end{proof}
A non-degenerate ${m \times m}$ matrix $D=(d_{ij})$ with entries in ${\mathbb Z}_q$ is said to be {\it reduced} if $D$ satisfies the following two conditions:\begin{enumerate} \item[(R-1)] For $i=j$, $d_{ii}=q^{e_{i}}$ with a non-negative integer $e_i$; \vspace*{1mm}
\item[(R-2)] For $i\ne j$, $d_{ij}$ is a non-negative integer satisfying $ d_{ij} \le q^{e_j}-1$ if $i <j$ and $d_{ij}=0$ if $i >j$. \end{enumerate} It is well known that we can take the set of all reduced matrices as a complete set of representatives of $GL_m({\mathbb Z}_q) \backslash M_m({\mathbb Z}_q)^{\rm nd}.$
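For example, for $m=2$ the reduced matrices $D$ with $\det D=q$ are exactly $\begin{pmatrix} q & 0 \\ 0 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & d \\ 0 & q \end{pmatrix}$ with $0 \le d \le q-1$; these $q+1$ matrices form the usual set of representatives of $GL_2({\mathbb Z}_q) \backslash \{g \in M_2({\mathbb Z}_q)^{\rm nd} \mid \mathrm{ord}_q(\det g)=1\}$.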
\begin{lemma}\label{lem.uniquenes-of-maximal-element3} Let $q$ be an odd prime number. Let $T \in {\mathcal H}_3({\mathbb Z}_q)^{\rm nd}$ and suppose that $\eta_q(T)=-1$. Then there is a unique element $g \in GL_3({\mathbb Z}_q) \backslash M_3({\mathbb Z}_q)^{\rm nd}$ such that $T[g^{-1}]$ is maximal. \end{lemma}
\begin{proof} Since $\eta_q(T)=-1$, without loss of generality we assume that $T=\epsilon_1 q^{a_1} \bot \epsilon_2 q^{a_2} \bot \epsilon_3 q^{a_3}$ with $a_1$ even, and $a_2$ or $a_3$ is odd.
Let $g \in M_3({\mathbb Z}_q)^{\rm nd}$ such that $T[g^{-1}]$ is maximal. We may assume that \[g=\begin{pmatrix} q^{e_1} & d_{12} & d_{13} \\ 0 & q^{e_2} & d_{23} \\ 0 & 0 & q^{e_3} \end{pmatrix}\] satisfies the conditions (R-1) and (R-2).
Then we have \begin{align*} &T[g^{-1}]=\\ &{\footnotesize \begin{pmatrix} \epsilon_1q^{a_1-2e_1} & q^{-e_1}\epsilon_1q^{a_1} d_{12}^*& q^{-e_1}\epsilon_1q^{a_1}d_{13}^*\\ q^{-e_1}\epsilon_1q^{a_1} d_{12}^* & \epsilon_1q^{a_1}(d_{12}^*)^2+ \epsilon_2q^{a_2-2e_2}& d_{12}^*\epsilon_1q^{a_1}d_{13}^*+q^{-e_2}\epsilon_2q^{a_2}d_{23}^*\\ d_{13}^*\epsilon_1q^{a_1-e_1} & d_{13}^*\epsilon_1q^{a_1}d_{12}^*+q^{-e_2}\epsilon_2q^{a_2}d_{23}^* & \epsilon_1q^{a_1}(d_{13}^*)^2+\epsilon_2 q^{a_2}(d_{23}^*)^2+\epsilon_3 q^{a_3-2e_3} \end{pmatrix}}, \end{align*} where
\\ $d_{12}^*=-q^{-e_1-e_2}d_{12},d_{13}^*=q^{-e_1-e_2-e_3}(d_{12}d_{23}-q^{e_2}d_{13})$ and $d_{23}^*=-q^{-e_2-e_3}d_{23}.$
\\ First assume $a_2$ is even and $a_3$ is odd. Since $T[g^{-1}]$ is maximal, we have $e_1=a_1/2$, and hence $q^{-e_1}\epsilon_1q^{a_1} d_{12}^*=-\epsilon_1q^{-e_2}d_{12}$. Then, by (R-2), we have $d_{12}=0$. Hence, we have $q^{-e_1}\epsilon_1q^{a_1}d_{13}^*=-\epsilon_1q^{-e_3}d_{13}$, and again by (R-2), we have $d_{13}=0$. We also have $\epsilon_1q^{a_1}(d_{12}^*)^2+ \epsilon_2q^{a_2-2e_2}=\epsilon_2q^{a_2-2e_2}$, and by the maximality condition, we have $e_2=a_2/2$ and again by (R-2), we have $d_{23}=0$. This proves the uniqueness of $g$. Next assume that $a_2$ and $a_3$ are odd. Since $\eta_q(T)=-1$, we have $\chi_q(-\epsilon_2\epsilon_3)=-1$. In the same way as above, we can prove the uniqueness of $e_1=a_1/2$ and $d_{12}=d_{13}=0$. Then, $\epsilon_1q^{a_1}(d_{12}^*)^2+ \epsilon_2q^{a_2-2e_2}=\epsilon_2q^{a_2-2e_2}$, and by the maximality condition, we have $e_2=(a_2-1)/2$. Then, we have $d_{12}^*\epsilon_1q^{a_1}d_{13}^*+q^{-e_2}\epsilon_2q^{a_2}d_{23}^*=-\epsilon_2q^{-e_3+1}d_{23}$, and therefore, by (R-2), we have $-q^{-e_3+1}d_{23} \in {\mathbb Z}_q^\times$ if $d_{23} \not=0$. Hence, we have \[\epsilon_1q^{a_1}(d_{13}^*)^2+\epsilon_2 q^{a_2}(d_{23}^*)^2+\epsilon_3 q^{a_3-2e_3}=q^{-1}\epsilon_2(q^{-e_3+1}d_{23})^2+\epsilon_3q^{a_3-2e_3},\] and it does not belong to ${\mathbb Z}_q$ because $\chi_q(-\epsilon_2\epsilon_3)=-1$. This implies that $d_{23}=0$ and the assertion holds. \end{proof}
\noindent {\bf Proof of Proposition \ref{prop.local-density-at-p} \,(1).} The assertion follows from Lemmas \ref{lem.local-density-via-primitive-local-density}, \ref{lem.explicit-primitive-locla-density3}, and \ref{lem.uniquenes-of-maximal-element3}.
Next we prove Proposition \ref{prop.local-density-at-p} (2). The following lemma is easy to prove.
\begin{lemma} \label{lem.maximal-element4} Let $q$ be an odd prime number. Suppose that $T \in {\mathcal H}_4({\mathbb Z}_q)^{\rm nd}$ is maximal. Then $T$ is one of the following matrices. \begin{itemize} \item[{\rm (1)}] $T \sim_{GL_4({\mathbb Z}_q)} \epsilon_1 \bot \epsilon_2 \bot \epsilon_3 \bot \epsilon_4$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3,4)$. \item[{\rm (2)}] $T \sim_{GL_4({\mathbb Z}_q)} \epsilon_1 \bot \epsilon_2 \bot \epsilon_3 \bot q\epsilon_4$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3,4)$. \item[{\rm (3)}] $T \sim_{GL_4({\mathbb Z}_q)} \epsilon_1 \bot \epsilon_2 \bot q\epsilon_3 \bot q\epsilon_4$ with $\epsilon_i \in {\mathbb Z}_q^\times \ (i=1,2,3,4)$ such that \\ $\chi_q(-\epsilon_3\epsilon_4)=-1$. \end{itemize} Moreover, in case {\rm (3)}, we have $\eta_q(T)=-1 $.
\end{lemma} \begin{lemma}\label{lem.explicit-primitive-locla-density4} Let $T \in {\mathcal H}_4({\mathbb Z}_p)^{\rm nd}$ be maximal. If $\alpha_p(U_0 \bot pU_0,T) \not=0$, then $$T \sim_{GL_4({\mathbb Z}_p)} \epsilon_1 \bot \epsilon_2 \bot p\epsilon_3 \bot p\epsilon_4$$ such that $\epsilon_1 \epsilon_2\epsilon_3 \epsilon_4 \in ({\mathbb Z}_p^\times)^2$ and $\chi_p(-\epsilon_1\epsilon_2)=\chi_p(-\epsilon_3\epsilon_4)=-1$. Conversely, if $T$ satisfies these conditions, then \begin{align*} \alpha_p(U_0 \bot pU_0, T)=2(1+p^{-1})^2 p.
\end{align*} \end{lemma} \begin{proof}
By the assumption, $\det (2T)$ is divisible by $p^2$, and hence $T$ must be as in case (3) of Lemma \ref{lem.maximal-element4}. Moreover, in this case we have $p^{-2}\det (2T)=\prod_{i=1}^4 \epsilon_i=\epsilon^2$ with $\epsilon \in {\mathbb Z}_p^\times$. Hence, we have $\chi_p(-\epsilon_1\epsilon_2)=\chi_p(-\epsilon_3\epsilon_4)=-1$. Therefore, $T \sim_{GL_4({\mathbb Z}_p)} U_0 \bot pU_0$, and the second assertion follows from \cite[Theorem 6.8.1]{Ki2}. \end{proof}
\begin{lemma}\label{lem.uniquenes-of-maximal-element4} Let $q$ be an odd prime number. Let $T \in {\mathcal H}_4({\mathbb Z}_q)^{\rm nd}$ and suppose that $\eta_q(T)=-1$. Then there is a unique element $g \in GL_4({\mathbb Z}_q) \backslash M_4({\mathbb Z}_q)^{\rm nd}$ such that $T[g^{-1}]$ is maximal. \end{lemma} \begin{proof} The assertion can be proved in the same manner as Lemma \ref{lem.uniquenes-of-maximal-element3}. \end{proof}
\noindent
{\bf Proof of Proposition \ref{prop.local-density-at-p}\,(2).} The assertion follows from Lemmas \ref{lem.local-density-via-primitive-local-density}, \ref{lem.explicit-primitive-locla-density4}, and \ref{lem.uniquenes-of-maximal-element4}.
\\
{\bf Proof of Theorem \ref{th.main-result}.}
(1) If $\eta_p(T)=1$, by Theorem \ref{th.limit-of-FC-of-Eisenstein} (1) and Propositions \ref{prop.mass-formula} and \ref{prop.local-density-at-p}, we have \[a(\widetilde E_2^{(3)},T)=a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T)=0.\]
Suppose that $\eta_p(T)=-1$. Then, we have \[a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T)=16 (1+p)^2 p^{-4} \pi^4 \prod_{q \not=p} (1-q^{-2})^2 \prod_{q \not=p} F_q(T,q^{-2}).\] We have \[\prod_{q \not=p} (1-q^{-2})^2=\zeta(2)^{-2}(1-p^{-2})^{-2}=\frac{36}{\pi^4(1-p^{-2})^2}.\] Hence, we have \[a(\mathrm{genus} \ \Theta^{(3)}(S^{(p)}),T)=\frac{576}{(p-1)^2} \prod_{q \not=p} F_q(T,q^{-2}).\] This coincides with $a(\widetilde E_2^{(3)},T)$ in view of Theorem \ref{th.limit-of-FC-of-Eisenstein} (1).
(2) If $p^{-2}\det (2T)$ is not a square integer or $\eta_p(T)=1$, by Theorem \ref {th.limit-of-FC-of-Eisenstein} (2), and Propositions \ref{prop.mass-formula} and \ref{prop.local-density-at-p}, we have \[a(\widetilde E_2^{(4)},T)=a(\mathrm{genus} \ \Theta^{(4)}(S^{(p)}),T)=0.\] Suppose that $p^{-2}\det (2T)$ is a square integer and $\eta_p(T)=-1$. Then, we have \[a(\mathrm{genus} \ \Theta^{(4)}(S^{(p)}),T)=32 (1+p)^2 p^{-4} \pi^4 \prod_{q \not=p} (1-q^{-2})^2 \prod_{q \not=p} F_q(T,q^{-3}).\] We have \[\prod_{q \not=p} (1-q^{-2})^2=\zeta(2)^{-2}(1-p^{-2})^{-2}=\frac{36}{\pi^4(1-p^{-2})^2}.\] Hence, we have \[a(\mathrm{genus} \ \Theta^{(4)}(S^{(p)}),T)=\frac{1152}{(p-1)^2} \prod_{q \not=p} F_q(T,q^{-3}).\] This coincides with $a(\widetilde E_2^{(4)},T)$ in view of Theorem \ref{th.limit-of-FC-of-Eisenstein} (2).
These complete the proof of Theorem \ref{th.main-result}.
\subsection{General case} To prove the main theorem, we must consider the case $n\geq 5$. For the Siegel operator $\Phi$, we have $$ \begin{cases} \Phi (\widetilde{E}_2^{(n)})=\widetilde{E}_2^{(n-1)},\\ \Phi (\text{genus}\,\Theta^{(n)}(S^{(p)}))=\text{genus}\,\Theta^{(n-1)}(S^{(p)}). \end{cases} $$ From this, it is sufficient to prove the following proposition.
\begin{proposition} \label{n5} \quad Assume that $n\geq 5$. For any $T\in\Lambda_n^+$, we have $$ (*)\qquad\qquad \qquad a(\widetilde{E}_2^{(n)},T)=a({\rm genus}\,\Theta^{(n)}(S^{(p)}),T)=0. $$ \end{proposition}
\begin{proof} First we consider ${\rm genus}\,\Theta^{(n)}(S^{(p)})$. Since $S^{(p)}\in\Lambda_4^+$ and $T\in\Lambda_n^+$\;$(n\geq 5)$, $a(\theta^{(n)}(S_i;Z),T)=0$ holds for each theta series $\theta^{(n)}(S_i;Z)$. Hence, we obtain $$ a({\rm genus}\,\Theta^{(n)}(S^{(p)}),T)=0. $$ Next we investigate $\widetilde{E}_2^{(n)}$ and prove $$ \label{vanishE2} (**)\qquad\qquad\qquad \lim_{m\to\infty}a(E_{k_m}^{(n)},T)=a(\widetilde{E}_2^{(n)},T)=0 $$ for any $T\in\Lambda_n^+$. We recall the formula for $a(E_k^{(n)},T)$ given in Proposition \ref{prop.FC-Siegel}.
We extract the following factor in the formula for $a(E_k^{(n)},T)$: $$ A_{k,n}(T):=\frac{k}{B_k}\cdot\prod_{i=1}^{[n/2]}\frac{k-i}{B_{2k-2i}}\cdot \begin{cases} \frac{B_{k-n/2,\chi_T}}{k-n/2} & (\text{if $n$ is even}),\\ 1 & (\text{if $n$ is odd}). \end{cases} $$ To prove $(**)$, it suffices to show that $$ (\ddagger)\qquad\qquad\qquad \lim_{m\to\infty}A_{k_m,n}(T)=0\quad (\text{$p$-adically}). $$
First we assume that $p>3$. By Kummer's congruence for Bernoulli numbers, the factors $$ \frac{k_m}{B_{k_m}}\;\;\text{and}\;\;\frac{k_m-i}{B_{2k_m-2i}}\;\;(1\leq i \leq [n/2] \;\;\text{with}\;\; i \not\equiv 2 \pmod{(p-1)}) $$ in $A_{k_m,n}(T)$ have $p$-adic limits when $m\longrightarrow \infty$.
\\ We focus our attention on the factors $$ \frac{k_m-i}{B_{2k_m-2i}}\;\;\text{for}\;\; i\;\;\text{with}\quad i \equiv 2 \pmod{(p-1)}. $$ In these cases, by the von Staudt-Clausen theorem, we obtain $$ \text{ord}_p\left(\frac{k_m-i}{B_{2k_m-2i}}\right)\geq 1. $$ In particular, in the case of $i=2\leq [n/2]$, the following identity holds: $$ \text{ord}_p\left(\frac{k_m-2}{B_{2k_m-4}}\right)=m. $$ (Such $i$ also appears when $n=4$. See Remark \ref{cacellation}.) Consequently, there is a constant $C$ such that $$ \text{ord}_p\left(\frac{k_m}{B_{k_m}}\cdot\prod_{i=1}^{[n/2]}\frac{k_m-i}{B_{2k_m-2i}}\right)\geq C+m $$ for sufficiently large $m$. Regarding the factor $B_{k_m-n/2,\chi_T}/(k_m-n/2)$, we use Carlitz's result (\cite{Carlitz2}) for the generalized Bernoulli numbers in the case of quadratic Dirichlet characters. We have \begin{align*} \text{ord}_p\left( \frac{B_{k_m-n/2,\chi_T}}{k_m-n/2}\right)
& =\text{ord}_p\left(B_{k_m-n/2,\chi_T}\right)
-\text{ord}_p\left(k_m-n/2\right)\\
& \geq -1-\text{ord}_p\left(2-n/2\right), \end{align*} for sufficiently large $m$. By assumption $n\geq 5$, the values $$ \text{ord}_p\left( \frac{B_{k_m-n/2,\chi_T}}{k_m-n/2}\right) $$ have a lower bound for sufficiently large $m$.
\\ Combining these results, we can prove $(\ddagger)$ when $p>3$.
Next we consider the case $p=3$. By the von Staudt-Clausen theorem, we obtain $$ \text{ord}_3\left(\frac{k_m}{B_{k_m}}\right)=1,\quad \text{ord}_3\left(\frac{k_m-i}{B_{2k_m-2i}}\right)=\text{ord}_3(k_m-i)+1\geq 1 \quad (1\leq i\leq [n/2]). $$ In particular, when $i=2\leq [n/2]$, the following identity holds: $$ \text{ord}_3\left(\frac{k_m-2}{B_{2k_m-4}}\right)=m. $$ Since we know that $\text{ord}_3(B_{k_m-n/2,\chi_T}/(k_m-n/2))$ has a lower bound as in the case $p>3$, the statement $(\ddagger)$ is also proven in the case $p=3$.
This completes the proof of Proposition \ref{n5}. \end{proof}
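As a numerical illustration of the identity $\mathrm{ord}_p\bigl((k_m-2)/B_{2k_m-4}\bigr)=m$ used in the above proof, take $p=3$: the first two exponents are $k_1=p+1=4$ and $k_2=p^2-p+2=8$, and indeed $(k_1-2)/B_{2k_1-4}=2/B_4=-60$ has $3$-adic order $1$, while $(k_2-2)/B_{2k_2-4}=6/B_{12}=-16380/691$ has $3$-adic order $2$.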
\noindent \begin{remark} \label{cacellation} \quad An exceptional factor $(k_m-2)/\,B_{2k_m-4}$ in the product \\
$\prod (k_m-i)/\,B_{2k_m-2i}$ also appears when $n=4$. However, in this case, cancellation occurs between $$ \frac{k_m-2}{B_{2k_m-4}}\quad \text{and}\quad \frac{B_{k_m-2,\chi_T}}{k_m-2}. $$
\end{remark}
By combining Corollary \ref{cor.main-result2} and Proposition \ref{n5}, we have proved our main result, Theorem \ref{statementmain}.
\section{Applications}
\subsection{Modular forms on $\boldsymbol{\Gamma_0^{(n)}(p)}$} In \cite{Serre}, Serre proved the following result.
\begin{theorem} {\rm (Serre \cite{Serre})}\quad Let $p$ be an odd prime number and $\mathbb{Z}_{(p)}$ the local ring consisting of $p$-integral rational numbers. For any $f\in M_2(\Gamma^{(1)}_0(p))_{\mathbb{Z}_{(p)}}$, there is a modular form $g\in M_{p+1}(\Gamma^{(1)})_{\mathbb{Z}_{(p)}}$ satisfying $$ f \equiv g \pmod{p}. $$ \end{theorem} An attempt to generalize this result to the case of Siegel modular forms can be found in \cite{BN2}. \\ Here we consider the first $p$-adic approximation of $\widetilde{E}_2^{(n)}$, that is, $$ E_{k_1}^{(n)}=E_{p+1}^{(n)}. $$
\begin{theorem} Let $p$ be a prime number such that $p>n$. The modular form $\widetilde{E}_2^{(n)}\in M_2(\Gamma_0^{(n)}(p))_{\mathbb{Z}_{(p)}}$ is congruent to $E_{p+1}^{(n)}\in M_{p+1}(\Gamma^{(n)})_{\mathbb{Z}_{(p)}}$ mod $p$: $$ \widetilde{E}_2^{(n)} \equiv E_{p+1}^{(n)} \pmod{p}. $$ \end{theorem} The above result provides an example of a congruence of Serre type in the case of Siegel modular forms. (For $n=2$, this theorem has already been proved in \cite[\sc Proposition 4]{KN}. The $p$-integrality of $\widetilde{E}_2^{(n)}$ comes from the explicit formula for the Fourier coefficients.)
\subsection{Theta operators} \label{ThetaOp}
For a Siegel modular form $\displaystyle F=\sum a(F,T)q^T$, we define $$ \varTheta (F):=\sum a(F,T)\cdot\text{det}(T)\,q^T\in\mathbb{C}[q_{ij}^{-1},q_{ij}][\![q_1,\ldots,q_n]\!]. $$ The operator $\varTheta$ is called the {\it theta operator}. This operator was first studied by Ramanujan in the case of elliptic modular forms, and the generalization to the case of Siegel modular forms can be found in \cite{BN1}.
If a Siegel modular form $F$ satisfies $$ \varTheta (F) \equiv 0 \pmod{N}, $$ we call it an element of the space of the {\it mod $N$ kernel of the theta operator}. For example, Igusa's cusp form $\chi_{35}\in M_{35}(\Gamma^{(2)})_{\mathbb{Z}}$ satisfies the congruence relation $$ \varTheta (\chi_{35}) \equiv 0 \pmod{23}\qquad\qquad (\text{cf. \cite{K-K-N}}), $$ namely, $\chi_{35}$ is an element of the space of mod $23$ kernel of the theta operator.
\begin{theorem} \label{modpaquare} Assume that $p\geq 3$. Then we have $$ \varTheta (E_{p+1}^{(3)}) \equiv 0 \pmod{p},\qquad \varTheta (E_{p^2-p+2}^{(4)}) \equiv 0 \pmod{p^2}. $$ The second congruence shows that the Siegel Eisenstein series $E_{p^2-p+2}^{(4)}$ is an element of the mod $p^2$ kernel of the theta operator. \end{theorem}
\begin{proof} To prove the first congruence relation, we consider the first approximation of $\widetilde{E}_2^{(3)}$: $$ \widetilde{E}_2^{(3)} \equiv E_{2+p-1}^{(3)}=E_{p+1}^{(3)} \pmod{p}. $$ If $T\in\Lambda^+_3$ satisfies $a({\rm genus}\,\Theta^{(3)}(S^{(p)}),T)=a(\widetilde{E}_2^{(3)},T)\ne 0$, then, by Lemma \ref{lem.explicit-primitive-locla-density3}, we have $\text{det}(2T) \equiv 0 \pmod{p}$. This fact implies that $$ \varTheta (E_{p+1}^{(3)}) \equiv \varTheta(\widetilde{E}_2^{(3)}) = \varTheta ({\rm genus}\,\Theta^{(3)}(S^{(p)}) )\equiv 0 \pmod{p}. $$ We consider the second congruence relation. Considering the second $p$-adic approximation of $\widetilde{E}_2^{(4)}$, we obtain $$ \widetilde{E}_2^{(4)} \equiv E_{2+(p-1)p}^{(4)}=E_{p^2-p+2}^{(4)} \pmod{p^2}. $$ Therefore, it is sufficient to prove that $$ \varTheta (\widetilde{E}_2^{(4)}) \equiv 0 \pmod{p^2}. $$ Assume that $a(\widetilde{E}_2^{(4)},T)\ne 0$ for $T\in\Lambda_4^+$. Then $\text{det}(2T)$ is a square by Theorem \ref{th.limit-of-FC-of-Eisenstein}, (2). Under this condition, if we further assume that $\text{det}(2T)\not\equiv 0 \pmod{p}$, then we have $a(\widetilde{E}_2^{(4)},T)=0$ by Theorem \ref{th.limit-of-FC-of-Eisenstein}, (2.2). This is a contradiction. Therefore, we have $\text{det}(2T) \equiv 0 \pmod{p}$, and hence, since $\text{det}(2T)$ is a square, $\text{det}(2T) \equiv 0 \pmod{p^2}$. This means that,
if $a(\widetilde{E}_2^{(4)},T)\ne 0$ for $T\in\Lambda_4^+$, then $T\in\Lambda_4^+$ satisfies
$\text{det}(2T) \equiv 0 \pmod{p^2}$. This implies
$$
\varTheta (\widetilde{E}_2^{(4)}) \equiv 0 \pmod{p^2}
$$ and completes the proof of Theorem \ref{modpaquare}. \end{proof}
\noindent \begin{remark} The first congruence relation in Theorem \ref{modpaquare} can also be shown as a special case of the main theorem in \cite[Theorem 2.4]{N-T}. \end{remark}
\subsection{Numerical examples}
In this section, we provide examples of Fourier coefficients of $\widetilde{E}_2^{(n)}$ and \\ genus\,$\Theta^{(n)}(S^{(p)})$, which confirm the validity of the identity in our main result.
\\ \fbox{\textbf{Case $\boldsymbol{n=3:}$}}
\\ We take $$ p=11,\quad T={\scriptsize \begin{pmatrix} 1 & 0 & \tfrac{1}{2} \\
0 & 1 & 0 \\
\tfrac{1}{2} & 0 & 3 \end{pmatrix}}
\in \Lambda_3^+\quad\text{with}\quad\text{det}(T)=11/4, $$ and calculate $a(\widetilde{E}_2^{(3)},T)$ and $a(\text{genus}\,\Theta^{(3)}(S^{(p)}),T)$.
\\ \textbf{Calculation of} $\boldsymbol{a(\widetilde{E}_2^{(3)},T)}$:
\\ By Theorem \ref{th.limit-of-FC-of-Eisenstein}, (1), $$ a(\widetilde{E}_2^{(3)},T)=\frac{576}{(1-11)^2}\cdot\lim_{m\to\infty}(1-11^{10\cdot 11^{m-1}})= \frac{144}{25}. $$ \textbf{Calculation of} $\boldsymbol{a({\rm genus}\,\Theta^{(3)}(S^{(11)}),T)}$:
\\ We can take three representatives of $GL_4(\mathbb{Z})$-equivalence classes in genus$(S^{(11)})$: $$ {\scriptsize S_1^{(11)}:=\begin{pmatrix} 1 & 0 & \tfrac{1}{2} & 0 \\ 0 & 1 & 0 & \tfrac{1}{2} \\
\tfrac{1}{2} & 0 & 3 & 0 \\ 0 & \tfrac{1}{2} & 0 & 3
\end{pmatrix},\; S_2^{(11)}:=\begin{pmatrix} 1 & \tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & 1 & 0 & \tfrac{1}{2} \\
\tfrac{1}{2} & 0 & 4 & 2 \\ \tfrac{1}{2} & \tfrac{1}{2} & 2 & 4
\end{pmatrix},\; S_3^{(11)}:=\begin{pmatrix} 2 & 1 & \tfrac{1}{2} & \tfrac{1}{2} \\ 1 & 2 & 0 & \tfrac{1}{2} \\
\tfrac{1}{2} & 0 & 2 & 1 \\ \tfrac{1}{2} & \tfrac{1}{2} & 1 & 2
\end{pmatrix}.
} $$ Moreover we have $$ a(S_1^{(11)},S_1^{(11)})=32,\quad a(S_2^{(11)},S_2^{(11)})=72, \quad a(S_3^{(11)},S_3^{(11)})=24.\qquad (\text{cf.\;\cite{Nipp}}). $$ Therefore, \begin{align*} & a(\text{genus}\,\Theta^{(3)}(S^{(11)}),T)=\\ &[a(\theta^{(3)}(S_1^{(11)};Z),T)/32+a(\theta^{(3)}(S_2^{(11)};Z),T)/72+a(\theta^{(3)}(S_3^{(11)};Z),T)/24]\\ & \cdot [(1/32)+(1/72)+(1/24)]^{-1}. \end{align*} Direct calculations show that $$ a(\theta^{(3)}(S_1^{(11)};Z),T)=16,\qquad a(\theta^{(3)}(S_2^{(11)};Z),T)=a(\theta^{(3)}(S_3^{(11)};Z),T)=0 $$ for the above $T\in\Lambda_3^+$. Hence, we have $$ a(\text{genus}\,\Theta^{(3)}(S^{(11)}),T)=16 \times\frac{9}{25}=\frac{144}{25}. $$ Of course, this value is consistent with the value obtained using the equations given in Propositions \ref{prop.mass-formula}\, (1) and \ref{prop.local-density-at-p}\, (1):
\begin{align*} a(\text{genus}\,\Theta^{(3)}(S^{(11)}),T) & = \frac{8}{11^3}\pi^4
\cdot \alpha_{11}(U_0\,\bot\,11\cdot U_0,T)
\cdot \prod_{q\ne 11}\alpha_q(H_2,T) \\
& = \frac{8}{11^3}\pi^4
\cdot 2(1+11)(1+11^{-1})
\cdot \frac{36}{\pi^4(1-11^{-2})^2}\\
&=\frac{144}{25}. \end{align*}
Further numerical examples for the case $n=3$ can be found in \cite{Okuma}.
\\
\fbox{\textbf{Case $\boldsymbol{n=4:}$}}
\\ The values $a(\text{genus}\,\Theta^{(4)}(S^{(p)}),T)$ at $T=S^{(p)}$ can be calculated as $$ a(\text{genus}\,\Theta^{(4)}(S^{(p)}),S^{(p)})=\left(\sum_{i=1}^d\frac{1}{a(S_i,S_i)}\right)^{-1}=M(S^{(p)})^{-1} $$ and the values $M(S^{(p)})$ for small $p$ can be found in \cite{Nipp}:
\begin{table}[hbtp] \begin{center}
\begin{tabular}{c | cccccc} $p$ & $3$ & $5$ & $7$ & $11$ & $13$ & $\cdots$ \\ \hline
$M(S^{(p)})^{-1}$ & $288$ & $72$ & $32$ & $\frac{288}{25}$& $8$ & $\cdots$ \end{tabular} \end{center} \end{table}
\\ These numerical data can be verified to be consistent with those obtained from the formula $$ a(\widetilde{E}_2^{(4)},S^{(p)})=\frac{1152}{(p-1)^2}, $$ which is a result of Theorem \ref{th.limit-of-FC-of-Eisenstein} (2.2).
\end{document} | arXiv |
Human ancestry identification under resource constraints -- what can one chromosome tell us about human biogeographical ancestry?
Tanjin T. Toma, Jeremy M. Dawson & Donald A. Adjeroh
While continental-level ancestry is relatively simple to determine using genomic information, distinguishing between individuals from closely associated sub-populations (e.g., from the same continent) is still a difficult challenge.
We study the problem of predicting human biogeographical ancestry from genomic data under resource constraints. In particular, we focus on the case where the analysis is constrained to using single nucleotide polymorphisms (SNPs) from just one chromosome. We propose methods to construct such ancestry informative SNP panels using correlation-based and outlier-based methods.
We assessed the performance of the proposed SNP panels derived from just one chromosome, using data from the 1000 Genome Project, Phase 3. For continental-level ancestry classification, we achieved an overall classification rate of 96.75% using 206 single nucleotide polymorphisms (SNPs). For sub-population level ancestry prediction, we achieved average pairwise binary classification rates as follows: subpopulations in Europe: 76.6% (58 SNPs); Africa: 87.02% (87 SNPs); East Asia: 73.30% (68 SNPs); South Asia: 81.14% (75 SNPs); America: 85.85% (68 SNPs).
Our results demonstrate that one single chromosome (in particular, Chromosome 1), if carefully analyzed, could hold enough information for accurate prediction of human biogeographical ancestry. This has significant implications in terms of the computational resources required for analysis of ancestry, and in the applications of such analyses, such as in studies of genetic diseases, forensics, and soft biometrics.
Accurate inference of biogeographical ancestry is important for various application areas. For instance, population stratification can confound the relationship between a genetic marker and disease. Identifying ancestry informative markers (AIMs) in the genome is essential for detecting such stratification in case-control association studies of complex diseases, such as cancer, diabetes, neurodegenerative diseases (e.g., Alzheimer's disease), and cardiovascular diseases [1,2,3]. Measuring genetic ancestry has also been a focus in the forensic science community. For routine forensic identification of ancestry, a small number of genetic markers is needed that can be tested quickly and cheaply [4, 5]. Reliable estimation of biogeographic ancestry is also a key procedure in studies of admixed populations. Several AIM sets have been proposed for estimating the admixture between given ancestral populations, for instance, the genetic contributions of Africans and Europeans to African American populations, and the contributions of Native Americans and Africans or African Americans to Latino populations [6,7,8]. Ancestry estimation also plays a significant role in guiding criminal investigations [9, 10]. Furthermore, many studies are investigating the association between ancestry and certain types of diseases [11,12,13]. Thus, analysis of genetic ancestry is a vast research area with numerous applications, which has attracted the use of a diverse array of techniques.
One aim in studies on human genetic ancestry is to identify sets of ancestry informative markers (AIMs) by analyzing DNA sequences from different chromosomes collected from the population samples under study. Most widely used AIMs are based on single nucleotide polymorphisms (SNPs) [14, 15] which demonstrate superior ability in predicting biogeographical origin of an unknown individual compared to other markers, such as short tandem repeats (STRs) [16]. Although a large number of SNPs can provide nearly accurate ancestry information for multiple geographic regions, a small but robust set of SNPs may be more desirable for certain applications [17]. Several published SNP panels have focused on distinguishing ancestral origins for individuals from different continental regions, e.g., Europe, America, Africa and East Asia [18], or between people from widely-separated global populations [1]. Some also proposed small SNP panels, typically ranging from the teens to hundreds of SNPs, which can estimate continental genetic ancestry relatively well [18]. However, very few studies have focused on identifying SNP panels for sub-continental ancestry estimation, a known challenging problem, given the difficulties of using small SNP panels in distinguishing individuals from closely related populations [19].
Several studies on ancestry identification have demonstrated that many globally distributed populations can generally be distinguished by examining differences in allele frequencies, using the fixation index, widely known as Fst [20]. These studies identified that thousands of single nucleotide polymorphisms (SNPs) distributed throughout the human genome have significant differences in allele frequencies between two or more continental populations [21, 22]. Thus, a small set of SNPs (e.g., a few hundred) can be used to separate individuals with different continental origins using the Fst feature [23,24,25]. However, such panels of SNPs are less informative in detecting sub-continental differences in closely related populations [6, 19, 26,27,28,29,30]. Apart from Fst based ancestry estimation, techniques based on principal component analysis (PCA) [31,32,33] have widespread applications. A typical example of this class of methods is EIGENSTRAT [31]. These methods represent genetic variations by principal component vectors; however, they are not highly efficient due to the need for a large number of SNPs (thousands to millions) to calculate the principal component vectors. Besides, many studies have developed small panels of SNPs to distinguish ancestral origins from a large number of populations, for example, 73 populations (Kidd et al. [34]) and 119 populations (Kidd et al. [17]). However, they used unsupervised learning (clustering) methods, such as STRUCTURE [35], to determine which populations cluster together and thus observed the ability of a SNP panel to infer ancestry.
Important progress has been made in the use of genomic information for ancestry detection [36,37,38,39], however, significant challenges still remain. Although a panel with a small number of SNPs can produce sufficiently accurate continental-level ancestry classification, reliable sub-continental population detection using only limited number of marker SNPs is still a major challenge. Significant research is still required to identify sets of ancestry informative SNPs (AISNPs) that can accurately distinguish closely related sub-populations, e.g., those from the same continent. This is a difficult multi-class classification challenge, with only a few attempts at the problem. This problem is also related to the issue of separating admixture populations [7, 8, 40, 41], and recent approaches that have used GWAS (Genome-Wide Association Studies) data [2, 3, 41]. We do not address the problem of admixture in the current paper, and we do not use GWAS datasets.
Another significant challenge is that of computation, and the ever-limited resources available in most labs, where such ancestry estimation or classification may be needed. Thus, given resource limitations, we introduce a key new constraint in addressing the problem: only SNPs from one chromosome can be used in the analysis. This is significant, as it means that the sequencing needed can be focused on only the specified chromosome, hence saving time and sequencing cost. Essentially, the challenge, therefore, is to answer the question: how much information about our human biological and geographical ancestry can be found in a single chromosome? Clearly, this question can be formulated at different levels of granularity, for instance, using sets of chromosomes, rather than just one chromosome, or using sets of genes, rather than chromosomes.
In this work, we address the problems of both continental-level and sub-continental level ancestry identification using small SNP panels, with all SNPs in the panel coming from just one single chromosome. For this study, we will focus on Chromosome 1, since this is the largest chromosome, and thus might provide the best starting point for our exercise. Thus, in this work, we have employed machine learning approaches and statistical methods to determine small sets of SNPs that can be used to predict an individual's biogeographical origin to continental as well as sub-continental levels. Here, we studied DNA information from Chromosome 1 (largest human chromosome) to develop an efficient and cost-effective ancestry inference system.
We consider the problem in three stages. Initially, we employed parameter-based SNP selection, and later refined the selection by using a clustering technique (specifically, DBSCAN [42, 43]) to choose an efficient panel of SNPs. The final SNP panel is selected by applying a statistical approach based on pairwise correlation of SNPs to identify important ancestry informative SNPs for both continental and sub-continental ancestry classification. For continental-level ancestry classification, we view it as a five-class classification problem including the continents of Europe, Latin America, Africa, East Asia, and South Asia. Within each continent, we also have several closely-related sub-populations. Distinguishing these sub-populations accurately is the challenging part. To address the sub-continental classification problems, we consider pairwise classification of the sub-populations within each continent. For both continental and sub-continental classification problem, we have applied the softmax neural network classifier [44].
Figure 1 shows a schematic diagram of the proposed process for selection of ancestry informative SNPs. The figure shows how the initial set of over 20 million SNPs from chromosome 1 is reduced in several data pre-processing stages (e.g., data cleaning, similarity SNP set removal), and initial pruning stages (parameter-based selection and outlier-based selection) to a much smaller set of 6404 SNPs. Below, we describe our methodology in more detail.
Graphical depiction of the proposed process of SNP selection for predicting human biogeographical ancestry
Datasets and pre-processing
In this study, we used datasets from the 1000 Genome Project, Phase 3 [19]. The dataset contains information from 2504 individuals, from 26 different sub-populations, spanning five continents. For each individual, information is provided on 84.4 million variants (SNPs) from all 23 chromosomes. Table 1 provides a summary on the different populations, including the number of samples in each of the 26 populations. We analyzed the variants from only Chromosome 1 which is nearly 20.1 million SNPs. After the pre-processing steps (e.g., data cleaning), we identified continental and sub-continental ancestry informative SNPs in several stages. The DNA information for the 20.1 million variants (SNPs) from Chromosome 1 of each of the 2504 subjects is provided in a large 61.2 GB .vcf file. At the beginning, we extracted data from the .vcf file and stored them in several .mat files to be able to conduct our analysis in a MATLAB (MathWorks Inc., Natick, MA) environment. For each SNP, we extracted their rsID, loci number/position, reference allele, alternate allele(s), and allele information of all 2504 subjects (each person's allele is diploid, containing two nucleotides, from different combinations of the four nucleotide bases A, C, G, T). Next, we performed data cleaning operations on the extracted data based on the following criteria:
Remove an SNP position if the SNP contains more than one reference nucleotides.
Exclude an SNP position in the analysis, if an alternate allele nucleotide also exists in the reference allele for this SNP.
Exclude an SNP locus from the analysis, if for the given SNP, each of the two nucleotides from all the individuals in the dataset both match with the reference allele's nucleotide.
Table 1 26 populations in the dataset
The result of the above is the removal of around 13 million SNPs at the cleaning stage. Further analysis is then performed on the remaining SNPs. For the purpose of SNP selection, we removed a person's allele information from an SNP position, if the person's two nucleotides at the given position are the same as the reference allele's nucleotide. Consequently, two different sets of SNPs have been observed in the analysis. In one set, each SNP contains same allele information among all individuals, although this allele information is different from the reference nucleotide. We call this SNP set the 'Similarity set'. In contrast, in the other set, allele information is not the same among all individuals at the given SNP position. We call this set the 'Dissimilarity set'. Since, for ancestry identification, we need to distinguish among populations with respect to some attribute/feature(s), SNP loci that demonstrate greater variation in DNA information among individuals will lead to better identification results. Thus, we have chosen only the 'Dissimilarity set' of SNPs for further analysis.
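To make the cleaning and partitioning steps concrete, a minimal Python sketch is given below; the function and variable names (e.g., ref, alts, genotypes) are ours and only illustrate the three criteria and the 'Dissimilarity set' filter described above, not the actual MATLAB code used in the study.

```python
def passes_cleaning(ref, alts, genotypes):
    """Apply the three data-cleaning criteria to one SNP locus.

    ref       -- reference allele string for the locus
    alts      -- list of alternate allele strings
    genotypes -- list of (nt1, nt2) nucleotide pairs, one pair per individual
    """
    # Criterion 1: remove the SNP if the reference allele has more than one nucleotide.
    if len(ref) != 1:
        return False
    # Criterion 2: exclude the SNP if an alternate allele nucleotide also occurs in the reference allele.
    if any(nt in ref for alt in alts for nt in alt):
        return False
    # Criterion 3: exclude the SNP if every individual matches the reference nucleotide on both copies.
    if all(a == ref and b == ref for a, b in genotypes):
        return False
    return True


def in_dissimilarity_set(ref, genotypes):
    """A cleaned SNP belongs to the 'Dissimilarity set' if the individuals that
    differ from the reference do not all carry the same allele."""
    non_ref = [(a, b) for a, b in genotypes if not (a == ref and b == ref)]
    return len(set(non_ref)) > 1
```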
SNP selection
The overall process of SNP selection is performed in three stages, each building on the results from the preceding stage. The initial stage employs a parameter-based selection; the latter stages use machine learning and statistical methods to further improve the results, and to prune the selected SNPs to significantly smaller set.
Stage 1: Parameter-based SNP Selection:
At the beginning, we aimed to identify important markers for each of the 26 populations from the 'Dissimilarity set' of SNPs. Consequently, we generated a structure array where each row allocates information from one SNP position containing 26 different fields, with each field corresponding to one of the 26 different populations. Each field associated with one population group contains relevant information regarding that group, such as, number of individuals of that group existing at that SNP position (since we removed individuals from a SNP position based on the similarity of their allele with reference nucleotide) and corresponding allele information of those individuals. Next, we calculated two parameters 'α' and 'β' at each dissimilar SNP position for each of the 26 populations using the following formulae:
$$ \alpha =\frac{n_p^i}{n_p}\kern0.5em \mathrm{and}\kern0.5em \beta =\frac{f_p^i}{n_p^i}, $$
where, p = 1, 2, …, 26
\( {n}_p^i \)= No. of individuals of population type p existing at SNP i
np= Total no. of individuals of population p in training data
\( {f}_p^i \)= Frequency of occurrence of the allele that appears most in population p at SNP i
For a given population p, a SNP position i is considered important if at that position α × β = 1 (i.e., α = 1 and β = 1). Here, α = 1 indicates that all individuals of that population exist at SNP i, since none of them has both nucleotides being the same as the reference nucleotide, while β = 1 means those individuals also share the same allele information at SNP i. Thus, based on the values of parameters α and β, we identify the best distinguishing SNPs for each population. After we obtain important SNP sets for each population, we take the union of all the 26 sets. The result is a set of 38,532 ancestry informative SNPs. From these 38 K SNPs, we further removed the SNPs which contain the same allele information across all individuals from all 26 populations in the training set, since SNPs showing no variations between different population groups are not informative in distinguishing them. At the end of this stage, the result is a set of 34,631 ancestry informative SNPs, all from Chromosome 1.
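The parameter-based filter of Stage 1 can be sketched as follows (Python; the data layout and names are illustrative assumptions, not the code used in the study):

```python
from collections import Counter

def alpha_beta(alleles, n_p):
    """Compute (alpha, beta) for one population at one SNP.

    alleles -- allele strings (e.g. 'AT') of the individuals of this population
               that remain at the SNP (those matching the reference on both
               nucleotides have already been removed)
    n_p     -- total number of training individuals in this population
    """
    n_p_i = len(alleles)
    if n_p_i == 0:
        return 0.0, 0.0
    f_p_i = Counter(alleles).most_common(1)[0][1]   # count of the most frequent allele
    return n_p_i / n_p, f_p_i / n_p_i


def important_snps(snp_alleles_by_pop, pop_sizes):
    """Return the union over populations of the SNP indices with alpha = beta = 1
    (the step that yielded the 38,532 SNPs described above)."""
    union = set()
    for pop, table in snp_alleles_by_pop.items():          # table: SNP index -> allele list
        for snp_idx, alleles in table.items():
            a, b = alpha_beta(alleles, pop_sizes[pop])
            if a == 1.0 and b == 1.0:
                union.add(snp_idx)
    return union
```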
Stage 2: Outlier-Based SNP Selection:
In order to reduce the number of SNPs further, we apply a cluster-based technique on the results from Stage 1. In particular, we use a contrarian approach: we group the SNPs using a clustering technique. In doing so, we also indirectly identify those SNPs that could not be grouped comfortably into any particular cluster. These are the outlier SNPs that do not seem to be similar with other SNPs, and thus represent good candidates for use in discriminating between ancestries. We use DBSCAN [42, 43] as the clustering technique for further selection of important AISNPs which are reasonably distinct in nature. This is a density-based clustering technique which does not require the number of clusters of the data to be pre-specified. Given a set of data points in some space, the DBSCAN clustering approach attempts to place points that are closely packed into one group. Points that lie alone in low-density regions are marked as outliers. In our specific problem of ancestry classification, SNPs that contain similar ancestry information are clustered together, while those that could not be clustered into some group are identified as outliers with seemingly unique ancestry information. In this work, we have considered these outlier SNPs as good candidates for distinguishing biogeographical ancestry between populations.
Here, we apply DBSCAN clustering on the 34,631 SNPs extracted in the previous stage of selection. The algorithm requires three input parameters, namely, data matrix D, radius parameter (ε) and neighborhood density threshold (MinPts). Data matrix D has 34,631 rows, where each row is associated with one SNP. Each SNP is considered as an object with l dimensions, where l is the number of training samples. Each dimension belongs to the allele information of a training subject represented by a number between 1 and 16, since four nucleotides {A, C, G, T} generate 16 possible allele symbols {AA, AC, …, TT}. The parameter MinPts (neighborhood density threshold) indicates the minimum number of points required to form a cluster, while ε (the radius parameter) is measured as the Euclidean distance between two l-dimensional SNP objects. The DBSCAN clustering algorithm is described in Algorithm 1 below using a pseudo code [43].
The choice of the two parameters, ε and MinPts, requires careful consideration as they play important roles in determining the output clusters. For this problem, we have set MinPts = 2, i.e., at least two SNPs will be able to form a cluster if they are within a certain distance ε. And, the value of ε is chosen empirically. We measured the 26-class classification performance for different values of ε for the 80/20 train-test split of the data. For ε=0.1 we obtained the best classification result. The DBSCAN clustering technique resulted in 2378 clusters and 6404 outliers. These 6404 outlier SNPs constitute our new set of candidate SNPs for ancestry identification.
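The outlier-selection step can be reproduced with an off-the-shelf DBSCAN implementation. The sketch below uses scikit-learn rather than the article's Algorithm 1, with the parameter values reported above (eps = 0.1, MinPts = 2); the SNPs labelled as noise (label -1) are the outliers retained for the next stage.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def select_outlier_snps(D, eps=0.1, min_pts=2):
    """D is an (m x l) matrix with one row per SNP and one column per training
    subject; each entry is an integer in 1..16 encoding the allele pair.
    Returns the row indices of SNPs that DBSCAN marks as outliers."""
    labels = DBSCAN(eps=eps, min_samples=min_pts, metric="euclidean").fit(
        np.asarray(D, dtype=float)
    ).labels_
    return np.where(labels == -1)[0]

# On the 34,631-SNP matrix described above, this step produced 2378 clusters
# and 6404 outlier SNPs, which form the candidate set for the next stage.
```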
Stage 3: Correlation-based SNP Selection:
As we obtain the set of 6404 SNPs from the clustering technique, we measure the overall 26-class ancestry prediction performance for each individual SNP marker. That is, we perform ancestry estimation using each of the 6404 SNPs, independent of the other SNPs. Naturally, we do not expect to produce very good performance for a single SNP. However, the relative performance of the SNPs is a crucial piece of information for our approach. Consequently, we generate a performance matrix X with m=6404 rows, where each row of X is allocated for one SNP representing a six-dimensional vector,
$$ {\underline{x}}^{(i)}=\left[{x_1}^{(i)}\;{x_2}^{(i)}\;{x_3}^{(i)}\;{x_4}^{(i)}\;{x_5}^{(i)}\;{x_6}^{(i)}\right] $$
The first element records the accuracy of 26-class classification using SNP i. The next five elements of the vector are related to five continents, where each element denotes the percentage of test individuals correctly predicted from a continent. Classification into 26 populations by each SNP has been conducted using an 80–20% train-test split, with n = 2504 individuals. For classification, the SNP is represented using its allele-context feature, where each SNP's allele-context feature belongs to three possible values: 0, 1, 2. For the allele-context feature, a '0' means both nucleotides from a person are the same as the reference nucleotide, '1' means one of the two nucleotides is different from the reference nucleotide, and '2' means both nucleotides of that person are different from the reference nucleotide at SNP i. For both the training sets and test sets, we denote the allele-context feature vector a and class-label vector b as follows:
$$ \underline{a}_{train}^{(i)}=\left[a_1^{(i)}\;a_2^{(i)}\;\dots\;a_l^{(i)}\right]^T \quad\text{and}\quad \underline{a}_{test}^{(i)}=\left[a_1^{(i)}\;a_2^{(i)}\;\dots\;a_{(n-l)}^{(i)}\right]^T $$

$$ b_{train}=\left[b_1\;b_2\;\dots\;b_l\right]^T \quad\text{and}\quad b_{test}=\left[b_1\;b_2\;\dots\;b_{(n-l)}\right]^T $$

Here, l = number of training subjects, and n-l = number of test subjects. Thus, for the i = 1, 2, …, m SNPs, the overall performance matrix is represented as

$$ X=\left[\underline{x}^{(1)}\;\underline{x}^{(2)}\;\dots\;\underline{x}^{(6404)}\right]^T $$
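As a rough illustration of how the allele-context features and the per-SNP performance vectors can be computed, consider the following sketch. The article trains a softmax neural network classifier for this step; here we substitute scikit-learn's logistic regression purely to keep the example short, and the variable names are ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def allele_context(nt1, nt2, ref):
    """0: both nucleotides match the reference; 1: exactly one differs; 2: both differ."""
    return int(nt1 != ref) + int(nt2 != ref)

def performance_vector(feat_train, y_train, feat_test, y_test, cont_test):
    """Six-dimensional performance vector x^(i) for a single SNP.

    feat_*    -- allele-context features (0/1/2) of the SNP for each subject
    y_*       -- 26-class population labels
    cont_test -- continent label (0..4) of each test subject
    """
    clf = LogisticRegression(max_iter=1000).fit(feat_train.reshape(-1, 1), y_train)
    pred = clf.predict(feat_test.reshape(-1, 1))
    overall = float(np.mean(pred == y_test))
    per_continent = [float(np.mean(pred[cont_test == c] == y_test[cont_test == c]))
                     for c in range(5)]
    return np.array([overall] + per_continent)
```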
Having created the performance matrix X, we can now compute the pairwise correlation between the SNPs using the associated performance vectors. For example, correlation of SNP i and SNP k is calculated using the Pearson's correlation coefficient as follows:
$$ C=\frac{\sum_{j=1}^5\left(x_j^{(i)}-\bar{x}^{(i)}\right)\left(x_j^{(k)}-\bar{x}^{(k)}\right)}{\sqrt{\sum_{j=1}^5\left(x_j^{(i)}-\bar{x}^{(i)}\right)^2}\;\sqrt{\sum_{j=1}^5\left(x_j^{(k)}-\bar{x}^{(k)}\right)^2}} $$

where

$x_j^{(i)}$ = element of the vector $\underline{x}^{(i)}$ for continent j (j = 1, 2, .., 5),

$\bar{x}^{(i)}$ = average of the five $x_j^{(i)}$ elements of vector $\underline{x}^{(i)}$.
Now, if SNP i and SNP k are highly correlated (that is, their correlation coefficient C is above a certain threshold th), then one of them is kept in the analysis and the other one is removed. Here, we consider the SNP that provides a better classification accuracy in the performance matrix (represented by the first element of vector \( {\underline{x}}^{(i)} \)) as "non-redundant", while the other SNP is taken to be redundant. The proposed correlation-based approach to SNP selection is explained in more detail below, using pseudo code (see Algorithm 2).
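A minimal sketch of this correlation-based pruning (our re-implementation of the idea behind Algorithm 2, not the authors' code) is shown below. X is the m x 6 performance matrix, assumed sorted by decreasing overall accuracy (column 0); the Pearson correlation is computed on the five per-continent columns, as in the formula above.

```python
import numpy as np

def correlation_select(X, th):
    """Greedy selection of 'non-redundant' SNPs.

    X  -- (m x 6) performance matrix, rows sorted by decreasing overall accuracy
    th -- correlation threshold; a SNP is dropped if it correlates above th
          with an already selected (better-performing) SNP
    Returns the indices (rows of X) of the retained SNPs.
    """
    cont = X[:, 1:6]
    selected = [0]                                   # start from the best-performing SNP
    for i in range(1, X.shape[0]):
        if all(np.corrcoef(cont[i], cont[j])[0, 1] <= th for j in selected):
            selected.append(i)
    return selected

# Sweeping th (the article uses 0.1 to 0.99 in steps of 0.01) trades panel size
# against accuracy; th = 0.91 gave the 206-SNP continental panel reported later.
```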
Now that we have presented the general procedure for selecting the SNPs, the final step will be to select those that are best for continental-level classification, and those that are more suitable for more localized discrimination between sub-populations, say from the same continent. We describe our approach below:
SNP selection for continental-level classification
To determine the best candidate SNPs for continental-level classification, we have exploited the proposed correlation-based SNP selection. First, the 6404 SNPs are ranked from highest to lowest based on their classification accuracy in the performance matrix X and the 6404 × 6 performance matrix is rearranged accordingly. Following this ranking, we create the ordered listing of the SNPs for the initial 'non-Redundant SNP set' and the algorithm is then initialized with the best performing SNP. For a certain correlation threshold th, the algorithm is executed to identify the final set of non-Redundant SNPs from the 6404 SNPs. These candidate SNPs represented by the allele-context feature are subsequently used to perform the five-continent classification using an 80/20 train-test split. We carried out empirical experiments for a range of values of correlation thresholds and the threshold which provides the best classification performance with the smallest set of SNPs has been finally selected.
SNP selection for pairwise/binary classification between sub-populations
Having determined the continental-level ancestry using the above, the next question is how to differentiate two sub-populations within the same continent. When an individual's continental ancestry is known and the individual belongs to any of two possible closely related sub-populations within that continent, the issue now becomes how to identify the accurate sub-population ancestry. In this work, we have selected candidate SNP sets for all possible pairwise classification of sub-populations within a given continent exploiting the same basic correlation-based SNP selection algorithm used for continental-level ancestry identification. Given two sub-populations, say S1 and S2 from the same continent j, the goal is to identify a powerful set of candidate SNPs which will be able to distinguish individuals from these two populations. Now, the 6404 SNPs are ranked from highest to lowest based on the continent j elements \( {x}_j^{(i)} \) in the performance matrix X and the performance matrix is rearranged accordingly. Thus, the correlation algorithm is initialized with the best performing SNP for continent j and for a certain threshold the algorithm is executed to obtain the required set of SNPs from the 6404 SNPs. Next, we perform binary classification between the two sub-populations using the allele-context feature of these SNPs, again following an 80/20 train-test split. As was done for the continental-level classification, we also tested different values of the correlation threshold, and selected the threshold that provided the best classification performance while using a small number of SNPs.
Ancestry classification using selected SNPs
Having identified the best SNP subsets, ancestry classification can be performed using standard classification algorithms. In this work, we perform classification using the softmax neural network classifier [44]. We use the same algorithm for both continental-level classification and for sub-population-level classification. In machine learning, softmax regression is a generalization of binary logistic regression that we can use for multi-class classification tasks. In logistic regression, the output labels are assumed to be binary, that is, y(i) ∈ {0, 1}. The goal then is to predict the probability that a given sample belongs to the '1' class, i.e., P(y = 1|x) vs. the probability that it belongs to the '0' class, i.e., P(y = 0|x). On the other hand, in the softmax regression setting, the output label can take K different values: y(i) ∈ {1, 2, ⋯, K}. Now, the goal is to estimate the probability for each value k ∈ {1, 2, ⋯, K}, i.e., P(y = k|x). Thus, softmax regression is an extension of logistic regression to the multi-class case. With K = 2, softmax regression is the same as binary logistic regression. Overall, with the softmax regression scheme, we can solve the classification problem not just for K = 2, but also for many possible values of K.
Softmax regression is often used as the activation function in the final layer of a neural network classifier. For a K-class classification problem, the number of units/nodes in the output layer of the neural network should be K. Each of the K output nodes gives the probability of a certain class and probabilities from all output nodes sum to 1. Each output node i in the final layer of the neural network receives the weighted sum of the inputs from the previous layer with the addition of a bias term, which is denoted as follows,
$$ {z}_i=\sum \limits_j{w}_{i,j}{x}_j+{b}_i $$
where, j is the number of nodes in the previous layer. Now to compute the softmax activation at each output node, exponential of the term zi is calculated for each i,
$$ {t}_i={e}^{z_i} $$
Finally, activation at output node i is obtained by normalizing the exponential term.
$$ {a}_i={t}_i/{\sum}_{i=1}^K{t}_i $$
Thus, by normalizing the distribution, output from each node i falls in the range [0, 1]. Here, the class associated with the highest probability value is considered as the predicted output label.
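The softmax layer described above amounts to the following computation (a small NumPy sketch; W and b are placeholders for the weights and biases learned by the network):

```python
import numpy as np

def softmax_layer(x, W, b):
    """z_i = sum_j W[i, j] * x[j] + b[i], followed by a_i = exp(z_i) / sum_k exp(z_k)."""
    z = W @ x + b
    t = np.exp(z - z.max())      # subtracting the max improves numerical stability
    return t / t.sum()           # K probabilities that sum to 1

# The predicted ancestry class is the output node with the largest probability:
# predicted_label = int(np.argmax(softmax_layer(x, W, b)))
```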
Experiments were performed using the identified 1000 Genome dataset, with 26 sub-populations from 5 continents. The performance of the proposed approach was evaluated on both continental-level and sub-population ancestry prediction/classification. The results of these experiments are described below.
Continental classification
First, we performed a five-class classification (using the five continents -- Europe, America, East Asia, South Asia, and Africa) for a range of values of the correlation threshold: th = 0.1 to 0.99 with an interval of 0.01. In Fig. 2, we show the results on continental-level classification for correlation threshold th = 0.4 to 0.99 with 0.01 interval along with the corresponding number of SNPs. The highest performance achieved is 99.91% for th = 0.98 with 614 SNPs (marked by a red square in the plot). However, since our goal is rather to use a smaller panel of SNPs to distinguish the continental populations, we searched for the threshold th that provides an optimum performance with a smaller number of SNPs (approximately 200 or fewer). From Fig. 2, we can observe the general trend in performance for the proposed approach. At th = 0.7, the system suggests a panel of 32 SNPs, for an overall classification accuracy of about 90%. Performance generally increases with increasing correlation threshold, rising to an accuracy rate of about 93% at about th = 0.83, using about 93 SNPs. The best classification result is obtained with correlation threshold th = 0.91, resulting in a classification accuracy of 96.75% with 206 SNPs (marked by the magenta square). These 206 SNPs have been considered as our final candidate SNPs for continental-level ancestry classification. The confusion matrix for the five-class classification problem with overall performance of 96.75% is shown in Table 2. With respect to each continent, the best results were observed for populations from Africa, and from East Asia. Those from America were the most challenging, followed by Europe. Also, for these two challenging cases, most Europeans were misclassified as American, and vice versa.
Fig. 2 Results for continental-level ancestry classification using varying thresholds. Results include both the accuracy (left) and the number of SNPs (right) required to achieve a given accuracy
Table 2 Confusion matrix for continental-level ancestry classification (overall classification rate of 96.75%, 206 SNPs)
Pairwise classification between sub-populations
Table 3 shows the overall pairwise classification results between sub-populations in each of the five continents in our dataset. The number of SNPs required for each classification has also been noted. From the table, it is evident that in all cases of pairwise classification of closely related populations, we can infer the ethnicity using a small panel of SNPs (fewer than 200), and in some instances the accuracy is as high as 100%. For a more detailed analysis, Fig. 3 shows the performance of the proposed methods with increasing correlation thresholds, using sub-populations in the continent of America. As shown in Fig. 3, the plots for pairwise classification of sub-populations within the continent of America are given for a range of correlation thresholds th = 0.1 to 0.9 with an interval of 0.01. The best performance (#SNPs & accuracy) has been marked with a red square in the figures.
Table 3 Results for pairwise/binary classification between sub-populations in each continent
Fig. 3 Pairwise classification results with varying correlation thresholds, for subgroups within the continent of America: a PUR vs. PEL; b PUR vs. MXL; c PUR vs. CLM; d CLM vs. PEL; e CLM vs. MXL; and f PEL vs. MXL
As can be observed, it is relatively easy to distinguish between individuals from certain sub-populations, even within the same continent. For instance, Fig. 3a shows that individuals of Puerto Rican (PUR) descent are relatively easy to distinguish from those of Peruvian (PEL) descent, achieving a 100% accuracy rate using 56 SNPs under our approach. Similarly for Colombia (CLM) and Peru (PEL) (Fig. 3c). As before, accuracy generally increases with increasing correlation thresholds (and hence more SNPs), but this is not monotonic. However, we can also see some challenging cases, such as Colombia (CLM) and Mexico (MXL) (see Fig. 3e), where the highest classification rate is only about 74%, using 37 SNPs. Even increasing the number of SNPs beyond 37 could not improve the result.
Computation time
The experiments were performed on a personal computer with an Intel Core i7-7700K quad-core 4.2 GHz desktop processor, 16 GB RAM, and a 4 TB, 64 MB-cache hard drive. Part of the proposed approach required the evaluation of the predictive power of each SNP; thus, each SNP is used independently to perform ancestry classification. In the proposed methodology, a performance matrix is generated before initiating SNP selection for a given correlation threshold. With the reduced set of 6404 SNPs, the algorithm had to run 6404 times to generate the performance matrix, each time using only one SNP to perform classification. The average time it takes to evaluate the performance of a single SNP is approximately 1.17 s. With 6404 SNPs, the time required to construct the whole performance matrix is about 2 h. By using a graphics processing unit (GPU), we can reduce the total time for generating the performance matrix to 1.5 h. After the performance matrix is generated, the SNP selection process starts. We compute the pairwise correlation between SNPs, and based on a given correlation threshold we identify a panel of non-redundant (or important) SNPs. The value of the correlation threshold determines the size of the SNP panel, and the number of SNP features in a panel determines how much time the classifier takes to perform classification. The SNP selection time for continental-level classification using a correlation threshold of 0.9 is approximately 27.35 s, where 184 SNPs were selected.
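The selection step can be sketched as below. This is one plausible greedy reading of the description in this section (the paper's exact selection procedure is defined earlier in the Methods and may differ); the per-SNP accuracies are assumed to be precomputed, and all names and the random data are purely illustrative.

```python
import numpy as np

def select_snp_panel(genotypes, per_snp_accuracy, th):
    """Greedy selection of a non-redundant SNP panel.

    genotypes        : (n_samples, n_snps) array of numerically encoded genotypes
    per_snp_accuracy : accuracy of each SNP used alone (the "performance matrix")
    th               : correlation threshold; a higher th admits more SNPs
    """
    order = np.argsort(per_snp_accuracy)[::-1]   # most predictive SNPs first
    corr = np.corrcoef(genotypes, rowvar=False)  # pairwise SNP-SNP correlations
    selected = []
    for snp in order:
        # keep the SNP only if it is not highly correlated with any already-selected SNP
        if all(abs(corr[snp, s]) < th for s in selected):
            selected.append(int(snp))
    return selected

# illustrative call with random data (shapes only; not real genotypes)
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(100, 50)).astype(float)
acc = rng.uniform(0.3, 0.9, size=50)
panel = select_snp_panel(G, acc, th=0.9)
print(len(panel), panel[:10])
```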
Comparative performance evaluation
We have performed a limited comparison of our proposed approaches with related work. Table 4 shows the comparative performance of our proposed methods on continental-level ancestry classification, when compared with other related methods. Table 5 presents similar comparative performance of our proposed method for binary/pairwise classification of sub-populations against other related methods in the literature. The comparative results show the proposed methods are competitive with the state-of-the-art methods, even when using information from just one chromosome.
Table 4 Comparative Performance on continental-level Ancestry classification
Table 5 Comparative Performance In Sub-Population-level Ancestry classification
Prediction of continental ancestry from genetic sequences has been studied for years. However, much less has been done on prediction of ancestry for closely related sub-populations, for instance, those within the same country or continent, especially under resource constraints, with potentially limited or missing genomic data. In this work, we have developed an ancestry identification system to predict the continental origin of an unknown individual and also distinguish between closely related sub-populations within a continent. We used SNPs from just one chromosome (namely, Chromosome 1) for our analysis and to identify different panels of ancestry informative SNPs. We have applied both machine learning and statistical techniques to select candidate SNPs. Our results show that a single chromosome (Chromosome 1, in this case), if carefully analyzed, could hold enough information for accurate estimation of human biogeographical ancestry. This has significant implications for the computational resources required for analysis of ancestry, and for the applications of such analyses, such as in studies of genetic diseases, forensics, and biometrics.
We have essentially considered binary classification, given pairs of sub-populations. Further work can be performed to extend the proposed approach to handle multi-class classification of biogeographical ancestry. Another interesting direction for future work is to investigate the performance of other chromosomes, especially the smaller chromosomes, to see if we can construct equally high-performing panels of AISNPs using even less data. It will also be interesting to further investigate the identified SNPs to see if there is any connection between them, or their nearby genes, and specific diseases or health problems that are known to be more prevalent in certain geographic regions.
AIM:
Ancestry informative marker
AISNP:
Ancestry informative SNP
DBSCAN:
Density-based spatial clustering of applications with noise
DNA:
Deoxyribonucleic acid
GPU:
Graphics processing unit
GWAS:
Genome-wide association studies
PCA:
Principal component analysis
SNP:
Single nucleotide polymorphism
STR:
Short tandem repeat
Enoch MA, Shen PH, Xu K, Hodgkinson C, Goldman D. Using ancestry informative markers to define populations and detect population stratification. J Psychopharmacol. 2006;20:199–26.
Araújo GS, et al. Integrating, summarizing and visualizing GWAS-hits and human diversity with DANCE (disease-ANCEstry networks). Bioinformatics. 2016;32(8):1247–9.
Bhaskar A, Javanmard A, Courtade TA, Tse D. Novel probabilistic models of spatial genetic ancestry with applications to stratification correction in genome-wide association studies. Bioinformatics. 2016;33(6):879–85.
Fondevila M, et al. Revision of the SNPforID 34-plex forensic ancestry test: assay enhancements, standard reference sample genotypes and extended population studies. Forensic Sci Int Genet. 2013;7(1):63–74.
Gettings KB, et al. A 50-SNP assay for biogeographic ancestry and phenotype prediction in the US population. Forensic Sci Int Genet. 2014;8(1):101–8.
Tian C, et al. A genomewide single-nucleotide–polymorphism panel for Mexican American admixture mapping. Am J Hum Genet. 2007;80(6):1014–23.
Sanderson J, et al. Reconstructing past admixture processes from local genomic ancestry using wavelet transformation. Genetics. 2015;200(2):469–81.
Arthur R, et al. AKT: ancestry and kinship toolkit. Bioinformatics. 2017;33(1):142–4.
Krimsky S, Simoncelli T. Genetic justice: DNA data banks, criminal investigations, and civil liberties. New York: Columbia University Press; 2012.
Aarli R. Genetic justice and transformations of criminal procedure. J Scand Stud Criminol Crime Prev. 2012;13(1):3–21.
Wen W, Shu X-o, Guo X, Cai Q, Long J, Bolla MK, Michailidou K, et al. Prediction of breast cancer risk based on common genetic variants in women of east Asian ancestry. Breast Cancer Res. 2016;18(1):124.
Bandera EV, Chandran U, Zirpoli G, Gong Z, McCann SE, Hong C-C, Ciupak G, Pawlish K, Ambrosone CB. Body fatness and breast cancer risk in women of African ancestry. BMC Cancer. 2013;13(1):475.
Liu Y, Nyunoya T, Leng S, Belinsky SA, Tesfaigzi Y, Bruse S. Softwares and methods for estimating genetic ancestry in human populations. Hum Genomics. 2013;7(1):1.
Pardo-Seco J, Martinón-Torres F, Salas A. Evaluating the accuracy of AIM panels at quantifying genome ancestry. BMC Genomics. 2014;15(1):543.
Amirisetty S, Hershey GK, Baye TM. AncestrySNPminer: a bioinformatics tool to retrieve and develop ancestry informative SNP panels. Genomics. 2012;100:57–63.
Silva NM, Pereira L, Poloni ES, Currat M. Human neutral genetic variation and forensic STR data. PLoS One. 2012;7:e49666.
Kidd JR, et al. Analyses of a set of 128 ancestry informative single-nucleotide polymorphisms in a global set of 119 population samples. Investig Genet. 2011;2(1):1.
Nassir R, et al. An ancestry informative marker set for determining continental origin: validation and extension using human genome diversity panels. BMC Genet. 2009;10(1):39.
1000 Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015;526(7571):68.
Wright S. Evolution and the genetics of populations, vol 2: the theory of gene frequencies. Chicago and London: University of Chicago Press; 1969.
Price AL, et al. Discerning the ancestry of European Americans in genetic association studies. PLoS Genet. 2008;4(1):e236.
Mao X, et al. A genomewide admixture mapping panel for Hispanic/Latino populations. Am J Hum Genet. 2007;80(6):1171–8.
Kosoy R, et al. Ancestry informative marker sets for determining continental origin and admixture proportions in common populations in America. Hum Mutat. 2009;30(1):69–78.
Phillips C, et al. Inferring ancestral origin using a single multiplex assay of ancestry-informative marker SNPs. Forensic Sci Int Genet. 2007;3:273–80.
Halder I, et al. A panel of ancestry informative markers for estimating individual biogeographical ancestry and admixture from four continents: utility and applications. Hum Mutat. 2008;29(5):648–58.
Seldin MF, et al. European population substructure: clustering of northern and southern populations. PLoS Genet. 2006;2(9):e143.
Campbell CD, et al. Demonstrating stratification in a European American population. Nat Genet. 2005;37(8):868.
Seldin MF, Price AL. Application of ancestry informative markers to association studies in European Americans. PLoS Genet. 2008;4(1):e5.
Tian C, et al. Analysis of East Asia genetic substructure using genome-wide SNP arrays. PLoS One. 2008;3(12):e3862.
Bryc K, et al. Genome-wide patterns of population structure and admixture in west Africans and African Americans. Proc Natl Acad Sci. 2010;107(2):786–91.
Price AL, et al. Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet. 2006;38(8):904.
Novembre J, Stephens M. Interpreting principal component analyses of spatial population genetic variation. Nat Genet. 2008;40(5):646–9.
Patterson N, Price AL, Reich D. Population structure and eigenanalysis. PLoS Genet. 2006;2(12):e190.
Kidd KK, et al. Progress toward an efficient panel of SNPs for ancestry inference. Forensic Sci Int Genet. 2014;10:23–32.
Pritchard JK, et al. Association mapping in structured populations. Am J Hum Genet. 2000;67(1):170–81.
Lao O, et al. Evaluating self-declared ancestry of US Americans with autosomal, Y-chromosomal and mitochondrial DNA. Hum Mutat. 2010;31:12.
Nievergelt CM, et al. Inference of human continental origin and admixture proportions using a highly discriminative ancestry informative 41-SNP panel. Investig Genet. 2013;4(1):13.
Hajiloo M, et al. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction. BMC Bioinformatics. 2013;14(1):61.
Graydon M, Cholette F, Ng L-K. Inferring ethnicity using 15 autosomal STR loci—comparisons among populations of similar and distinctly different physical traits. Forensic Sci Int Genet. 2009;3(4):251–4.
Baran Y, et al. Fast and accurate inference of local ancestry in Latino populations. Bioinformatics. 2012;28(10):1359–67.
Chimusa ER, et al. ancGWAS: a post genome-wide association study method for interaction, pathway and ancestry analysis in homogeneous and admixed populations. Bioinformatics. 2016;32(4):549–56.
Ester M, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. KDD. 1996;34:96.
Han J, Pei J, Kamber M. Data mining: concepts and techniques. Waltham: Morgan Kaufmann; 2012.
Bishop CM. Pattern recognition and machine learning. New York: Springer; 2006.
Preliminary version of this paper was presented at the IEEE BIBM'17 Conference, Kansas City, MO, Nov. 2017.
The datasets used in this work are publicly available from the 1000 Genome Project.
This work and publication was supported in part by the Center for Identification Technology and the National Science Foundation, Grant No. 1650474.
This article has been published as part of BMC Medical Genomics Volume 11 Supplement 5, 2018: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2017: medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-11-supplement-5.
Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA
Tanjin T. Toma, Jeremy M. Dawson & Donald A. Adjeroh
All authors have read and approved the final manuscript.
Correspondence to Donald A. Adjeroh.
All authors approved the paper for publication.
Toma, T.T., Dawson, J.M. & Adjeroh, D.A. Human ancestry identification under resource constraints -- what can one chromosome tell us about human biogeographical ancestry? BMC Med Genomics 11, 0 (2018). https://doi.org/10.1186/s12920-018-0412-4
Ancestry prediction
Single chromosome
Proof of Derivative of Inverse Hyperbolic Tangent function
The inverse hyperbolic tangent in function form is written as $\tanh^{-1}{x}$ or $\operatorname{arctanh}{x}$ when $x$ is considered to represent a variable. The differentiation or the derivative of inverse hyperbolic tan function with respect to $x$ is written in two different mathematical forms as follows.
$(1).\,\,\,$ $\dfrac{d}{dx}{\, \Big(\tanh^{-1}{(x)}\Big)}$
$(2).\,\,\,$ $\dfrac{d}{dx}{\, \Big(\operatorname{arctanh}{(x)}\Big)}$
The derivative rule of the inverse hyperbolic tangent function can be proved mathematically from the first principle of differentiation in calculus. Let us now see how to prove this derivative rule.
Derivative of Inverse Hyperbolic Tan in Limit form
The differentiation of the inverse hyperbolic tangent function can be written in limit form using the fundamental definition of the derivative.
$\dfrac{d}{dx}{\, (\tanh^{-1}{x})}$ $\,=\,$ $\displaystyle \large \lim_{\Delta x \,\to \, 0}{\normalsize \dfrac{\tanh^{-1}{(x+\Delta x)}-\tanh^{-1}{x}}{\Delta x}}$
If the differential element $\Delta x$ is simply represented by $h$ for convenience, then the whole expression can be written in terms of $h$ instead of $\Delta x$.
$\implies$ $\dfrac{d}{dx}{\, (\tanh^{-1}{x})}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\tanh^{-1}{(x+h)}-\tanh^{-1}{x}}{h}}$
This limit is the starting point for deriving the derivative rule of the $\operatorname{arctanh}{(x)}$ function with respect to $x$ from the first principle of differentiation.
Evaluate the Limit by the Direct Substitution
In limits, direct substitution is the most basic method for evaluating the limit of a mathematical expression. So, let us first try to evaluate this limit by direct substitution as $h$ approaches $0$.
$= \,\,\,$ $\dfrac{\tanh^{-1}{(x+0)}-\tanh^{-1}{x}}{0}$
$= \,\,\,$ $\dfrac{\tanh^{-1}{x}-\tanh^{-1}{x}}{0}$
$= \,\,\,$ $\require{cancel} \dfrac{\cancel{\tanh^{-1}{x}}-\cancel{\tanh^{-1}{x}}}{0}$
Direct substitution therefore gives the indeterminate form $\dfrac{0}{0}$. Hence, the direct substitution method fails, and the derivative cannot be obtained this way.
Simplify the mathematical expression
We therefore need an alternative approach for proving the differentiation formula of the inverse hyperbolic tangent function from the first principle of differentiation.
From the theory of inverse hyperbolic functions, the inverse hyperbolic tangent function can be expressed in logarithmic form.
$\tanh^{-1}{x}$ $\,=\,$ $\dfrac{1}{2}\,\log_{e}{\Bigg(\dfrac{1+x}{1-x}\Bigg)}$
Then, $\tanh^{-1}{(x+h)}$ $\,=\,$ $\dfrac{1}{2} \, \log_{e}{\Bigg(\dfrac{1+(x+h)}{1-(x+h)}\Bigg)}$
Now, replace each inverse hyperbolic tangent function by its equivalent logarithmic form in the fundamental definition of the derivative of the inverse hyperbolic tan function.
$\implies$ $\dfrac{d}{dx}{\,(\tanh^{-1}{x})}$ $\,=\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\dfrac{1}{2}\,\log_{e}{\Bigg(\dfrac{1+(x+h)}{1-(x+h)}\Bigg)}-\dfrac{1}{2}\,\log_{e}{\Bigg(\dfrac{1+x}{1-x}\Bigg)}}{h} }$
Now, let us focus on simplifying the mathematical expression on the right-hand side of the equation.
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\dfrac{1}{2}\,\Bigg[\log_{e}{\Bigg(\dfrac{1+(x+h)}{1-(x+h)}\Bigg)}-\log_{e}{\Bigg(\dfrac{1+x}{1-x}\Bigg)}\Bigg]}{h} }$
$=\,\,\,$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[\dfrac{1}{2}\,\times\,\dfrac{\log_{e}{\Bigg(\dfrac{1+(x+h)}{1-(x+h)}\Bigg)}-\log_{e}{\Bigg(\dfrac{1+x}{1-x}\Bigg)}}{h}\Bigg]}$
The constant can be separated out by the constant multiple rule of limits.
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+(x+h)}{1-(x+h)}\Bigg)}-\log_{e}{\Bigg(\dfrac{1+x}{1-x}\Bigg)}}{h}}$
Every logarithmic term in the numerator can be expanded by the quotient rule of logarithms.
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+(x+h)\Big)}-\log_{e}{\Big(1-(x+h)\Big)}-\Big[\log_{e}{\Big(1+x\Big)}-\log_{e}{\Big(1-x\Big)}\Big]}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+(x+h)\Big)}-\log_{e}{\Big(1-(x+h)\Big)}-\log_{e}{\Big(1+x\Big)}+\log_{e}{\Big(1-x\Big)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+(x+h)\Big)}-\log_{e}{\Big(1+x\Big)}-\log_{e}{\Big(1-(x+h)\Big)}+\log_{e}{\Big(1-x\Big)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+(x+h)\Big)}-\log_{e}{\Big(1+x\Big)}-\Big[\log_{e}{\Big(1-(x+h)\Big)}-\log_{e}{\Big(1-x\Big)}\Big]}{h}}$
Use the same quotient rule of logarithms to combine the logarithmic terms as follows.
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+(x+h)}{1+x}\Bigg)}-\Bigg[\log_{e}{\Bigg(\dfrac{1-(x+h)}{1-x}\Bigg)}\Bigg]}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+(x+h)}{1+x}\Bigg)}-\log_{e}{\Bigg(\dfrac{1-(x+h)}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+x+h}{1+x}\Bigg)}-\log_{e}{\Bigg(\dfrac{1-x-h}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+x+h}{1+x}\Bigg)}-\log_{e}{\Bigg(\dfrac{1-x+(-h)}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{1+x}{1+x}+\dfrac{h}{1+x}\Bigg)}-\log_{e}{\Bigg(\dfrac{1-x}{1-x}+\dfrac{(-h)}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(\dfrac{\cancel{1+x}}{\cancel{1+x}}+\dfrac{h}{1+x}\Bigg)}-\log_{e}{\Bigg(\dfrac{\cancel{1-x}}{\cancel{1-x}}+\dfrac{(-h)}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}-\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{h}}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[\dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{h}-\dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{h}\Bigg]}$
$=\,\,\,$ $\dfrac{1}{2} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg[\dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{h}+\dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{-h}\Bigg]}$
Now, we can use the addition rule of limits to evaluate the limit of the sum of the functions.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{h}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{-h}\Bigg]}$
Each term in the expression is close to the form required by the logarithmic limit rule, but some adjustment is needed.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{\dfrac{h \times (1+x)}{1+x}}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{\dfrac{(-h) \times (1-x)}{1-x}}\Bigg]}$
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{\dfrac{h}{1+x} \times (1+x)}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{\dfrac{(-h)}{1-x} \times (1-x)}\Bigg]}$
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{(1+x) \times \dfrac{h}{1+x}}}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{(1-x) \times \dfrac{(-h)}{1-x}}\Bigg]}$
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg(\dfrac{1}{1+x} \times \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{\dfrac{h}{1+x}}\Bigg)}$ $+$ $\displaystyle \large \lim_{h \,\to\, 0}{\normalsize \Bigg(\dfrac{1}{1-x} \times \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{\dfrac{(-h)}{1-x}}\Bigg)\Bigg]}$
The limit is taken with respect to $h$, so the terms in $x$ are constants and can be taken outside the limiting operation. Hence, use the constant multiple rule of limits to separate the constants from the limiting operation.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{1}{1+x} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{h}{1+x}\Bigg)}}{\dfrac{h}{1+x}}}$ $+$ $\dfrac{1}{1-x} \times \displaystyle \large \lim_{h \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Bigg(1+\dfrac{(-h)}{1-x}\Bigg)}}{\dfrac{(-h)}{1-x}}\Bigg]}$
Evaluate the Limit of the functions
$(1)\,\,\,$ When $h\,\to\,0$ then $\dfrac{h}{1+x}\,\to\,\dfrac{0}{1+x}$. Therefore $\dfrac{h}{1+x}\,\to\,0$. Now, take $y = \dfrac{h}{1+x}$. Hence, $y\,\to\,0$.
$(2)\,\,\,$ Similarly, when $h\,\to\,0$ then $-h\,\to\,0$, and then $\dfrac{-h}{1-x}\,\to\,\dfrac{0}{1-x}$. Therefore $\dfrac{-h}{1-x}\,\to\,0$. Now, take $z = \dfrac{-h}{1-x}$. Hence, $z\,\to\,0$.
Now, we can express the first term in the expression in terms of $y$ and the second term in terms of $z$ by these substitutions.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{1}{1+x} \times \displaystyle \large \lim_{y \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+y\Big)}}{y}}$ $+$ $\dfrac{1}{1-x} \times \displaystyle \large \lim_{z \,\to\, 0}{\normalsize \dfrac{\log_{e}{\Big(1+z\Big)}}{z}\Bigg]}$
The limit of each function is one as per the logarithmic limit rule.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{1}{1+x} \times 1$ $+$ $\dfrac{1}{1-x} \times 1\Bigg]$
Now, simplify the whole expression.
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{1}{1+x}+\dfrac{1}{1-x}\Bigg]$
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{1 \times (1-x)+(1+x) \times 1}{(1+x)(1-x)}\Bigg]$
$=\,\,\,$ $\dfrac{1}{2} \times \Bigg[\dfrac{(1-x)+(1+x)}{(1+x)(1-x)}\Bigg]$
$=\,\,\,$ $\dfrac{1}{2} \times \dfrac{(1-x)+(1+x)}{(1+x)(1-x)}$
$=\,\,\,$ $\dfrac{1}{2} \times \dfrac{1-x+1+x}{1-x^2}$
$=\,\,\,$ $\dfrac{1}{2} \times \dfrac{1+1-x+x}{1-x^2}$
$=\,\,\,$ $\dfrac{1}{2} \times \dfrac{2-\cancel{x}+\cancel{x}}{1-x^2}$
$=\,\,\,$ $\dfrac{1}{2} \times \dfrac{2}{1-x^2}$
$=\,\,\,$ $\dfrac{1 \times 2}{2 \times (1-x^2)}$
$=\,\,\,$ $\dfrac{2}{2 \times (1-x^2)}$
$=\,\,\,$ $\dfrac{\cancel{2}}{\cancel{2} \times (1-x^2)}$
$=\,\,\,$ $\dfrac{1}{1-x^2}$
Therefore, we have proved that the differentiation of the inverse hyperbolic tangent function equals the reciprocal of one minus the square of the variable.
$\therefore \,\,\,\,\,\,$ $\dfrac{d}{dx}{\, \tanh^{-1}{x}}$ $\,=\,$ $\dfrac{1}{1-x^2}$
In this way, the derivative rule of the inverse hyperbolic tangent function is derived mathematically in differential calculus by the first principle of differentiation. | CommonCrawl |
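As a quick sanity check (not part of the proof), the result can also be verified numerically. The sketch below compares a central-difference approximation of the derivative of $\tanh^{-1}{x}$, computed from its logarithmic form, with $\dfrac{1}{1-x^2}$ at a few sample points; the step size and test points are arbitrary choices.

```python
import math

def arctanh(x):
    # logarithmic form used in the proof: (1/2) ln((1+x)/(1-x)), valid for -1 < x < 1
    return 0.5 * math.log((1 + x) / (1 - x))

h = 1e-6
for x in (-0.9, -0.5, 0.0, 0.3, 0.8):
    numerical = (arctanh(x + h) - arctanh(x - h)) / (2 * h)  # central difference
    exact = 1 / (1 - x * x)                                  # derived derivative rule
    print(f"x={x:+.2f}  numerical={numerical:.6f}  exact={exact:.6f}")
```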
Predictors of substance use among Jimma University instructors, Southwest Ethiopia
Abraham Tamirat Gizaw ORCID: orcid.org/0000-0003-1887-44541,
Demuma Amdisa1 &
Yohannes Kebede Lemu1
Use of substances such as alcohol, khat leaves (Catha edulis) and tobacco has become one of the major rising public health and socioeconomic problems worldwide and has increased dramatically in developing countries. The aim of this study was to assess the predictors of substance use among Jimma University instructors.
An institution-based cross-sectional study was conducted in 2018 among Jimma University instructors. A two-stage cluster sampling procedure was employed to select study participants by their departments, and data were collected using a structured, self-administered questionnaire, with severity assessed by the standardized Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), criteria for substance use disorder. Multivariable logistic regression was used to identify independent predictors of substance use. Variables with a P-value < 0.05 in the final fitted model were declared to be associated with the outcome variable.
A total of 330 instructors were involved in this study, with a response rate of 96.2%. About 225 of the respondents had ever used a substance in their life (khat, alcohol, cigarettes, or all), making the lifetime prevalence of substance use 68.2%. The lifetime prevalence of khat chewing, alcohol use, and cigarette smoking was 51.6, 81.3, and 17.3%, respectively. The prevalence of substance use disorder among users was 36.9%. Living with family (AOR = 4.136 [2.004–8.536] 95% CI), no family substance use history (AOR = 0.220 [0.098–0.495] 95% CI), friends' substance use (AOR = 9.047 [4.645–17.620] 95% CI), social norms favoring substance use (AOR = 1.123 [1.020–1.238] 95% CI) and perceived benefit of substance use (AOR = 1.077 [1.008–1.151] 95% CI) were predictors of substance use.
Perception toward substances and the influence of family and peers were associated with substance use. Therefore, a multifaceted approach should be designed, directed at individual-, interpersonal- and community-level interventions targeting substance-related misperceptions and the social norms contributing to substance use.
A psychoactive substance is a chemical that acts primarily upon the central nervous system when taken, altering brain function and resulting in temporary changes in perception, mood, consciousness, and behavior [1]. Use of substances such as alcohol, khat leaves (Catha edulis) and tobacco has become one of the major rising public health and socioeconomic problems worldwide [2]. It is estimated that 90% of the global population aged 12 or older are classified with dependency on psychoactive substances [3]. About 230 million people, or 5% of the world's adult population, are estimated to have used an illegal drug at least once in their life. Users of alcohol and other drugs (khat and tobacco) are estimated at 27 million, which is 0.6% of the world's adult population [4].
Globally, the harmful use of alcohol causes approximately 3.3 million deaths every year (or 5.9% of all deaths), and 5.1% of the global burden of disease is attributable to alcohol consumption. Annually, 320,000 young people aged 15–29 years die from alcohol-related causes, accounting for 9% of all deaths in that age group globally [4]. Generally, alcohol and drug use disorders are more common among males than females [5]. Research has shown that substance use, particularly in developing countries, has dramatically increased [6]. At least 15.3 million people have substance use disorders worldwide [7]. Substance use is often initiated in adolescence, but it is during adulthood that prevalence rates for substance use disorder peak [8].
Substance use is harmful, leading to decreased academic performance, increased risk of HIV and other sexually transmitted diseases, and psychiatric disorders such as depression, lethargy, hopelessness, and insomnia [9]. It also undermines economic and social development and contributes to crime, instability, and insecurity. Alcohol and drug abuse is also a major burden to society, causing economic costs, health costs, crime-related costs and losses in productivity [10]. Heavy consumption of alcohol, when combined with khat chewing, aggravates the situation; suicide attempts are one example [8]. Alcohol and other drug use is also costly to society, with estimated annual expenses in the United States of $185 billion for alcohol and $181 billion for other drug use and their consequences [9].
In sub-Saharan Africa, psychoactive substance use has dramatically increased in recent years. The rapid economic, social, and cultural transitions that most countries in sub-Saharan Africa are now experiencing have created favorable conditions for increased and socially disruptive use of drugs and alcohol [2]. A study done in Tanzania shows that a large percentage of adults had used tobacco over the past 30 days (24.0% in Dar es Salaam and 38.8% in the old stone town of Zanzibar). Of the various kinds of tobacco, cigarettes were the most popular. For alcohol, 33.7% of the adult respondents in Dar es Salaam and 19.4% in Zanzibar had consumed alcohol over the past 30 days, with beer being the most popular drink [11]. Khat is another psychoactive substance whose use is common in East Africa, the Arabian Peninsula, and among immigrants from these regions living in the West. In Ethiopia, the national prevalence of khat use was estimated at 15%. The highest prevalence (64.9%) was observed in the southwestern part of Ethiopia and the lowest, 4 and 7.8%, in the northern part. These studies indicated that khat use was mainly associated with being a Muslim religion follower, being male, alcohol drinking and cigarette smoking [12].
In Ethiopia, alcohol and other drugs like khat are commonly used in both urban and rural areas, especially by young people. Khat chewing, drinking alcohol and using drugs are taken as means of spending spare time and entertainment. Khat and alcoholic drinks have been used traditionally for a long period of time, and khat is now consumed across many faiths, social levels and age groups [13]. According to the Ethiopian Demographic and Health Survey (EDHS) 2016 report, 35% of women and about half of men (46%) reported drinking alcohol at some point in their lives. Cigarette smoking and the use of any type of tobacco are rare among women (less than 1%). Four percent of men smoke some type of tobacco, among whom almost all smoke cigarettes [14].
In Ethiopian universities, different independent and fragmented studies have been conducted to assess the prevalence and predictors of khat chewing. In addition to prevalence, socio-demographic factors (being male) and other predictors such as peer pressure, family khat chewing practice, alcohol drinking, and cigarette smoking were the most common predictors reported by these studies [15]. Smoking cigarettes, drinking alcohol, and chewing khat were widely prevalent among men. Among men, the prevalence of current daily smoking was 11.0%. Binge drinking of alcohol was reported by 10.4% of men. Similarly, 15.9% of men regularly chewed khat. Consequently, 26.6% of men and 2.4% of women reported practicing one or more of these behaviors [16]. A similar study done at Mekelle University, Ethiopia, showed that 82% of ever users of sleeping pills were current users; nearly 72% of ever khat users currently chewed khat, and approximately 67% of ever smokers persisted in smoking. Comparably, 65% of cannabis ever users had consumed it in the 30 days prior to the study. Heroin (10) and cocaine (14) were the least currently consumed drugs [17].
In Ethiopian universities, not only students but also instructors use psychoactive substances. The main reason given for smoking among university instructors was relaxation with friends (47.1% of ever smokers), followed by peer pressure (23.5%); keeping alert while reading and relaxation with friends were the main reasons for starting chewing (40 and 31.7%, respectively) [18]. Another study done on instructors and students reported the prevalence of khat use in Ethiopia as 32 and 42%, respectively. Khat chewing is believed to affect a large segment of the Ethiopian population, especially the productive age group. It has a negative impact on health, socio-economic and political matters [19]. Prolonged and excessive use of khat is linked with several health problems [20]. A study done on prisoners in Jimma town showed that the overall prevalence of substance use disorder was 55.9%. The prevalence of khat abuse was 41.9%; alcohol use disorder, 36.2%; nicotine dependence, 19.8%; and cannabis use disorder, 3.6%; and a family history of substance use was positively associated with substance use disorder [21].
Substance consumption is not legally prohibited in Ethiopia, except for tobacco smoking in public places. Culturally, substances are consumed in social gatherings and among friends as a leisure-time activity and for relaxation. In addition, production of alcohol such as beer is increasing, accompanied by extensive irresponsible advertising.
Even though substance use has become a common problem in Ethiopia, most studies have mainly focused on adolescents and university students. In contrast, there is a scarcity of information regarding the problem among adults. Moreover, university instructors are a segment of the population who can play a great role in preventing the initiation of substance use among university students, and they are also a backbone for the development of the country. Therefore, an assessment of substance use and its associated factors is important to support efforts to reduce its undesired consequences. The aim of this study is thus to assess predictors of substance use among university instructors at Jimma University, Ethiopia.
Method and materials
An institution-based cross-sectional study was conducted from 19th March to 20th May 2018 at Jimma University, Ethiopia. Jimma University is located in Jimma city, Oromia regional state, 335 km southwest of Addis Ababa. There are four campuses in the University (the main campus, the technology campus, the College of Business and Economics, and the agricultural campus), with a total of 1687 teaching staff within nine colleges.
The sample size was determined using the single population proportion formula with the assumption of a 95% confidence level, 5% margin of error, 10% non-response rate and a proportion of substance use (p) taken to be 50%.
The sample size was determined using formula (n):
$$ n=\frac{{\left({Z}_{\alpha /2}\right)}^2(p.q)}{d^2} $$
$$ n=384 $$
Since the source population is less than 10,000, the finite population correction formula NF = n/(1 + n/N) was used, where N is the source population (all teaching staff of Jimma University in 2017/2018, N = 1687), NF is the required sample size, and n is the calculated sample size (384). This gave a total sample size of 312. Considering a 10% non-response rate, the final sample size was 343.
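The arithmetic behind these figures can be reproduced with the short sketch below (the rounding conventions at each step are assumptions made to match the reported values).

```python
z = 1.96   # z-value for a 95% confidence level
p = 0.5    # assumed proportion of substance use
d = 0.05   # margin of error
N = 1687   # source population: all teaching staff

n = (z**2 * p * (1 - p)) / d**2   # single population proportion formula ~= 384.16
n = int(n)                        # 384
nf = int(n / (1 + n / N))         # finite population correction -> 312 (rounding down assumed)
final = round(nf * 1.10)          # add 10% for non-response -> 343
print(n, nf, final)
```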
Sampling procedure
A two-stage cluster sampling procedure was employed to select study participants from Jimma University (JU) academic staff. First, 30% of the departments were selected from the total number of departments on the campuses. Then, every academic staff member in each selected department was included in the study. Computer-generated random numbers were used to select the departments based on department lists (Fig. 1).
Fig. 1 Sampling procedure for predictors of substance use among Jimma University instructors, 2018
Data collection tools
Data were collected using a structured questionnaire adapted from different studies and modified accordingly. The questionnaire was translated into the local language, Amharic, and then back-translated to English by language experts. The questionnaire was structured into five sections: (a) socio-demographic data, (b) substance-related perceptions, (c) social influence, (d) precipitators for substance use (life stressors and depression level), which were assessed as potential predictive factors for substance use, and (e) substance use and its severity.
The severity of the outcome variable (substance use disorder) was assessed using the latest version of the diagnostic criteria for substance use disorder in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5). The DSM-5 diagnostic criteria for substance use disorder are simplified and characterized by severity rather than by a distinction between abuse and dependence. It is a standardized tool that works across countries. The reliability coefficient of the tool for this study was 0.974. Individuals with at least two positive answers were considered to have substance use disorder.
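A minimal sketch of how such criterion counts can be turned into the severity categories reported in the Results is given below; the cut-offs follow the standard DSM-5 convention (2–3 mild, 4–5 moderate, 6 or more severe), and the function name and input format are illustrative, not the study's actual scoring script.

```python
def dsm5_severity(answers):
    """answers: list of 11 booleans, one per DSM-5 criterion question."""
    count = sum(bool(a) for a in answers)
    if count < 2:
        return "no substance use disorder"
    if count <= 3:
        return "mild"
    if count <= 5:
        return "moderate"
    return "severe"

# example: three positive criteria -> classified as mild
print(dsm5_severity([True, True, True] + [False] * 8))
```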
Data quality control
The questionnaire was prepared in English and pretested on 5% of the sample among instructors of a nearby university, Mizan-Tepi University, which is 298 km away from Jimma town. Seven data collectors and two facilitators were trained, and proper instruction was given by the investigator before the survey. The collected data were reviewed and checked on a daily basis for completeness before data entry.
Measurements and operational definitions
Substance: three commonly used psychoactive drugs: alcohol, cigarette, and khat.
Substance Use: Taking any of the three commonly used psychoactive substances: alcohol, cigarette and/or khat.
Substance use disorder: respondents who used any of the three substances and answered yes to at least two of the eleven DSM-5 criteria questions.
Lifetime prevalence: the proportion of individuals who had ever used the substance in their lifetime.
The current prevalence of substance use: the proportion of individuals who used substances within one month preceding the study.
Perceived benefit of substance use: the summed score of six Likert-scale items; a score approaching the maximum was considered a high perceived benefit of substance use.
Perceived risk of substance use: the summed score of six Likert-scale items; a score approaching the maximum was considered a high perceived risk of substance use.
Family support function: the summed score of nine Likert-scale items; a score approaching the maximum was considered a high family support function.
Social norm: the summed score of four Likert-scale items; a score approaching the maximum indicated that important individuals or groups approve of the respondent's substance use.
Perceived availability of substances: the summed score of three Likert-scale items; a score approaching the maximum was considered high perceived availability of substances.
Perceived accessibility of substances: the summed score of three Likert-scale items; a score approaching the maximum was considered high perceived accessibility of substances.
Data were coded and entered using EpiData version 3.1 and then exported to the Statistical Package for the Social Sciences (SPSS) version 20 for analysis. After cleaning the data, descriptive statistics such as frequencies, proportions, and percentages were computed for categorical variables, while measures of central tendency and dispersion were computed for numerical variables. Logistic regression analysis was used to identify factors associated with substance use. Bivariate logistic regression was carried out to select candidate variables for multivariable logistic regression analysis (P-value < 0.25); the candidate variables were then entered into multiple logistic regression using a backward method to identify statistically significant predictors of substance use and to control for possible confounders. The degree of association between independent and dependent variables was assessed using odds ratios, and statistically significant factors were declared at a 95% confidence interval with a P-value of less than 0.05. Finally, the test for model fitness was done using the Hosmer-Lemeshow test. Multicollinearity among the independent variables was checked using the variance inflation factor (VIF).
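A minimal sketch of this two-step modelling workflow is shown below using statsmodels. The data frame, variable names and effect sizes are entirely hypothetical (the original analysis was done in SPSS); the sketch only illustrates the bivariate screening at P < 0.25 followed by a multivariable model whose exponentiated coefficients are the adjusted odds ratios. Backward elimination of non-significant terms would follow the same pattern.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "friend_use":  rng.integers(0, 2, n),
    "family_use":  rng.integers(0, 2, n),
    "social_norm": rng.integers(4, 21, n),
})
# synthetic binary outcome loosely driven by the predictors (illustration only)
lin = -2 + 1.5 * df["friend_use"] + 0.8 * df["family_use"] + 0.1 * df["social_norm"]
df["substance_use"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# step 1: bivariate screening, keep predictors with P < 0.25
candidates = [v for v in ["friend_use", "family_use", "social_norm"]
              if sm.Logit(df["substance_use"], sm.add_constant(df[[v]])).fit(disp=0).pvalues[v] < 0.25]

# step 2: multivariable model; adjusted odds ratios are exp(coefficients)
full = sm.Logit(df["substance_use"], sm.add_constant(df[candidates])).fit(disp=0)
print(np.exp(full.params))
```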
Socio-demographic characteristics
A total of 330 instructors were involved in this study, making a response rate of 96.2%. Of the total respondents, 276 (83.9%) were male, and 146 (44.4%) were in the age group 25–29 years. The mean age was 29.73 ± 6.46 (SD) years. More than half of the respondents were single (187, 56.8%), and 164 (49.8%) were living alone as their current living arrangement. The majority of the respondents, 197 (59.9%), were second-degree holders, and 105 (31.9%) were at the College of Health Sciences. The mean income of the respondents was 5650.05 Birr (SD 1748.554) (See Table 1).
Table 1 Socio-demographic characteristics of instructors, Jimma University, Southwest Ethiopia, 2018
Substance-related perception
The respondent's mean score for risk perception of substance use was 21.85 ± 8.23 (SD); while the perceived benefit of substance use 13 ± 7.11 (SD). Whereas, the mean score of the respondent's perceived availability and perceived accessibility of substance were 8.85 ± 4.02 (SD), 7.48 ± 3.87 respectively. The mean score for the perceived affordability of substance was 2.95 ± 1.57 (SD).
One hundred twenty-six (38.18%) of the respondents had a family history of substance use. Among current substance users, the numbers with and without a family history of substance use were almost equal, 113 (50.2%) and 112 (49.8%), respectively. The mean score for the family support function of the respondents was 37.29 ± 7.15 (SD). Regarding respondents' friends' substance use behavior, 196 (87.1%) of the respondents had friends who had used substances. The respondents' mean score for social norms was 8.93 ± 4.97 (SD).
Substance use precipitators
Respondents' mean score for life stressors was 24.45 ± 11.08 (SD), while the mean score for depression status was 17.12 ± 7.71 (SD). Regarding reasons for using substances, among respondents who chewed khat, 62% did so for reading and 54.6% because they liked the feeling. Among instructors who drank alcohol, 41% did so to get relief from sadness and because they liked the feeling.
Prevalence of substance use
Regarding the history of substance use, 225 (68.2%) of the respondents had ever used a substance. Of these, 120 (53.3%) had chewed khat in their lifetime, and almost all of them, 117 (97.5%), had chewed in the past thirty days; about 46 (38.3%) chewed at least once a week. Of those with a history of substance use, the majority, 183 (81.3%), had used alcohol in their life, and 223 (99.1%) had drunk in the past thirty days. The most common frequency of drinking was at least once a month (89, 48.6%), and the number of users decreased as the frequency of use increased, from at least once a week (67, 36.6%) to daily (4, 2.2%). Among types of alcoholic drinks, the majority of respondents most frequently drank beer (150, 82.0%), mostly drinking one to two bottles on a given day (86, 47.0%); the number of users decreased as the amount increased to three to four, five to six, and seven to nine bottles: 58 (31.7%), 32 (17.5%), and 7 (3.8%), respectively. Of the substance users, 39 (17.3%) had smoked cigarettes in their life, and all of them (100%) had smoked in the past thirty days. Of the respondents who smoked cigarettes, 14 (35.0%) smoked on a daily basis, and 26 (66.7%) smoked fewer than five cigarettes per day. Of those who used substances, 141 (62.7%) used only one of the three substances, while 55 (24.4%) and 29 (12.9%) used two and all three of the substances, respectively. The most commonly used substance among instructors was alcohol, followed by khat and cigarettes (See Table 2).
Table 2 Substance use characteristics of instructors in Jimma University, Southwest Ethiopia, 2018
Context of use
The most preferred time of use was the afternoon for chewing khat (95, 79.2%) and night for drinking alcohol (155, 84.7%), while 15 (38.5%) smoked cigarettes at any time. The majority of respondents who used substances preferred to chew khat (75, 63.6%) and drink alcohol (147, 80.3%) with their friends. The mean age of substance use initiation was 20 ± 2.83 (SD) years.
Substance use disorder prevalence and characteristics
Of the total of 225 respondents who used substances, 83 had substance use disorder, whereas the remaining 142 did not, making the prevalence of substance use disorder 36.9%. Of the 83 instructors who had substance use disorder, 48 (58%) had mild and 19 (23%) had moderate substance use disorder. However, 16 (19%) of the instructors had severe substance use disorder, which affected their daily activities and their life (See Table 3).
Table 3 Substance Use Disorder (SUD) using DSM-V Criteria among instructors in Jimma University, Southwest Ethiopia, 2018
Factors associated with substance use
Of the candidate variables from the bivariate analysis, living arrangement, family substance use history, friends' substance use history, perceived benefit of substance use and social norms were found to be significant predictors of substance use among instructors. Instructors who lived with family were 4 times more likely to use substances than those who lived alone (AOR = 4.136 [2.004–8.536] 95% CI). Instructors with no family history of substance use had 4.5 times lower odds of using substances compared with instructors with a family history of substance use (AOR = 0.220 [0.098–0.495] 95% CI). Meanwhile, instructors whose friends had a history of substance use had 9 times higher odds of substance use compared with instructors whose friends had no history of substance use (AOR = 9.047 [4.645–17.620] 95% CI). As instructors' perceived benefit of substance use increased by one unit, the odds of substance use increased by a factor of 1.08 (AOR = 1.077 [1.008–1.151] 95% CI). As social norms favoring substance use increased by one unit, the odds of substance use increased by a factor of 1.12 (AOR = 1.123 [1.020–1.238] 95% CI) (See Table 4).
Table 4 Multivariable logistic regression for substance use among instructors, Jimma University, Southwest Ethiopia, 2018
This study revealed that the prevalence of substance use among Jimma University instructors was 68.2%, which is consistent with the study done in the Jimma zone, which reported 68.5% [22]. In contrast, the prevalence of substance use found in this study is relatively higher than that of the study among Gondar University instructors, in which 42% of the instructors were either lifetime cigarette smokers or khat chewers or both [11]. The difference may be because the current study included alcohol use in addition to the prevalence and risk factors of cigarette smoking and khat chewing; moreover, in this study substance use was defined as using any of the substances, that is, khat chewing, cigarette smoking or alcohol drinking. Geographic differences and the extensive khat production in the area may also have contributed to the differences.
The lifetime prevalence of alcohol drinking found in this study (81.3%) is relatively higher than that reported in studies done in India (33.78%), Zambia (61%), and a rural part of South Africa (67%) [11, 23, 24]. These differences might be due to socio-cultural differences and study population size: the previous studies were community-based and conducted on large populations, while the current study was institution-based and conducted on a small number of study subjects. This result for alcohol prevalence is also higher than those of cross-sectional studies done in Addis Ababa and Jimma town, where past-twelve-month alcohol consumption was 69 and 50%, respectively [16, 22].
The lifetime prevalence of khat chewing found in this study was 53.3%, higher than that reported among Gondar University instructors (21%) and Addis Ababa adults (18.3%), while it is lower than that of the study done in Jimma (68.5%) [16, 18, 22]. The difference might be due to the study settings, in which khat chewing is more common in Jimma than in Gondar and Addis Ababa; meanwhile, the result of this study was lower than that of the Jimma town study because the current study was conducted among the instructors of a single institution. The lifetime prevalence of cigarette smoking in this study was 17.3%. This was higher than that of the study done among Debre-Berhan University students, which was 7.4% [25]. This difference might be due to the instructors' financial capability to afford the price of cigarettes relative to the students. In contrast, studies done in different countries in Africa, America and Asia showed higher lifetime prevalence: 27.8% in Sudan [10], 26% in America, 22.84% in India, 30% in South Africa, 31% in Zambia, and 19.7% in Zanzibar [23,24,25,26,27]. The difference may be due to socio-cultural differences between the study settings. The tobacco prevalence in this study is also lower than that of studies conducted in Jimma town (35.5%), the Jimma psychiatric outpatient ward (20.5%) [22, 28], and Jimma town prisoners (19.8%) [21].
This study showed that the prevalence of substance use disorder among Jimma University instructors was 36.9%. This finding was lower than that of the study done among Jimma town prisoners, which was 55% [21]. It is also relatively higher than the lifetime prevalence rates of substance use disorders reported in studies conducted in Ukraine and the USA, 15 and 8%, respectively [29, 30]. The possible reason could be differences in population size and study settings: the previous studies were population-based surveys starting from individuals 12 years old, while this study was conducted only among the instructors of one institution. In addition, the difference might be due to the fact that these countries have better behavioral therapy services for prevention, as well as early treatment centers for substance use disorder, before the problem becomes severe.
Regarding predictive factors, this study showed that instructors who lived with family were 4 times more likely to use substances than their counterparts who lived alone (AOR = 4.136 [2.004–8.536] 95% CI). This might be due to social norms and religious conditions in which the family perceives some substance use as normal behavior and religiously connected; in particular, khat chewing is considered normal at weekends and holidays among Muslims. Similarly, alcohol consumption on special holidays is considered normal among Orthodox Christianity followers, which might contribute to substance use with families, putting them at risk of psychoactive substance use behavior. The finding is in line with other studies done in India that assessed the prevalence and pattern of substance abuse and revealed that living alone or with a friend is a factor less often associated with substance use [23].
This study showed that instructors with no family history of substance use had 4.5 times lower odds of using substances compared with instructors with a family history of substance use (AOR = 0.220 [0.098–0.495] 95% CI). The finding of the current study is consistent with a study done among high school students in Woreta town, North East Ethiopia [31]. It is also consistent with a systematic analysis conducted to summarize the key epidemiologic literature on social (or exogenous) factors that may shape substance use behavior, which showed that parental substance use appears to be the primary social factor associated with smoking and alcohol initiation [32].
Another predictive factor revealed in this study was that instructors whose friends had a history of substance use had a nine times higher risk of substance use compared with instructors whose friends had no history of substance use (AOR = 9.047 [4.645–17.620] 95% CI). This is similar to the study done among Hawassa University students on alcohol and khat use, in which students who had a friend who used substances had 4.6 times higher odds of substance use than students who had no such friends [33]. The finding of the current study is also in line with another study, which revealed that students who had friends who used substances had a 2.14 times higher risk of using substances than students whose friends did not, even though the study population was different [31].
Another predictive factor for substance use revealed by this study was social norms that favor substance use. As social norms favoring substance use increase, the likelihood that instructors use substances increases as well. This finding is in line with a previous study done in Ethiopia, which showed that community norms favorable to substance use were two times more likely to lead to adolescent substance use than community norms that were not favorable, even though the study was conducted among a different population [31]. Similar findings were also reported from a study done among college freshmen, in which perceived peer drinking norms were positively correlated with both alcohol consumption and alcohol problems [34].
This study revealed that the perceived benefit of using a substance is a predictor of substance use among instructors. A possible explanation is that when instructors perceive benefits from using substances, they tend to use them, taking the perceived advantage as a justification. Conversely, when instructors' perceived benefit of using a substance is low, or the perceived risk is high, the likelihood of engaging in substance use decreases [18].
In the present study, living arrangement, family substance use history, friends' substance use history, social norms and perceived benefit of substance use were positively associated with substance use among instructors. The influence of family and peers, as well as society at large, plays a greater role in instructors' substance use than socio-demographic factors. Substance use is thus the result of a multiplicity of factors and cannot be corrected by a single intervention. Moreover, prevention should work from the individual up to the community level, so that the risk of substance use is understood and community norms become favorable to ensuring positive health for the individual as part of the community. Therefore, a multifaceted approach should be designed, directed at individual-, interpersonal- and community-level interventions targeting substance-related misperceptions and the social norms contributing to substance use.
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
AUD:
Alcohol use disorder
DSM:
Diagnostic and Statistical Manual of Mental Disorders
EDHS:
Ethiopian Demographic and Health Survey
HIV:
Human Immunodeficiency Virus
KAP:
Knowledge, Attitude, Practice
NCSR:
National Comorbidity Survey Replication
SRS:
Simple Random Sampling
Quinn PD, Hur K, Chang Z, Krebs EE, Bair MJ, Scott EL. Incident and long-term opioid therapy among patients with psychiatric conditions and medications: a national study of commercial health care claims. Pain. 2017;158(2017):140–8.
Tesfaye G, Derese A, Hambisa MT. Substance use and associated factors among university students in Ethiopia: a cross-sectional study. J Addict. 2014;2014:p1–8.
Deressa W, Azazh A. Substance use and its predictors among undergraduate medical students of Addis Ababa University in Ethiopia. BMC Public Health. 2011;11(660):1–11.
Birega MG, Addis B, Agmasu M, Tadele M. Descriptive study on magnitude of substance abuse among students of Aman Poly technique college students, Bench Maji zone south west Ethiopia. J Addict Res Ther. 2017;8:320. https://doi.org/10.4172/2155-6105.1000320.
World Health Organization. ATLAS on substance use (2010): resources for the prevention and treatment of substance use disorders. Geneva: World Health Organization; 2010. Available at https://www.who.int/substance_abuse/links/en/
Schulte MT, Ramo MSD, Brown SA. Gender differences in factors influencing alcohol use and drinking progression among adolescents. Clin Psychol Rev. 2010;29(6):535–47.
UNODC. Global overview of drug demand and supply Latest trends, cross-cutting issues. 2017, pp 1–64. Available at https://www.unodc.org/wdr2018/
Schulte MT. Substance use and associated health conditions throughout the lifespan. Public Health Rev. 2014;35(2):1–27.
Patrick ME, Wightman P, Schoeni RF, Schulenberg JE. Socioeconomic status and substance use among young adults: a comparison across constructs and drugs. J Stud Alcohol Drugs. 2012;73(5):772–82.
Osman T, Victor C, Abdulmoneim A, Mohammed H, Abdalla F, Ahmed A, et al. Epidemiology of substance use among university students in Sudan. J Addict. 2016;2016:1–8.
WHO/UNDCP Global Initiative on Primary Prevention of Substance Abuse ("Global Initiative"). Substance use in Southern Africa: knowledge, attitudes, practices and opportunities for intervention; summary of baseline assessments in the Republic of South Africa, the United Republic of Tanzania and the Republic of Zambia. 2014, pp 1–90.
Mihretu A, Teferra S, Fekadu A. Problematic khat use as a possible risk factor for harmful use of other psychoactive substances: a mixed-method study in Ethiopia. Subst Abus Treat Prev Policy. 2017;12(1):1–7.
Manaye M. An assessment of drug abuse among secondary school students of Harari region. 2011, pp 1–99.
The Federal Democratic Republic of Ethiopia. Ethiopia Demographic and Health Survey 2016. 2016, pp 1–551.
Gebrie A, Alebel A, Zegeye A, Tesfaye B. Prevalence and predictors of khat chewing among Ethiopian university students: a systematic review and meta-analysis. PLoS One. 2018;13(4):e0195718. https://doi.org/10.1371/journal.pone.0195718.
Tesfaye F, Byass P, Berhane Y, Bonita R, Wall S. Association of smoking and khat (Catha edulis Forsk) use with high blood pressure among adults in Addis Ababa, Ethiopia, 2006. Prev Chronic Dis. 2008;5(3):A89. http://www.cdc.gov/pcd/issues/2008/jul/07_0137.htm.
Teferi KA. Psychoactive substance abuse and intention to stop among students of Mekelle University, Ethiopia. 2011. Available at http://localhost:80/xmlui/handle/123456789/11908.
Kebede Y. Cigarette smoking and Khat Chewing among University instructors in Ethiopia. East Afr Med J. 2002;79(5):274–8.
Kebede Y, Abula T, Ayele B, Feleke A, Degu G, Kifle A, et al. Substance abuse for the Ethiopian health center team: Ethiopian public health training initiative. 2005, pp 1–84.
Gebrehanna E, et al. Prevalence and predictors of harmful Khat use among university students in Ethiopia. Subst Abuse Res Treat. 2014;8:45–51. https://doi.org/10.4137/SaRt.S14413.
Yitayih Y, Abera M, Tesfaye E, Mamaru A, Soboka M, Adorjan K. Substance use disorder and associated factors among prisoners in a correctional institution in Jimma, Southwest Ethiopia: a cross-sectional study. BMC Psychiatry. 2018;18(314):1–9.
Town J, Jima SB, Tefera TB, Ahmed MB. Prevalence of Tobacco consumption, alcohol, Khat (Catha Edulis) use and high blood pressure among adults in Jimma Town, South West Ethiopia. Sci J Public Health. 2015;3(5):650–4.
Kumar V, Nehra DK, Kumar P, et al. Prevalence and pattern of substance abuse: a study from de-addiction center. Delhi Psychiatry J. 2013;16(1):1–13.
Siegel K. Gender, sexual orientation, and adolescent HIV testing: a qualitative analysis. J Assoc Nurses AIDS Care. 2011;99(2011):358–66.
Gebremariam TB, Mruts KB, Neway TK. Substance use and associated factors among Debre Berhan University students, Central Ethiopia. Subst Abus Treat Prev Policy. 2018;13(1):1–8.
Merline AC, O'Malley PM, Schulenberg JE, Bachman JG, Johnston LD. Substance use among adults 35 years of age: prevalence, adulthood predictors, and impact of adolescent substance use. Res Pract. 2004;94(1):96–102.
World Health Organization (WHO). Substance use in Southern Africa knowledge, attitudes, practices and opportunities for intervention summary. 2003. http://www.who.int/substance_abuse/UNDCP_WHO_initiative.
Zenebe Y, Negash A, Gt F, Krahl W. Alcohol use disorders and its associated factors among psychiatric outpatients. J Alcohol Drug Depend. 2015;3(3):1–8.
Merikangas KR, Mcclair VL. Epidemiology of Substance use disorder. Hum Genet. 2012;131:779–89. https://doi.org/10.1007/s00439-012-1168-0.
Tice P. Behavioral health trends in the United States: results from the 2014 National Survey on Drug Use and Health. 2014. Retrieved from https://www.samhsa.gov/data/sites/default/files/NSDUH-FRR1-2014/NSDUH-FRR1-2014.
Birhanu AM, Bisetegn TA, Woldeyohannes SM. High prevalence of substance use and associated factors among high school adolescents in Woreta Town, Northwest Ethiopia: multi-domain factor analysis. BMC Public Health. 2014;14(1):1–11.
Galea S, Nandi A, Vlahov D. The social epidemiology of substance use. Epidemiol Rev. 2004;26:36–52.
Kassa A, Wakgari N, Taddesse F. Determinants of alcohol use and khat chewing among Hawassa University students, Ethiopia: a cross-sectional study. Afr Health Sci. 2016;16(3):822–30.
Stone AL, Becker LG, Huber AM, Catalano RF. Review of risk and protective factors of substance use and problem use in emerging adulthood. Addict Behav. 2012;37(7):747–75. https://doi.org/10.1016/j.addbeh.2012.02.
We would like to thank Jimma University, Faculties and Department officials for facilitating the data collection process and for their co-operation during data collection.
The research was conducted with financial support from Jimma University.
Institute of Health, Faculty of Public Health, Department of Health, Behavior and Society, Jimma University, Jimma, Ethiopia
Abraham Tamirat Gizaw, Demuma Amdisa & Yohannes Kebede Lemu
Abraham Tamirat Gizaw
Demuma Amdisa
Yohannes Kebede Lemu
Abraham T designed the study, collected data, analyzed the data and reviewed the manuscript. Demuma A designed the study, supervised data collection, analyzed the data, drafted the manuscript and critically reviewed the manuscript. Yohannes K designed the study, collected data, analyzed the data and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Abraham Tamirat Gizaw.
The research was approved by the research ethics committee (REC) of the institute of health (IoH), Jimma University (JU), before data collection. Written consent was sought from each eligible respondent. The objectives of the study and its benefits were explained in a language they can understand. Study participants were informed that the study would not have any risks. Furthermore, items seeking personal information (like name, phone number, and identification numbers) were kept confidential.
The authors declare that there is no conflict of interest in this work.
Gizaw, A.T., Amdisa, D. & Lemu, Y.K. Predictors of substance use among Jimma University instructors, Southwest Ethiopia. Subst Abuse Treat Prev Policy 15, 2 (2020). https://doi.org/10.1186/s13011-019-0248-8
January 2012, 32(1): 167-190. doi: 10.3934/dcds.2012.32.167
Asymptotic behavior of solutions to a one-dimensional full model for phase transitions with microscopic movements
Jie Jiang 1, and Boling Guo 2,
Institute of Applied Physics and Computational Mathematics, PO Box 8009, Beijing 100088, China
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing, 100088
Received July 2010 Revised November 2010 Published September 2011
This paper is devoted to the study of the long-time behavior of solutions to a one-dimensional full model for first-order phase transitions. Our system features a strongly nonlinear internal energy balance equation, governing the evolution of the absolute temperature $\theta$, which is coupled with an evolution equation for the phase change parameter $f$ with a third-order nonlinearity $G_2'(f)$ in place of the customarily constant latent heat. The main novelty of this paper is the argument establishing Lemma 3.1, which enables us to obtain estimates of the global solutions that are uniform in time. The asymptotic behavior of the solutions as time goes to infinity and the compactness of the orbit are obtained. Furthermore, we investigate the dynamics of the system and prove the existence of global attractors.
Keywords: global attractor, longtime behaviors, microscopic movements, first order phase transitions, existence and uniqueness.
Mathematics Subject Classification: Primary: 35B40, 80A22; Secondary: 35B41, 35K2.
Citation: Jie Jiang, Boling Guo. Asymptotic behavior of solutions to a one-dimensional full model for phase transitions with microscopic movements. Discrete & Continuous Dynamical Systems - A, 2012, 32 (1) : 167-190. doi: 10.3934/dcds.2012.32.167
A Theory of Selective Prediction
Mingda Qiao, Gregory Valiant
We consider a model of selective prediction, where the prediction algorithm is given a data sequence in an online fashion and asked to predict a pre-specified statistic of the upcoming data points. The algorithm is allowed to choose when to make the prediction as well as the length of the prediction window, possibly depending on the observations so far. We prove that, even without any distributional assumption on the input data stream, a large family of statistics can be estimated to non-trivial accuracy. To give one concrete example, suppose that we are given access to an arbitrary binary sequence $x_1, \ldots, x_n$ of length $n$. Our goal is to accurately predict the average observation, and we are allowed to choose the window over which the prediction is made: for some $t < n$ and $m \le n - t$, after seeing $t$ observations we predict the average of $x_{t+1}, \ldots, x_{t+m}$. This particular problem was first studied in Drucker (2013) and referred to as the "density prediction game". We show that the expected squared error of our prediction can be bounded by $O(\frac{1}{\log n})$ and prove a matching lower bound, which resolves an open question raised in Drucker (2013). This result holds for any sequence (that is not adaptive to when the prediction is made, or the predicted value), and the expectation of the error is with respect to the randomness of the prediction algorithm. Our results apply to more general statistics of a sequence of observations, and we highlight several open directions for future work.
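For intuition only, here is a small Python sketch of the binary density prediction game with a deliberately naive selective strategy: pick a scale j uniformly at random, observe the first 2^j points, and predict their mean for the next 2^j points. This toy simulation is not the algorithm or the analysis from the paper; all function names and parameters below are invented for illustration.

import random

def squared_error_one_trial(x, num_scales):
    # Naive selective prediction: choose a random scale j, watch the first
    # 2**j observations, then predict their mean for the next 2**j observations.
    j = random.randrange(num_scales)
    m = 2 ** j
    prediction = sum(x[:m]) / m
    truth = sum(x[m:2 * m]) / m
    return (prediction - truth) ** 2

def mean_squared_error(x, trials=5000):
    n = len(x)
    num_scales = max(1, (n.bit_length() - 1) // 2)  # keeps 2 * 2**j <= n
    return sum(squared_error_one_trial(x, num_scales) for _ in range(trials)) / trials

if __name__ == "__main__":
    n = 2 ** 14
    examples = {
        "all ones": [1] * n,
        "alternating": [i % 2 for i in range(n)],
        "random bits": [random.randrange(2) for _ in range(n)],
    }
    for name, seq in examples.items():
        print(name, round(mean_squared_error(seq), 4))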
@InProceedings{pmlr-v99-qiao19a,
title = {A Theory of Selective Prediction},
author = {Qiao, Mingda and Valiant, Gregory},
booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
pages = {2580--2594},
year = {2019},
volume = {99},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v99/qiao19a/qiao19a.pdf},
url = {http://proceedings.mlr.press/v99/qiao19a.html}
}
%T A Theory of Selective Prediction
%A Mingda Qiao
%A Gregory Valiant
%F pmlr-v99-qiao19a
Qiao, M. & Valiant, G.. (2019). A Theory of Selective Prediction. Proceedings of the Thirty-Second Conference on Learning Theory, in PMLR 99:2580-2594 | CommonCrawl |
The equivalence of two formulae for the Laplace-Beltrami operator
Let $M$ be a (pseudo-)Riemannian manifold with metric $g_{ab}$. Let $\nabla_a$ be the Levi-Civita connection on $M$. It's well-known that the Laplace—Beltrami operator can be defined in this context as $$\nabla^2 = \nabla^a \nabla_a = g^{ab} \nabla_a \nabla_b$$ where $g^{ab}$ is the dual metric and repeated indices are summed. However, we also have the coordinate formula $$\nabla^2 \phi = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} g^{ab} \partial_b \phi \right)$$ which, as I understand, comes from using the formula for the Hodge dual.
Without invoking advanced machinery, what is the easiest way to directly prove the equivalence of the two definitions? I can see that if the partial derivatives of $g_{ab}$ vanish, then $$\nabla_{a} \left( g^{ab} \nabla_b \phi \right) = \partial_a \left( g^{ab} \partial_b \phi \right) = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} g^{ab} \partial_b \phi \right)$$ and in the general case it suffices to prove that $$\Gamma^{b}_{\phantom{b}ab} = \frac{1}{\sqrt{| \det g |}} \partial_a \left( \sqrt{| \det g |} \right)$$ but then it is seems to be necessary to compute the derivative of a determinant. Is there a trick which can be used to avoid this calculation?
differential-geometry riemannian-geometry
Zhen Lin
$\begingroup$ I don't know of a trick to avoid this, but (from a linear algebra or introductory differential geometry class) you probably know how to compute the derivative of $det$ - right? $\endgroup$ – Gerben May 2 '11 at 18:17
$\begingroup$ @Gerben: Yes, I can finish the proof that way. But I think it's not unfair to say that the derivative of $\det$ is esoteric knowledge in the context this problem arose. (It came up in a undergraduate general relativity exam, and I suspect candidates are not expected to be analysts.) $\endgroup$ – Zhen Lin May 2 '11 at 18:59
$\begingroup$ I didn't see this comment before I wrote my answer. Still, I think it is quite reasonable to expect people to know how to differentiate the determinant; it is after all apparent from Cramer's rule. $\endgroup$ – Glen Wheeler May 3 '11 at 19:54
For general relativity students I think this might be a reasonable proof. One needs to know that your second expression for $\nabla^2\phi$ (the one using $\sqrt{|\det g|}$) is independent of the choice of coordinates (*). For any point you can then choose coordinate system such that $\partial_a g_{bc}=0$ and the two formulas (as you notice) give the same result.
(*) means (as you also notice - you just want us to restate it in a simpler language - and I'm not sure whether I succeed to do it) that if $u^a$ is a vector-valued density, i.e. if $u^a=v^a \sqrt{|\det g|}$ where $v^a$ is a vector field, then $\partial_a u^a$ is a density, i.e. $\partial_a u^a=f\sqrt{|\det g|}$ for some function $f$ (with $f$ independent of the choice of coordinates - which implies $f=\nabla_av^a$). If we really want to avoid differential forms then we can invoke Gauss theorem (in coordinates), notice that the flow of $u^a$ though a hypersurface is independent of the coordinates, and hence its divergence $\partial_a u^a$ is a well-defined (independent of coordinates) density.
edit: here is (really the same, but) a bit more sensible argument why $f$ (see above) is independent of coordinates. If $\psi$ is a function with compact support then $\int (\partial_a\psi)\, v^a \sqrt{|\det g|} dx^1\dots dx^n$ is independent of the choice of coordinates, and it is equal to (by per partes) $$-\int \psi \partial_a\Bigl( v^a \sqrt{|\det g|}\Bigr)\sqrt{|\det g|}^{-1}\,\sqrt{|\det g|} dx^1\dots dx^n$$ which shows (by choosing $\psi$ with smaller and smaller support and with $\int\psi\sqrt{|\det g|} dx^1\dots dx^n=1$) that $$f=\partial_a\Bigl( v^a \sqrt{|\det g|}\Bigr)\sqrt{|\det g|}^{-1}$$ is independent of coordinates.
$\begingroup$ Your first paragraph seems like the most plausible intended solution. I can't help but feel it's circular reasoning though — in some sense, this proof is how we know the second formula is independent of the choice of coordinates, isn't it? $\endgroup$ – Zhen Lin May 2 '11 at 19:57
$\begingroup$ @Zhen Lin: you are certainly right - I tried to mumble something in the second paragraph why the formula is independent of coordinates, and now I added (+-) a proof of the fact $\endgroup$ – user8268 May 2 '11 at 20:48
This does not need any fancy trickery or complicated machinery. The moral of the story is: do not be scared of differentiating determinants! Formally, the derivative of a determinant is the trace. This actually happens quite often in geometric analysis, because the measure on a Riemannian manifold is given by $d\mu = \sqrt{\det g} \mathcal{H}^n$, where $n$ is the dimension of the manifold and $\mathcal{H}^n$ is $n$-dimensional Hausdorff measure.
We have $$ \partial_k \det A = (\partial_k A_{ij})A^{ij} \det A, $$ so in particular $$ \partial_i \sqrt{\det g} = \frac{(\partial_ig_{pq})g^{pq}}{2} \sqrt{\det g}. $$ Thus $$\begin{align*} \frac{1}{\sqrt{\det g}}\partial_i\Big(g^{ij}\sqrt{\det g}\partial_j\phi\Big) &= (\partial_ig^{ij})\partial_j\phi + g^{ij}\frac{1}{\sqrt{\det g}}\Big(\partial_i\sqrt{\det g}\partial_j\phi\Big) + g^{ij}\partial_{ij}\phi\\ &= (\partial_ig^{ij})\partial_j\phi + g^{ij}\frac{1}{\sqrt{\det g}}\Big(\frac{(\partial_ig_{pq})g^{pq}}{2} \sqrt{\det g}\partial_j\phi\Big) + g^{ij}\partial_{ij}\phi\\ &= (\partial_ig^{ij})\partial_j\phi + \frac{1}{2}g^{ij}(\partial_ig_{pq})g^{pq}\partial_j\phi + g^{ij}\partial_{ij}\phi\\ &= g^{ij}\partial_{ij}\phi - g^{ij}\Gamma_{ij}^p\partial_p\phi, \end{align*}$$ using the standard expression for the coefficients of the Levi-Civita connection (the Christoffel symbols) in terms of the metric. The last line is exactly $g^{ij}\nabla_i\nabla_j\phi = \nabla^a\nabla_a\phi$, so the two formulae agree.
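If one wants to sanity-check the equivalence on a concrete example, the following SymPy sketch (the metric, chart and test function are chosen only for illustration) compares the two formulae on the round 2-sphere and prints 0:

import sympy as sp

theta, phi = sp.symbols('theta phi', positive=True)
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2
ginv = g.inv()
sqrtg = sp.sin(theta)                            # sqrt(det g) on 0 < theta < pi

def christoffel(c, a, b):
    # Gamma^c_{ab} = (1/2) g^{cd} (d_a g_{db} + d_b g_{da} - d_d g_{ab})
    return sum(ginv[c, d] * (sp.diff(g[d, b], coords[a])
                             + sp.diff(g[d, a], coords[b])
                             - sp.diff(g[a, b], coords[d]))
               for d in range(2)) / 2

f = sp.sin(theta)**2 * sp.cos(phi)               # arbitrary test function

# g^{ij} (d_i d_j f - Gamma^p_{ij} d_p f)
lap1 = sum(ginv[i, j] * (sp.diff(f, coords[i], coords[j])
                         - sum(christoffel(p, i, j) * sp.diff(f, coords[p])
                               for p in range(2)))
           for i in range(2) for j in range(2))

# (1/sqrt(g)) d_i ( sqrt(g) g^{ij} d_j f )
lap2 = sum(sp.diff(sqrtg * ginv[i, j] * sp.diff(f, coords[j]), coords[i])
           for i in range(2) for j in range(2)) / sqrtg

print(sp.simplify(lap1 - lap2))                  # prints 0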
Glen Wheeler
\title{Harmonic forms and generalized solitons}
\markboth{{\small\it {\hspace{4cm} Harmonic forms and generalized solitons}}}{\small\it{Harmonic forms and generalized solitons \hspace{4cm}}}
\textbf{Abstract:} For a generalized soliton $(g,\xi,\eta,\beta,\gamma,\delta)$ we provide necessary and sufficient conditions for the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ to be a solution of the Schr\"{o}dinger-Ricci equation, a harmonic or a Schr\"{o}dinger-Ricci harmonic form. We also characterize the $1$-forms orthogonal to $\xi^{\flat}$, underlining the results obtained for the Ricci and Yamabe solitons. Further, we formulate the results for the case of gradient generalized solitons. Several applications and examples are also presented.\\
{ \textbf{2020 Mathematics Subject Classification:} 35C08, 35Q51, 53B05. }
{ \textbf{Keywords:} Gradient solitons; harmonic $1$-forms. }
\section{Introduction}
The soliton phenomenon was first described by John Scott Russell (1808--1882) in 1834, who observed a solitary wave, that is, a wave keeping its shape while propagating with constant speed (see, e.g., \cite{A98}). The notion of soliton as a stationary solution of a geometric flow on a Riemannian manifold has lately been considered and intensively studied. In the 1980s, R.S. Hamilton introduced the intrinsic geometric flows, the Ricci flow \cite{ham1} and the Yamabe flow \cite{ham2}, which are evolution equations for Riemannian metrics. On a $2$-dimensional manifold the Ricci and Yamabe flows are equivalent, but this is no longer the case in higher dimensions. Recently, in 2019, M. Crasmareanu and S. G\"{u}ler considered a scalar combination of the Ricci and Yamabe flows, called the Ricci-Yamabe flow of type $(\alpha,\beta)$ \cite{cg}, for $\alpha$ and $\beta$ real numbers. Due to the multiple choices of these scalars, the Ricci-Yamabe flow proves to be useful in different geometrical and physical theories. Let us remark that an interpolation flow between the Ricci and Yamabe flows, depending on a single scalar, was proposed by J.-P. Bourguignon in \cite{cal} under the name Ricci-Bourguignon flow; it was studied further by E. Calvino-Louzao, E. Garcia-Rio, P. Gilkey, J.H. Park and R. V\'{a}zquez-Lorenzo in \cite{calv}, and by G. Catino, L. Cremaschi, Z. Djadli, C. Mantegazza and L. Mazzieri in \cite{r}. Also, an unnormalized version of the Ricci-Bourguignon-Yamabe flow was considered in \cite{mes} from the point of view of spectral geometry. Generalized fixed points of the above mentioned flows, the corresponding solitons (Ricci, Yamabe or Ricci-Bourguignon solitons with different kinds of potential vector fields), have been intensively investigated in the Riemannian and pseudo-Riemannian settings, on manifolds endowed with various geometrical structures, and obstructions on the Ricci and scalar curvatures are consequently determined.
On the other hand, the nonlinear Schr\"{o}dinger-Ricci equation describes the evolution of the envelope of modulated wave groups, underlying wave mechanics. In Riemannian geometry, the Schr\"{o}dinger-Ricci equation, which involves the Ricci curvature tensor, the Laplace-Hodge operator and the Lie derivative of the metric, is subtly connected to the soliton equation and to different types of harmonicity.
Considering these facts, in the present paper we bring to light some connections between a more general notion of soliton, which we introduce here, harmonic forms and solutions of the Schr\"{o}dinger-Ricci equation. Precisely, we provide necessary and sufficient conditions for the dual $1$-form of the potential vector field of a generalized soliton to be a solution of the Schr\"{o}dinger-Ricci equation, a harmonic form or a Schr\"{o}dinger-Ricci harmonic form. Note that some partial results for the case of $\eta$-Ricci solitons were obtained in \cite{blagha}. Concerning the particular case of Ricci solitons, we prove the following: \textit{the dual $1$-form of the potential vector field is always a Schr\"{o}dinger-Ricci harmonic form} (Theorem \ref{te44}), \textit{on a complete Riemannian manifold, the dual $1$-form of the potential vector field is a solution of the Schr\"{o}dinger-Ricci equation if and only if the scalar curvature is constant} (Theorem \ref{t166}), and \textit{if the potential vector field is of constant length and its dual $1$-form is a harmonic form, then the Ricci soliton is steady} (Theorem \ref{te1}). For generalized Yamabe solitons, we show that \textit{the dual $1$-form of the potential vector field is a solution of the Schr\"{o}dinger-Ricci equation and a Schr\"{o}dinger-Ricci harmonic form} (Theorem \ref{tt} and Theorem \ref{tp}).
\section{Preliminaries}
Throughout the paper all the manifolds are assumed to be smooth. Let $(M,g)$ be an $n$-dimensional Riemannian manifold and denote by $\Ric$ and $\scal$ its Ricci curvature tensor and scalar curvature, respectively. We denote the set of all smooth vector fields of $M$ by $\mathfrak{X}(M)$. Let $\eta$ be a $1$-form and $\xi$ a vector field on $M$, and denote by ${\mathcal L} _{\xi}$ the Lie derivative operator in the direction of $\xi$. In \cite{bcd} we considered the following notion.
\begin{definition} We say that $(g,\xi,\eta,\beta,\gamma,\delta)$ defines a \textit{generalized soliton} on a Riemannian manifold $(M,g)$ if \begin{equation}\label{generalsol} \frac{1}{2}{\mathcal L} _{\xi}g+\beta\cdot\Ric=\gamma\cdot g+\delta\cdot \eta\otimes \eta, \end{equation} with $\beta,\gamma,\delta$ smooth functions on $M$. \end{definition}
In all the rest of this paper, we shall assume that $\eta:=\xi^{\flat}$ is the dual $1$-form of the potential vector field $\xi$ of the generalized soliton and simply denote it by $(M^n,g,\xi,\beta,\gamma,\delta)$. Moreover, if $\beta=1$, we talk about an almost $\xi^{\flat}$-Ricci soliton and if $\beta=0$, we talk about an almost $\xi^{\flat}$-Yamabe soliton. In both of the cases, if $\gamma$ and $\delta$ are constant, we just drop the adjective \textit{almost} and call the soliton $\xi^{\flat}$-Ricci and $\xi^{\flat}$-Yamabe soliton, respectively. Also, if $\beta=\delta=0$, we shall say that $(g,\xi,\gamma)$ defines an almost generalized Yamabe soliton or a generalized Yamabe soliton on $M$ provided $\gamma$ is a constant.
If the vector field $\xi$ is of gradient type, i.e., $\xi:=\grad(f)$, for a smooth function $f$ on $M$, and if we denote by $\Hess(f)$ the Hessian of $f$, by $\nabla$ the Levi-Civita connection of $g$ and by $Q$ the Ricci operator, i.e., $$g(QX,Y):=\Ric(X,Y)$$ for $X$, $Y\in \mathfrak{X}(M)$, then ${\mathcal L} _{\xi}g=2\Hess(f)$ and thus (\ref{generalsol}) takes the form \begin{equation}\label{generalgr} \Hess(f)+\beta\cdot\Ric=\gamma\cdot g+\delta\cdot \eta\otimes \eta, \end{equation} with $\beta,\gamma,\delta$ smooth functions on $M$, which is equivalent to \begin{equation}\label{h} \nabla \grad(f)+\beta\cdot Q=\gamma\cdot I+\delta\cdot \eta\otimes \xi, \end{equation} where $I$ is the identity endomorphism on $\mathfrak{X}(M)$, and we say that $(g,\grad(f),\eta,\beta,\gamma,\delta)$ defines a \textit{gradient generalized soliton} on $(M,g)$.
A generalized soliton with $\delta=0$ will be called \textit{steady} if $\gamma=0$, \textit{shrinking} if $\gamma>0$, \textit{expanding} if $\gamma<0$, or \textit{undefined}, otherwise.
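For a simple illustration of these notions, let us recall the classical Gaussian example, included here only for concreteness: on the Euclidean space $(\mathbb{R}^n,g_0)$ consider $f(x)=\frac{c}{2}|x|^2$, with $c$ a real constant. Then $$\Hess(f)=c\, g_0, \ \ \Ric=0,$$ so (\ref{generalgr}) holds with $\gamma=c$, $\delta=0$ and an arbitrary smooth function $\beta$; hence $(g_0,\grad(f),\eta,\beta,c,0)$ is a gradient generalized soliton, which is shrinking, steady or expanding according to whether $c>0$, $c=0$ or $c<0$.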
By taking the trace and by taking the divergence of (\ref{generalsol}), we consequently obtain \begin{equation}\label{tr}
\div(\xi)+\beta \scal=n\gamma+\delta |\xi|^2, \end{equation} \begin{equation}\label{di} \frac{1}{2}\div({\mathcal L} _{\xi}g)+\frac{\beta}{2}d(\scal)+i_{Q(\grad(\beta))}g=d\gamma+\eta(\grad(\delta))\eta+\delta\div(\xi)\eta+\delta i_{\nabla_{\xi}\xi}g. \end{equation}
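For the reader's convenience, let us indicate how these identities are obtained. Taking the trace of (\ref{generalsol}) with respect to $g$ and using $$\operatorname{tr}_g\Big(\frac{1}{2}{\mathcal L}_{\xi}g\Big)=\div(\xi), \ \ \operatorname{tr}_g(\Ric)=\scal, \ \ \operatorname{tr}_g(g)=n, \ \ \operatorname{tr}_g(\eta\otimes \eta)=|\xi|^2,$$ we get (\ref{tr}), while (\ref{di}) follows by taking the divergence of each term of (\ref{generalsol}) and using the contracted second Bianchi identity $\div(\Ric)=\frac{1}{2}d(\scal)$.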
Also from (\ref{generalsol}) we find \begin{equation}\label{po} \beta\, i_{Q(\grad(\beta))}g=\gamma\, i_{\grad(\beta)}g+\delta \eta(\grad(\beta))i_{\xi}g-\frac{1}{2}\left(i_{\nabla_{\grad(\beta)}\xi}g+d\beta\circ \nabla \xi\right) \end{equation} and \begin{equation}\label{pin}
\beta i_{Q\xi}g=(\gamma +\delta |\xi|^2)\eta-\frac{1}{2}\left(i_{\nabla_{\xi}\xi}g+\eta\circ \nabla \xi\right). \end{equation}
\section{Harmonic aspects in a generalized soliton}\label{Section4}
Consider $(M,g)$ an $n$-dimensional Riemannian manifold, $n\geq 3$, and denote by $$\flat:TM\rightarrow T^*M, \ \ \flat(X):=i_Xg, \ \ \sharp:T^*M\rightarrow TM, \ \ \sharp:=\flat^{-1}$$ the musical isomorphisms. Further, we shall use the notations $X^{\flat}=:\flat(X)$ and $\theta^{\sharp}=:\sharp(\theta)$.
Let $\mathcal{T}^0_{2,s}(M)$ be the set of symmetric $(0,2)$-tensor fields on $M$ and for $Z\in \mathcal{T}^0_{2,s}(M)$, denote by $Z^{\sharp}:TM\rightarrow TM$ and by $Z_{\sharp}:T^*M\rightarrow T^*M$ the maps defined as follows $$g(Z^{\sharp}(X),Y):=Z(X,Y), \ \ Z_{\sharp}(\theta)(X):=Z(\sharp(\theta),X).$$ We also denote by $Z^{\sharp}$ the map $Z^{\sharp}:T^*M\times T^*M\rightarrow C^{\infty}(M)$ $$Z^{\sharp}(\theta_1,\theta_2):=Z(\sharp(\theta_1),\sharp(\theta_2))$$ and identify $Z_{\sharp}$ with the map, also denoted by $Z_{\sharp}:T^*M\times TM\rightarrow C^{\infty}(M)$ $$Z_{\sharp}(\theta,X):=Z_{\sharp}(\theta)(X).$$
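In local coordinates $(x^i)$, if $Z=Z_{ij}\,dx^i\otimes dx^j$ and $\theta=\theta_i\, dx^i$, then the above maps are given by $$(Z^{\sharp}(X))^i=g^{ik}Z_{kj}X^j, \ \ \big(Z_{\sharp}(\theta)\big)_j=g^{ik}\theta_i Z_{kj}, \ \ Z^{\sharp}(\theta_1,\theta_2)=g^{ik}g^{jl}(\theta_1)_i(\theta_2)_j Z_{kl};$$ we record these expressions only to fix the index conventions used in the sequel.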
It is known that \cite{cho} \begin{equation}\label{e61} \div({\mathcal L}_Xg)=(\Delta+\Ric_{\sharp})(X^{\flat})+d(\div(X)), \end{equation} where $\Delta$ is the Laplace-Hodge operator on differential forms with respect to the metric $g$. By a direct computation we deduce $$\Ric_{\sharp}(\theta)=i_{Q(\theta^{\sharp})}g,$$ for any $1$-form $\theta$.
We are interested in finding necessary and sufficient conditions for the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ of a generalized soliton to be a solution of the Schr\"{o}dinger-Ricci equation, a harmonic form or a Schr\"{o}dinger-Ricci harmonic form.
\subsection{Schr\"{o}dinger-Ricci solutions}\label{Subection4.1}
A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is a \textit{solution of the Schr\"{o}dinger-Ricci equation} if $$ (\Delta+\Ric_{\sharp})(\theta)+d(\div(\theta^{\sharp}))=0. $$
\begin{lemma} A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies \begin{equation}\label{l1} \div({\mathcal L}_{\theta^{\sharp}}g)=0. \end{equation} \end{lemma} This is an immediate consequence of (\ref{e61}) applied to $X=\theta^{\sharp}$.
We deduce the following proposition from (\ref{tr}), (\ref{di}), (\ref{po}) and (\ref{l1}).
\begin{proposition}\label{t1} Let $(M^n, g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\beta$ is nowhere zero on $M$. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if it satisfies $$ -\frac{\beta}{2}(d(\scal)+2\delta \scal\cdot\xi^{\flat})=\frac{\gamma}{\beta} d\beta-d\gamma-\frac{1}{2\beta}(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi) $$ \begin{equation}\label{kk} +\left(\frac{\delta}{\beta}\xi^{\flat}(\grad(\beta))-\xi^{\flat}(\grad(\delta))\right)\xi^{\flat}-
\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}-\delta \nabla_{\xi}\xi^{\flat}. \end{equation} \end{proposition} \begin{proof} From (\ref{l1}) we know that $\xi^{\flat}$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if $\div({\mathcal L}_{\xi}g)=0$, which by means of (\ref{tr}), (\ref{di}) and (\ref{po}), is equivalent to \begin{equation}\begin{aligned} &\nonumber 0=-\frac{\beta}{2}d(\scal)-i_{Q(\grad(\beta))}g +d\gamma+\xi^{\flat}(\grad(\delta))\xi^{\flat}+\delta\div(\xi)\xi^{\flat}+\delta i_{\nabla_{\xi}\xi}g \\& \hskip.3in =-\frac{\beta}{2}d(\scal)-\frac{\gamma}{\beta} i_{\grad(\beta)}g-\frac{\delta}{\beta} \xi^{\flat}(\grad(\beta))i_{\xi}g+\frac{1}{2\beta}\left(i_{\nabla_{\grad(\beta)}\xi}g+d\beta\circ \nabla \xi\right)+ d\gamma
\\& \hskip.3in +\xi^{\flat}(\grad(\delta))\xi^{\flat}+\delta(n\gamma+\delta |\xi|^2-\beta \scal)\xi^{\flat}+\delta i_{\nabla_{\xi}\xi}g \\& \hskip.3in =-\frac{\beta}{2}d(\scal)-\frac{\gamma}{\beta} d\beta-\frac{\delta}{\beta} \xi^{\flat}(\grad(\beta))\xi^{\flat}+\frac{1}{2\beta}\left(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi\right)+ d\gamma
\\& \hskip.3in +\xi^{\flat}(\grad(\delta))\xi^{\flat}+\delta(n\gamma+\delta |\xi|^2-\beta \scal)\xi^{\flat}+\delta \nabla_{\xi}\xi^{\flat}. \end{aligned} \end{equation} \end{proof}
Proposition \ref{t1} implies the following.
\begin{theorem} Let $(M,g)$ be a Riemannian manifold of constant scalar curvature and $\xi$ a constant length vector field such that its dual $1$-form $\xi^{\flat}$ is a solution of the Schr\"{o}dinger-Ricci equation. If $(g,\xi,\gamma,\delta)$ defines a $\xi^{\flat}$-Ricci soliton on $M$, then either the soliton is a Ricci soliton or $\xi$ is a divergence-free vector field. \end{theorem} \begin{proof} Taking into account that $\beta=1$, $\gamma$, $\delta$ and $\scal$ are constant, we obtain $$\delta\left(\div(\xi)\xi^{\flat}+\nabla_{\xi}\xi^{\flat}\right)=0$$ from (\ref{kk}), which either implies $\delta=0$ or $\div(\xi)\xi^{\flat}=-\nabla_{\xi}\xi^{\flat}$. Computing the second relation in $\xi$, we find
$$\div(\xi)=-\frac{\xi(|\xi|^2)}{2|\xi|^2}=0.$$ \end{proof}
\begin{theorem}\label{t166} Let $(M,g)$ be a complete Riemannian manifold. If $(M,g,\xi,\gamma)$ is a Ricci soliton, then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if the scalar curvature of $M$ is constant. \end{theorem} \begin{proof} For $\beta=1$, $\delta=0$ and $\gamma$ constant, we deduce that (\ref{kk}) holds if and only if $d(\scal)=0$, hence the conclusion. \end{proof}
Similarly, we deduce the following proposition from (\ref{tr}), (\ref{di}) for $\beta=0$, and (\ref{l1}).
\begin{proposition}\label{p11} Let $(M^n, g,\xi,\gamma,\delta)$ be an almost $\xi^{\flat}$-Yamabe soliton. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if it satisfies \begin{equation}\label{p1} d\gamma+\xi^{\flat}(\grad(\delta))\xi^{\flat}+
\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}+\delta \nabla_{\xi}\xi^{\flat}=0. \end{equation} \end{proposition} \begin{proof} From (\ref{l1}) we know that $\xi^{\flat}$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if $\div({\mathcal L}_{\xi}g)=0$, which by means of (\ref{tr}) and (\ref{di}) for $\beta=0$, is equivalent to \begin{equation}\begin{aligned} &\nonumber 0=d\gamma+\xi^{\flat}(\grad(\delta))\xi^{\flat}+\delta\div(\xi)\xi^{\flat}+\delta i_{\nabla_{\xi}\xi}g
\\& \hskip.3in =d\gamma+\xi^{\flat}(\grad(\delta))\xi^{\flat}+\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}+\delta \nabla_{\xi}\xi^{\flat}. \end{aligned} \end{equation} \end{proof}
We immediately obtain the followings from Proposition \ref{p11}.
\begin{theorem} Let $(M,g,\xi,\gamma,\delta)$ be a $\xi^{\flat}$-Yamabe soliton whose potential vector field $\xi$ is of constant length. If the dual $1$-form $\xi^{\flat}$ of $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation, then either the soliton is a generalized Yamabe soliton or $\xi$ is a divergence-free vector field. \end{theorem} \begin{proof} Taking into account that $\gamma$ and $\delta$ are constant, we obtain $$\delta\left(\div(\xi)\xi^{\flat}+\nabla_{\xi}\xi^{\flat}\right)=0$$ from (\ref{p1}), which either implies $\delta=0$ or $\div(\xi)\xi^{\flat}=-\nabla_{\xi}\xi^{\flat}$. Computing the second relation in $\xi$, we find
$$\div(\xi)=-\frac{\xi(|\xi|^2)}{2|\xi|^2}=0.$$ \end{proof}
\begin{theorem}\label{c:4.7} Let $(M,g)$ be a complete Riemannian manifold. If $(M,g,\xi,\gamma)$ is an almost generalized Yamabe soliton, then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation if and only if the soliton is a generalized Yamabe soliton. \end{theorem} \begin{proof} For $\delta=0$, we deduce that (\ref{p1}) holds if and only if $d\gamma=0$, hence the conclusion. \end{proof}
\begin{theorem}\label{tt} For every generalized Yamabe soliton $(M,g,\xi,\gamma)$, the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation. \end{theorem}
\subsection{Schr\"{o}dinger-Ricci harmonic forms}\label{Subection4.2}
A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is called \textit{Schr\"{o}dinger-Ricci harmonic} if it satisfies $$(\Delta+\Ric_{\sharp})(\theta)=0.$$
\begin{lemma} A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is a Schr\"{o}dinger-Ricci harmonic form if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies \begin{equation}\label{l2} \div({\mathcal L}_{\theta^{\sharp}}g)=d(\div(\theta^{\sharp})). \end{equation} \end{lemma}
Since
$$d(\div(\theta^{\sharp}))=i_{\grad(\div(\theta^{\sharp}))}g, \ \ \theta\circ \nabla \theta^{\sharp}=\frac{1}{2}d(|\theta^{\sharp}|^2), \ \ (\nabla_X\theta)^{\sharp}=\nabla_X\theta^{\sharp},$$ we deduce the following proposition from (\ref{tr}), (\ref{di}), (\ref{po}) and (\ref{l2}).
\begin{proposition}\label{te} Let $(M^n,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\beta$ is nowhere zero on $M$. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form if and only if it satisfies $$
\frac{\scal}{2}(d\beta-2\beta\delta \xi^{\flat})=\frac{\gamma}{\beta}d\beta+\frac{n-2}{2}d\gamma+\frac{|\xi|^2}{2}d\delta-\frac{1}{2\beta}(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi)$$ \begin{equation}\label{kk2} +\left(\frac{\delta}{\beta}\xi^{\flat}(\grad(\beta))-\xi^{\flat}(\grad(\delta))\right)\xi^{\flat}-
\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}-\delta(\nabla_{\xi}\xi^{\flat}-\xi^{\flat}\circ \nabla \xi). \end{equation} \end{proposition} \begin{proof} From (\ref{l2}) we know that $\xi^{\flat}$ is a Schr\"{o}dinger-Ricci harmonic form if and only if $\div({\mathcal L}_{\xi}g)=d(\div(\xi))$, which by means of (\ref{tr}), (\ref{di}) and (\ref{po}), is equivalent to \begin{equation}\begin{aligned} &\nonumber 0=-\beta d(\scal)-2i_{Q(\grad(\beta))}g +2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta\div(\xi)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g
\\& \hskip.3in -nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta +\beta d(\scal)+\scal \cdot d\beta \\& \hskip.3in =-\frac{2\gamma}{\beta} i_{\grad(\beta)}g-\frac{2\delta}{\beta} \xi^{\flat}(\grad(\beta))i_{\xi}g+ \frac{1}{\beta}\left(i_{\nabla_{\grad(\beta)}\xi}g+d\beta\circ \nabla \xi\right)
\\& \hskip.3in +2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta(n\gamma+\delta |\xi|^2-\beta \scal)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g
\\& \hskip.3in -nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta +\scal \cdot d\beta \\& \hskip.3in =-\frac{2\gamma}{\beta} d\beta-\frac{2\delta}{\beta} \xi^{\flat}(\grad(\beta))\xi^{\flat}+\frac{1}{\beta}\left(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi\right)-(n-2)d\gamma
\\& \hskip.3in +2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta(n\gamma+\delta |\xi|^2)\xi^{\flat}+2\delta (\nabla_{\xi}\xi^{\flat}-\xi^{\flat}\circ \nabla \xi)-|\xi|^2d\delta +\scal (d\beta-2\beta\delta \xi^{\flat}). \end{aligned}\end{equation} \end{proof}
Proposition \ref{te} implies the following.
\begin{theorem} Let $(M,g)$ be a Riemannian manifold and $\xi$ a vector field such that its dual $1$-form $\xi^{\flat}$ is a Schr\"{o}dinger-Ricci harmonic form. If $(g,\xi,\gamma,\delta)$ defines a $\xi^{\flat}$-Ricci soliton on $M$, then either the soliton is a Ricci soliton or $\xi$ is a divergence-free vector field. \end{theorem} \begin{proof} Taking into account that $\beta=1$, $\gamma$ and $\delta$ are constant, we obtain $$\delta\left(\div(\xi)\xi^{\flat}+\nabla_{\xi}\xi^{\flat}-\xi^{\flat}\circ \nabla \xi\right)=0$$ from (\ref{kk2}), which either implies $\delta=0$ or $\div(\xi)\xi^{\flat}=\xi^{\flat}\circ \nabla \xi-\nabla_{\xi}\xi^{\flat}$. Computing the second relation in $\xi$, we find $$\div(\xi)=0.$$ \end{proof}
\begin{theorem}\label{te44} For every Ricci soliton $(M,g,\xi,\gamma)$, the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form. \end{theorem} \begin{proof} For $\beta=1$, $\delta=0$ and $\gamma$ constant, we deduce that (\ref{kk2}) always holds, hence the conclusion. \end{proof}
Similarly, we deduce the following proposition from (\ref{tr}), (\ref{di}) for $\beta=0$, and (\ref{l2}).
\begin{proposition}\label{p22} Let $(M^n, g,\xi,\gamma,\delta)$ be an almost $\xi^{\flat}$-Yamabe soliton. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form if and only if it satisfies \begin{equation}\label{p2}
\frac{n-2}{2}d\gamma+\frac{|\xi|^2}{2}d\delta-\xi^{\flat}(\grad(\delta))\xi^{\flat}-
\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}-\delta(\nabla_{\xi}\xi^{\flat}-\xi^{\flat}\circ \nabla \xi)=0. \end{equation} \end{proposition} \begin{proof} From (\ref{l2}) we know that $\xi^{\flat}$ is a Schr\"{o}dinger-Ricci harmonic form if and only if $\div({\mathcal L}_{\xi}g)=d(\div(\xi))$, which by means of (\ref{tr}) and (\ref{di}) for $\beta=0$, is equivalent to \begin{equation}\begin{aligned} &\nonumber 0=2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta\div(\xi)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g
-nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta
\\& \hskip.3in =2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta(n\gamma+\delta|\xi|^2)\xi^{\flat}+2\delta \nabla_{\xi}\xi^{\flat}
-nd\gamma-2\delta \xi^{\flat}\circ \nabla \xi-|\xi|^2d\delta. \end{aligned}\end{equation} \end{proof}
We immediately obtain the followings from Proposition \ref{p22}.
\begin{theorem} If $(M,g,\xi,\gamma,\delta)$ is a $\xi^{\flat}$-Yamabe soliton such that the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form, then either the soliton is a generalized Yamabe soliton or $\xi$ is a divergence-free vector field. \end{theorem} \begin{proof} Taking into account that $\gamma$ and $\delta$ are constant, we obtain $$\delta\left(\div(\xi)\xi^{\flat}+\nabla_{\xi}\xi^{\flat}-\xi^{\flat}\circ \nabla \xi\right)=0$$ from (\ref{p2}), which either implies $\delta=0$ or $\div(\xi)\xi^{\flat}=\xi^{\flat}\circ \nabla \xi-\nabla_{\xi}\xi^{\flat}$. Computing the second relation in $\xi$, we find $$\div(\xi)=0.$$ \end{proof}
\begin{theorem}\label{c:4.14} Let $(M^n,g)$ be a complete Riemannian manifold, $n>2$. If $(M^n,g,\xi,\gamma)$ is an almost generalized Yamabe soliton, then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form if and only if the soliton is a generalized Yamabe soliton. \end{theorem} \begin{proof} For $\delta=0$, we deduce that (\ref{p2}) holds if and only if $(n-2)d\gamma=0$, hence the conclusion. \end{proof}
\begin{theorem}\label{tp} For every generalized Yamabe soliton $(M,g,\xi,\gamma)$, the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a Schr\"{o}dinger-Ricci harmonic form. \end{theorem}
\subsection{Harmonic forms}\label{Subection4.3}
A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is called \textit{harmonic} if it satisfies $$\Delta(\theta)=0.$$
We can notice that if $\theta^{\sharp}\in\ker Q$, then $\theta$ is Schr\"{o}dinger-Ricci harmonic if and only if it is harmonic.
\begin{lemma} A $1$-form $\theta$ on a Riemannian manifold $(M,g)$ is a harmonic form if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies \begin{equation}\label{l3} \div({\mathcal L}_{\theta^{\sharp}}g)=\Ric_{\sharp}(\theta)+d(\div(\theta^{\sharp})). \end{equation} \end{lemma}
Since
$$\Ric_{\sharp}(\theta)=i_{Q(\theta^{\sharp})}g, \ \ d(\div(\theta^{\sharp}))=i_{\grad(\div(\theta^{\sharp}))}g, \ \ \theta\circ \nabla \theta^{\sharp}=\frac{1}{2}d(|\theta^{\sharp}|^2), \ \ (\nabla_X\theta)^{\sharp}=\nabla_X\theta^{\sharp},$$ we deduce the following proposition from (\ref{tr}), (\ref{di}), (\ref{po}), (\ref{pin}) and (\ref{l3}).
\begin{proposition}\label{teor} Let $(M^n,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\beta$ is nowhere zero on $M$. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a harmonic form if and only if it satisfies $$
\frac{\scal}{2}(d\beta-2\beta\delta \xi^{\flat})=\frac{\gamma}{\beta} d\beta+\frac{n-2}{2}d\gamma+\frac{|\xi|^2}{2}d\delta-\frac{1}{2\beta}(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi)$$$$+\left(\frac{\delta}{\beta}\xi^{\flat}(\grad(\beta))-\xi^{\flat}(\grad(\delta))\right)\xi^{\flat}-
\frac{(2n\beta\delta-1)\gamma+\delta(2\beta\delta-1)|\xi|^2}{2\beta}\xi^{\flat}$$ \begin{equation}\label{kk3} -\left(\delta+\frac{1}{4\beta}\right)\nabla_{\xi}\xi^{\flat}+ \left(\delta-\frac{1}{4\beta}\right)\xi^{\flat}\circ \nabla \xi. \end{equation} \end{proposition} \begin{proof} From (\ref{l3}) we know that $\xi^{\flat}$ is a harmonic form if and only if $\div({\mathcal L}_{\xi}g)=\Ric_{\sharp}(\xi^{\flat})+d(\div(\xi))$, which by means of (\ref{tr}), (\ref{di}), (\ref{po}) and (\ref{pin}), is equivalent to \begin{equation}\begin{aligned} &\nonumber 0=-\beta d(\scal)-2i_{Q(\grad(\beta))}g +2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta\div(\xi)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g
\\& \hskip.3in -i_{Q\xi}g-nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta +\beta d(\scal)+\scal \cdot d\beta \\& \hskip.3in =-\frac{2\gamma}{\beta} i_{\grad(\beta)}g-\frac{2\delta}{\beta} \xi^{\flat}(\grad(\beta))i_{\xi}g+ \frac{1}{\beta}\left(i_{\nabla_{\grad(\beta)}\xi}g+d\beta\circ \nabla \xi\right)
\\& \hskip.3in +2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta(n\gamma+\delta |\xi|^2-\beta \scal)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g
\\& \hskip.3in -\frac{1}{\beta}(\gamma+\delta|\xi|^2)\xi^{\flat}+\frac{1}{2\beta}(i_{\nabla_{\xi}\xi}g+\xi^{\flat}\circ \nabla \xi)-nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta +\scal \cdot d\beta \\& \hskip.3in =-\frac{2\gamma}{\beta} d\beta-\frac{2\delta}{\beta} \xi^{\flat}(\grad(\beta))\xi^{\flat}+\frac{1}{\beta}\left(\nabla_{\grad(\beta)}\xi^{\flat}+d\beta\circ \nabla \xi\right)-(n-2)d\gamma
\\& \hskip.3in +2\xi^{\flat}(\grad(\delta))\xi^{\flat}+\Big(\Big(2n\delta-\frac{1}{\beta}\Big)\gamma+\Big(2\delta-\frac{1}{\beta}\Big) \delta|\xi|^2\Big)\xi^{\flat}
\\& \hskip.3in +\left(2\delta+\frac{1}{2\beta}\right) \nabla_{\xi}\xi^{\flat}-\left(2\delta-\frac{1}{2\beta}\right) \xi^{\flat}\circ \nabla \xi-|\xi|^2d\delta +\scal (d\beta-2\beta\delta \xi^{\flat}). \end{aligned}\end{equation} \end{proof}
Proposition \ref{teor} implies the following.
\begin{theorem} \label{te1} Let $(M,g,\xi,\gamma)$ be a Ricci soliton whose potential vector field $\xi$ is of constant length. If the dual $1$-form $\xi^{\flat}$ of $\xi$ is a harmonic form, then the Ricci soliton is steady. \end{theorem} \begin{proof} For $\beta=1$, $\delta=0$ and $\gamma$ constant, we obtain from (\ref{kk3}) $$\gamma \xi^{\flat}=\frac{1}{2}\left(\nabla_{\xi}\xi^{\flat}+\xi^{\flat}\circ \nabla \xi\right).$$ Computing this relation in $\xi$, we find
$$\gamma=\frac{\xi(|\xi|^2)}{2|\xi|^2}=0.$$ \end{proof}
Similarly, we deduce the following proposition from (\ref{tr}), (\ref{di}) for $\beta=0$, and (\ref{l3}).
\begin{proposition}\label{p33} Let $(M^n, g,\xi,\gamma,\delta)$ be an almost $\xi^{\flat}$-Yamabe soliton. Then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a harmonic form if and only if it satisfies
$$Q\xi=-(n-2)\grad(\gamma)-|\xi|^2\grad(\delta)+2\xi^{\flat}(\grad(\delta))\xi $$ \begin{equation}\label{p4}
+2\delta\left(\nabla_{\xi}\xi-\frac{1}{2}\grad(|\xi|^2)+(n\gamma+\delta|\xi|^2)\xi\right). \end{equation} \end{proposition} \begin{proof} From (\ref{l3}) we know that $\xi^{\flat}$ is a harmonic form if and only if $\div({\mathcal L}_{\xi}g)=\Ric_{\sharp}(\xi^{\flat})+d(\div(\xi))$, which by means of (\ref{tr}), (\ref{di}) for $\beta=0$, and (\ref{pin}), is equivalent to
\begin{equation}\begin{aligned} &\nonumber 0=2d\gamma+2\xi^{\flat}(\grad(\delta))\xi^{\flat}+2\delta\div(\xi)\xi^{\flat}+2\delta i_{\nabla_{\xi}\xi}g-i_{Q\xi}g-nd\gamma-\delta d(|\xi|^2)-|\xi|^2d\delta
\\& \hskip.3in =2i_{\grad(\gamma)}g+2\xi^{\flat}(\grad(\delta))i_{\xi}g+2\delta(n\gamma+\delta|\xi|^2)i_{\xi}g+2\delta i_{\nabla_{\xi}\xi}g-i_{Q\xi}g
\\& \hskip.3in -ni_{\grad(\gamma)}g-\delta i_{\grad(|\xi|^2)}g-|\xi|^2i_{\grad(\delta)}g. \end{aligned}\end{equation} \end{proof}
We immediately obtain the following from Proposition \ref{p33}.
\begin{corollary} If $(M,g,\xi,\gamma)$ is an almost generalized Yamabe soliton, then the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a harmonic form if and only if $$Q\xi=-(n-2)\grad(\gamma).$$
In particular, if $(M,g,\xi,\gamma)$ is a generalized Yamabe soliton, then $\xi^{\flat}$ is a harmonic form if and only if $\xi \in \ker Q$. \end{corollary} \begin{proof} For $\delta=0$, we deduce that (\ref{p4}) holds if and only if $Q\xi=-(n-2)\grad(\gamma)$. In particular, if $\gamma$ is constant, then the second assertion follows, too. \end{proof}
Using the fact that $(\nabla_X\theta)^{\sharp}=\nabla_X\theta^{\sharp}$, for any vector field $X$ and any $1$-form $\theta$, we conclude
\begin{proposition} Let $(M,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\beta$ is nowhere zero on $M$. Then a harmonic $1$-form $\theta$ on $(M,g)$ is Schr\"{o}dinger-Ricci harmonic if and only if it satisfies $$\gamma \theta+\delta \theta(\xi)\xi^{\flat}=\frac{1}{2}(\nabla_{\theta^{\sharp}}\xi^{\flat}+\theta\circ \nabla \xi).$$ \end{proposition} \begin{proof} From (\ref{l2}) and (\ref{l3}) we find that $\theta$ is Schr\"{o}dinger-Ricci harmonic if and only if $\Ric_{\sharp}(\theta)=0$, which from the generalized soliton equation (\ref{generalsol}) is equivalent to $$0=\beta i_{Q(\theta^{\sharp})}g=\gamma i_{\theta^{\sharp}}g+\delta \xi^{\flat}(\theta^{\sharp})\xi^{\flat}-\frac{1}{2}(i_{\nabla_{\theta^{\sharp}}\xi}g+\theta\circ \nabla \xi),$$ hence the conclusion. \end{proof}
The relations between the above notions can be summarized as the following.
\begin{theorem}\label{t4} Let $\theta$ be a $1$-form on a Riemannian manifold $(M,g)$. \\ {\rm (i)} If $\theta$ is a solution of the Schr\"{o}dinger-Ricci equation, then $\theta$ is
{\rm (a)} a Schr\"{o}dinger-Ricci harmonic form if and only if $d(\div(\theta^{\sharp}))=0$;
{\rm (b)} a harmonic form if and only if $\Ric_{\sharp}(\theta)+d(\div(\theta^{\sharp}))=0$.\\ {\rm (ii)} If $\theta$ is a Schr\"{o}dinger-Ricci harmonic form, then $\theta$ is
{\rm (a)} a solution of the Schr\"{o}dinger-Ricci equation if and only if $d(\div(\theta^{\sharp}))=0$;
{\rm (b)} a harmonic form if and only if $\Ric_{\sharp}(\theta)=0$.\\ {\rm (iii)} If $\theta$ is a harmonic form, then $\theta$ is
{\rm (a)} a solution of the Schr\"{o}dinger-Ricci equation if and only if $\Ric_{\sharp}(\theta)+d(\div(\theta^{\sharp}))=0$;
{\rm (b)} a Schr\"{o}dinger-Ricci harmonic form if and only if $\Ric_{\sharp}(\theta)=0$. \end{theorem}
From (\ref{generalsol}), (\ref{tr}) and Theorem \ref{t4}, we find the relations between the cases when the dual $1$-form $\xi^{\flat}$ of the potential vector field $\xi$ is a solution of the Schr\"{o}dinger-Ricci equation, is a Schr\"{o}dinger-Ricci harmonic form or a harmonic form. \begin{proposition} Let $(M^n,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\beta$ is nowhere zero on $M$.\\ {\rm (i)} If $\xi^{\flat}$ is a solution of the Schr\"{o}dinger-Ricci equation, then $\xi^{\flat}$ is
{\rm (a)} a Schr\"{o}dinger-Ricci harmonic form if and only if $$\beta\scal-n\gamma-\delta|\xi|^2 \ \ \text{is constant};$$
{\rm (b)} a harmonic form if and only if $$d(\beta\scal)=\frac{1}{\beta}(\gamma +\delta |\xi|^2)\xi^{\flat}+nd\gamma+|\xi|^2d\delta-\frac{1}{2\beta}\nabla_{\xi}\xi^{\flat}+\left(2\delta-\frac{1}{2\beta}\right)\xi^{\flat}\circ \nabla \xi.$$ \\ {\rm (ii)} If $\xi^{\flat}$ is a Schr\"{o}dinger-Ricci harmonic form, then $\xi^{\flat}$ is
{\rm (a)} a solution of the Schr\"{o}dinger-Ricci equation if and only if $$\beta\scal-n\gamma-\delta|\xi|^2 \ \ \text{is constant};$$
{\rm (b)} a harmonic form if and only if $$(\gamma +\delta |\xi|^2)\xi^{\flat}=\frac{1}{2}(\nabla_{\xi}\xi^{\flat}+\xi^{\flat}\circ \nabla \xi).$$\\ {\rm (iii)} If $\xi^{\flat}$ is a harmonic form, then $\xi^{\flat}$ is
{\rm (a)} a solution of the Schr\"{o}dinger-Ricci equation if and only if $$d(\beta\scal)=\frac{1}{\beta}(\gamma +\delta |\xi|^2)\xi^{\flat}+nd\gamma+|\xi|^2d\delta-\frac{1}{2\beta}\nabla_{\xi}\xi^{\flat}+\left(2\delta-\frac{1}{2\beta}\right)\xi^{\flat}\circ \nabla \xi;$$
{\rm (b)} a Schr\"{o}dinger-Ricci harmonic form if and only if $$(\gamma +\delta |\xi|^2)\xi^{\flat}=\frac{1}{2}(\nabla_{\xi}\xi^{\flat}+\xi^{\flat}\circ \nabla \xi).$$ \end{proposition}
\subsection{$1$-forms orthogonal to $\xi^{\flat}$}\label{Subsection3.4}
Two $1$-forms $\theta_1$ and $\theta_2$ are \textit{orthogonal} if $g(\theta_1^{\sharp},\theta_2^{\sharp})=0$, i.e., $\langle\theta_1,\theta_2\rangle=0$.
Since $\theta_1(\theta_2^{\sharp})=\theta_2(\theta_1^{\sharp})$, remark that $\theta_1$ and $\theta_2$ are orthogonal if and only if $$\theta_1^{\sharp}\in \ker \theta_2\ \ (\textit{or} \ \ \theta_2^{\sharp}\in \ker \theta_1).$$
Computing the generalized soliton equation (\ref{generalsol}) in $(\theta^{\sharp},\theta^{\sharp})$, we obtain $$g(\nabla_{\theta^{\sharp}}\xi,\theta^{\sharp})+\beta g(Q(\theta^{\sharp}),\theta^{\sharp})-\gamma g(\theta^{\sharp},\theta^{\sharp})=\delta (\xi^{\flat}(\theta^{\sharp}))^2$$ and we can state
\begin{proposition}\label{j} Let $(M,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\delta$ is nowhere zero on $M$. Then a $1$-form $\theta$ is orthogonal to $\xi^{\flat}$ if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies $$ \nabla_{\theta^{\sharp}}\xi+\beta Q(\theta^{\sharp})-\gamma \theta^{\sharp}\in \ker \theta. $$ \end{proposition}
Computing now the same equation in $(\theta^{\sharp}, \cdot\,)$, we find $$\frac{1}{2}(\nabla_{\theta^{\sharp}}\xi^{\flat}+\xi^{\flat}\circ \nabla \theta^{\sharp})=i_{\gamma \theta^{\sharp}-\beta Q(\theta^{\sharp})}g+\delta \xi^{\flat}(\theta^{\sharp})\xi^{\flat}$$ and we can state
\begin{proposition} Let $(M,g,\xi,\beta,\gamma,\delta)$ be a generalized soliton such that $\delta$ is nowhere zero on $M$. Then a $1$-form $\theta$ is orthogonal to $\xi^{\flat}$ if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies $$ \frac{1}{2}(\nabla_{\theta^{\sharp}}\xi^{\flat}+\xi^{\flat}\circ \nabla \theta^{\sharp})=i_{\gamma \theta^{\sharp}-\beta Q(\theta^{\sharp})}g. $$ \end{proposition}
\section{The gradient case}\label{Section5}
Let $\xi=\grad(f)$, $\xi^{\flat}=df$ with $f$ a smooth function on $(M,g)$. Taking into account that \cite{blag} $$ \div({\mathcal L}_{\xi}g)=2 d(\div(\xi))+2i_{Q\xi}g $$ and using (\ref{e61}): $$2\div(\Hess(f))=\Delta(df)+\Ric_{\sharp}(df)+d(\Delta (f)),$$ we deduce \begin{equation}\label{er} \Delta(df)=d(\Delta(f))+\Ric_{\sharp}(df). \end{equation} Therefore, $df$ is
(a) a solution of the Schr\"{o}dinger-Ricci equation if and only if $$\Delta(df)=0;$$
(b) a Schr\"{o}dinger-Ricci harmonic form if and only if $$\Delta(df)=\frac{1}{2}d(\Delta(f));$$
(c) a harmonic form if and only if $$df\circ Q=-d(\Delta(f)).$$
Also, an exact $1$-form $df$ is harmonic if and only if the function $f$ is harmonic and from (\ref{er}) we deduce that $df$ is harmonic if and only if $$df\circ Q=0 \Longleftrightarrow df\in \ker (\Ric_{\sharp}).$$
Computing the gradient generalized soliton equation (\ref{h}) in $(\theta^{\sharp}, \cdot\,)$, we obtain $$\nabla_{\theta^{\sharp}}\xi+\beta Q(\theta^{\sharp})=\gamma \theta^{\sharp}+\delta \xi^{\flat}(\theta^{\sharp})\xi$$ which, by taking the inner product with $\xi$, implies
$$\frac{1}{2}\theta^{\sharp}(|\xi|^2)+\beta\theta(Q\xi)=(\gamma+\delta|\xi|^2)\xi^{\flat}(\theta^{\sharp}).$$
Hence, we can state the following.
\begin{proposition}\label{P:5.1} Let $(M,g,\xi,\beta,\gamma,\delta)$ be a gradient generalized soliton such that $\delta$ is nowhere zero on $M$. Then a $1$-form $\theta$ is orthogonal to $\xi^{\flat}$ if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies $$ \nabla_{\theta^{\sharp}}\xi+\beta Q(\theta^{\sharp})-\gamma \theta^{\sharp}=0, $$ hence we have \begin{equation}\label{k}
\frac{1}{2}\theta^{\sharp}(|\xi|^2)=-\beta\theta(Q\xi). \end{equation} \end{proposition}
\begin{remark} If $(g,\xi:=\grad(f),\beta,\gamma,\delta)$ defines a gradient generalized soliton on a compact and oriented manifold $M$, and the $1$-form $\theta$ is orthogonal to $\xi^{\flat}$, then $$0=g(\nabla_{\xi}\xi,\theta^{\sharp})+\beta g(\xi,Q(\theta^{\sharp}))= (\nabla_{\xi}\xi^{\flat})\theta^{\sharp}+\beta \xi^{\flat}(Q(\theta^{\sharp})),$$ hence $$\theta^{\sharp}\in \ker(\nabla_{\xi}\xi^{\flat}+\beta (\xi^{\flat}\circ Q)).$$ \end{remark}
\begin{proposition}\label{P:5.11} Let $(M,g,\xi,\gamma,\delta)$ be a gradient almost $\xi^{\flat}$-Yamabe soliton such that $\delta$ is nowhere zero on $M$. Then a $1$-form $\theta$ is orthogonal to $\xi^{\flat}$ if and only if the dual vector field $\theta^{\sharp}$ of $\theta$ satisfies $$ \nabla_{\theta^{\sharp}}\xi=\gamma \theta^{\sharp}, $$ hence we have \begin{equation}\label{k}
\theta^{\sharp}(|\xi|^2)=0, \end{equation}
i.e., $|\xi|^2$ is constant along each integral curve of $\theta^{\sharp}$. \end{proposition}
In \cite{bcd} we proved that \begin{equation}\begin{aligned}\label{ghad}& \frac{1}{2}\Delta(\theta(\grad(f)))=\gamma \div(\theta^{\sharp})-\beta\div(Q(\theta^{\sharp}))+\frac{\beta}{2}\theta^{\sharp}(\scal)\\&\hskip.3in+2\langle df,\Delta (\theta)\rangle- \langle d(\Delta(f)), \theta\rangle+\delta (\nabla_{\grad(f)}\theta)(\grad(f)) \end{aligned}\end{equation} and the following Bochner-type formula for a gradient generalized soliton.
\begin{proposition}\label{P:5.5} \cite{bcd} Let $(M^n,g,\xi:=\grad(f),\beta,\gamma,\delta)$ be a gradient generalized soliton and $\theta$ a $1$-form on $M$. Then \begin{equation}\begin{aligned}\label{gh} &\left(\frac{1}{2}\Delta-\delta \nabla_{\grad(f)}\right)(\theta(\grad(f)))+\langle d(\Delta(f)), \theta\rangle-2\langle df,\Delta (\theta)\rangle-\gamma \div(\theta^{\sharp})\\&\hskip.3in +\beta\div(Q(\theta^{\sharp}))=
-\frac{\scal}{2}\theta^{\sharp}(\beta)+\frac{n}{2}\theta^{\sharp}(\gamma)+\frac{|\grad(f)|^2}{2}\theta^{\sharp}(\delta)\\&\hskip.5in+\frac{\delta}{2}\theta^{\sharp}(|\grad(f)|^2)-\frac{1}{2}\theta^{\sharp}(\Delta(f))-\delta \theta(\nabla_{\grad(f)}\grad(f)),\end{aligned}\end{equation} where $\langle \theta_1,\theta_2\rangle:=\sum_{i=1}^n\theta_1(E_i)\theta_2(E_i)$, for any $1$-forms $\theta_1$ and $\theta_2$, and $\{E_i\}_{1\leq i\leq n}$ a local orthonormal frame field on $(M,g)$. \end{proposition}
For $1$-forms orthogonal to $df$, from (\ref{ghad}) we deduce
\begin{proposition}\label{P:5.6} Let $(M,g,\xi:=\grad(f),\beta, \gamma,\delta)$ be a gradient generalized soliton and let $\theta$ be a closed and co-closed $1$-form on the closed Riemannian manifold $(M,g)$, orthogonal to $\xi^{\flat}=df$. Then $$\beta\div(Q(\theta^{\sharp}))= \frac{\beta}{2}\theta^{\sharp}(\scal)-\delta\theta(\nabla_{\grad(f)}\grad(f)).$$ \end{proposition}
As consequences, we have
\begin{corollary}\label{C:5.7} Let $(M,g,\xi:=\grad(f),\gamma)$ be a gradient almost Ricci soliton and let $\theta$ be a closed and co-closed $1$-form on the closed Riemannian manifold $(M,g)$, orthogonal to $\xi^{\flat}=df$.
{\rm (i)} If $Q(\theta^{\sharp})$ is divergence-free, then the scalar curvature of $M$ is constant along each integral curve of $\theta^{\sharp}$.
{\rm (ii)} In the compact case, $\int_M\theta^{\sharp}(\scal)dv=0$, where $dv$ is the volume element of $(M,g)$. \end{corollary}
\begin{corollary}\label{C:5.8} Let $(M,g,\xi:=\grad(f),\gamma,\delta)$ be a gradient almost $\xi^{\flat}$-Yamabe soliton and let $\theta$ be a closed and co-closed $1$-form on the closed Riemannian manifold $(M,g)$, orthogonal to $\xi^{\flat}=df$. Then either the soliton is a gradient almost generalized Yamabe soliton or $\theta^{\sharp}$ is orthogonal to $\nabla_{\grad(f)}\grad(f)$. \end{corollary}
\section{Generalized solitons on Euclidean submanifolds \\as examples}\label{Section6}
For general references on Euclidean submanifolds, we refer to \cite{book73,book20}. In this section, we will present some examples of generalized solitons on Euclidean submanifolds whose potential vector fields are given by their canonical vector fields.
For a Euclidean submanifold $M$ in $\mathbb E^{m}$ with position vector field ${\bf x}$, denote by ${\bf x}^T$ and ${\bf x}^N$ the tangential and normal components of ${\bf x}$, respectively. Thus, we have \begin{align} \label{6.1} {\bf x}={\bf x}^T+{\bf x}^N.\end{align} The tangent vector field ${\bf x}^{T}$ on $M$ is called the {\it canonical vector field} of $M$.
Let $\phi:(M,g) \to (\mathbb E^m,\tilde g)$ be an isometric immersion of a Riemannian $n$-manifold $(M,g)$ into the Euclidean $m$-space $(\mathbb E^m,\tilde g)$. We denote by $\nabla$ and $\widetilde\nabla$ the Levi-Civita connections of $(M,g)$ and of $(\mathbb E^m,\tilde g)$, respectively. Then the formula of Gauss and the formula of Weingarten are given respectively by \begin{align} &\label{6.2}\widetilde \nabla_XY=\nabla_XY+h(X,Y), \\& \label{6.3}\widetilde \nabla_X \zeta=-A_\zeta X+D_X\zeta,\end{align} for vector fields $X,Y$ tangent to $M$ and $\zeta$ normal to $M$, where $h$ is the second fundamental form, $A$ the shape operator, and $D$ the normal connection of $\phi$.
The shape operator $A$ and the second fundamental form $h$ are related by
\begin{align} &\nonumber\tilde g(h(X,Y),\zeta)=g(A_{\zeta}X,Y),\end{align}
and the {\it mean curvature vector} $H$ of $\phi$ is given by
\begin{align}\nonumber H=\frac{1}{n}\,{\rm Trace}\, (h).\end{align}
A submanifold $M$ is called {\it totally umbilical} if its second fundamental form $h$ satisfies $ h(X,Y)=g(X,Y)H$, for any vector fields $X,Y$ tangent to $M$.
A hypersurface $M$ of $\mathbb E^{n+1}$ is called {\it quasi-umbilical} if its shape operator admits an eigenvalue, say $\kappa$, with multiplicity $mult(\kappa)\geq n-1$ (cf. \cite[page 147]{book73}). On the open subset $U$ of $M$ on which $mult(\kappa)=n-1$, an eigenvector with eigenvalue of multiplicity one is called a {\it distinguished principal direction} of the hypersurface $M$.
\begin{lemma} For a Euclidean submanifold $M$ in $\mathbb E^{m}$, we have \begin{align} \label{6.6}& \nabla_X {\bf x}^T=A_{{\bf x}^N}X+X, \\&\label{6.7} h(X, {\bf x}^T)=-D_X {\bf x}^N.\end{align} \end{lemma} \begin{proof} It is well-known that the position vector field ${\bf x}$ of $M$ in $\mathbb E^{m}$ is a concurrent vector field, i.e., ${\bf x}$ satisfies \begin{align} \label{6.8} \widetilde\nabla_X{\bf x}=X,\end{align} for any vector field $X$ tangent to $M$. It follows from \eqref{6.1}, \eqref{6.2}, \eqref{6.3} and \eqref{6.8} that \begin{equation}\begin{aligned} \label{6.9} X&=\widetilde \nabla_X {\bf x}^T+\widetilde\nabla_X {\bf x}^N=
\nabla_X {\bf x}^T+h(X,{\bf x}^T)-A_{{\bf x}^N}X+D_X {\bf x}^N. \end{aligned}\end{equation} After comparing the tangential and normal components from \eqref{6.9}, we obtain \eqref{6.6} and \eqref{6.7}. \end{proof}
It follows from \eqref{6.6} and the definition of the Lie derivative that \begin{equation}\begin{aligned} \nonumber({\mathcal L}_{{\bf x}^T}g)(X,Y)=2g(X,Y)+2g(A_{{\bf x}^N}X,Y), \end{aligned}\end{equation} for any vector fields $X,Y$ tangent to $M$. After combining \eqref{generalsol} with this relation, we obtain the following.
\begin{theorem}\label{T:6.2} Let $M$ be a submanifold of $\mathbb E^{m}$. Then $(M,g,{\bf x}^T,\beta,\gamma,\delta)$ is a generalized soliton if and only if the Ricci tensor of $M$ satisfies \begin{equation}\label{6.11} \beta\,\Ric(X,Y)=(\gamma-1)g(X,Y)+\delta\, g({\bf x}^{T},X)g({\bf x}^{T},Y)-g(A_{{\bf x}^N}X,Y), \end{equation} for any vector fields $X,Y$ tangent to $M$. \end{theorem}
If the Euclidean submanifold $M$ lies in the unit hypersphere $S_{o}^{m-1}(1)$ of $\mathbb E^{m}$ centered at the origin $o\in\mathbb E^{m}$, then we have ${\bf x}^{N}={\bf x}$ and $A_{{\bf x}^N}=-I$, where $I$ denotes the identity map. Thus, in this case, \eqref{6.11} reduces to \begin{equation}\label{6.12} \beta\,\Ric(X,Y)=\gamma\,g(X,Y),\end{equation} for any vector fields $X,Y$ tangent to $M$.
Consequently, we have the following.
\begin{corollary}\label{C:6.3} If a submanifold $M$ is contained in the unit hypersphere $S_{o}^{m-1}(1)$ of $\mathbb E^{m}$ centered at the origin $o\in \mathbb E^{m}$, then $(M,g,{\bf x}^T,\beta,\gamma,\delta)$ with $\beta$ nowhere zero on $M$ is a generalized soliton if and only if $(M,g)$ is an Einstein manifold. In this case, the ratio $\gamma:\beta$ is a constant. \end{corollary}
If $\beta=0$, \eqref{6.12} also gives the following.
\begin{corollary}\label{C:6.4} Let $M$ be a submanifold lying in a hypersphere $S_{o}^{m-1}$ of $\mathbb E^{m}$ centered at the origin $o\in\mathbb E^{m}$. If $\beta=0$, then $(M,g,{\bf x}^T,\beta,\gamma,\delta)$ is a generalized soliton if and only if $\gamma=0$.\end{corollary}
If $M$ is a hypersurface of $\mathbb E^{m}$, then Theorem \ref{T:6.2} implies the following.
\begin{corollary}\label{C:6.5} Let $M$ be a hypersurface of $\mathbb E^{m}$ with $m>4$. If $(M,g,{\bf x}^T,\beta,\gamma,\delta)$ with $\delta = 0$ is a generalized soliton, then $M$ is a quasi-umbilical hypersurface. In particular, $M$ is a conformally flat space. \end{corollary}
The next classification theorem was proved in \cite[Theorem 6.1]{cd14}.
\begin{theorem} \label{T:6.6} Let $(g,{\bf x}^T,\delta)$ define a Ricci soliton on a hypersurface $M^n$ of $\mathbb E^{n+1}$. Then $M$ is one of the following hypersurfaces of $\mathbb E^{n+1}$: \\ {\rm (1)} a hyperplane through the origin;\\ {\rm (2)} a hypersphere centered at the origin;\\ {\rm (3)} an open part of a flat hypersurface generated by lines through the origin; \\ {\rm (4)} an open part of a circular hypercylinder $S^1(r)\times \mathbb E^{n-1}$, $r>0$;\\ {\rm (5)} an open part of a spherical hypercylinder $S^k(\sqrt{k-1})\times \mathbb E^{n-k}$, $2\leq k\leq n-1$.\end{theorem}
After combining this result with Theorem \ref{te44}, we have the following.
\begin{proposition} For each hypersurface $M$ of the 5 cases listed in Theorem \ref{T:6.6}, the dual $1$-form $({\bf x}^{T})^{\flat}$ of the canonical vector field ${\bf x}^{T}$ of $M$ is Schr\"{o}dinger-Ricci harmonic. \end{proposition}
Recall that a vector field on a Riemannian manifold is called {\it conservative} if it is the gradient of some function, known as a {\it scalar potential}, and it is called {\it incompressible} if it is divergence-free.
The next result was obtained in \cite{chen17}.
\begin{theorem}\label{T:6.8} Let $M$ be a submanifold of $\mathbb{E}^m$. Then we have:\\ {\rm (1)} the canonical vector field ${\bf x}^{T}$ of $M$ is always conservative;\\ {\rm (2)} ${\bf x}^{T}$ is incompressible if and only if $\left< {\bf x},H\right>=-1$ holds identically on $M$. \end{theorem}
Theorem \ref{T:6.8} implies the following.
\begin{corollary}\label{C:6.9} Let $M$ be a compact submanifold of $\mathbb E^{m}$. Then the dual $1$-form $({\bf x}^{T})^{\flat}$ of the canonical vector field ${\bf x}^{T}$ of $M$ is harmonic if and only if $\left< {\bf x},H\right>=-1$ holds identically on $M$.\end{corollary}
Several explicit examples of Euclidean submanifolds with $({\bf x}^{T})^{\flat}$ harmonic $1$-form were given in \cite{chen17}. In particular, by applying \cite[Theorem 3.2]{chen17} and Corollary \ref{C:6.9} we obtain the following.
\begin{corollary} For every equivariantly isometrical immersion of a compact homogeneous Riemannian manifold $M$ into a Euclidean $m$-space $\mathbb E^{m}$, the dual $1$-form $({\bf x}^{T})^{\flat}$ of the canonical vector field ${\bf x}^{T}$ of $M$ is a harmonic $1$-form. \end{corollary}
\textit{Adara M. Blaga}
\textit{Department of Mathematics}
\textit{West University of Timi\c{s}oara}
\textit{Timi\c{s}oara, Rom\^{a}nia}
\textit{[email protected]}
\textit{Bang-Yen Chen}
\textit{Department of Mathematics}
\textit{Michigan State University}
\textit{East Lansing, MI, USA}
\textit{[email protected]}
\end{document}
Yuan Xu1,4,
Jing Cao1,
Yuriy S. Shmaliy2 &
Yuan Zhuang3 (ORCID: orcid.org/0000-0003-3377-9658)
Satellite Navigation volume 2, Article number: 22 (2021)
Colored Measurement Noise (CMN) has a great impact on the accuracy of human localization in indoor environments with Inertial Navigation System (INS) integrated with Ultra Wide Band (UWB). To mitigate its influence, a distributed Kalman Filter (dKF) is developed for Gauss–Markov CMN with switching Colouredness Factor Matrix (CFM). In the proposed scheme, a data fusion filter employs the difference between the INS- and UWB-based distance measurements. The main filter produces a final optimal estimate of the human position by fusing the estimates from local filters. The effect of CMN is overcome by using measurement differencing of noisy observations. The tests show that the proposed dKF developed for CMN with CFM can reduce the localization error compared to the original dKF, and thus effectively improve the localization accuracy.
With the improvement of people's living standards, the population aging problem has become increasingly serious in China. Consequently, health care for elderly people has gradually received due attention and become a new research area (Li et al., 2016; Xu et al., 2018, 2019; Zhuang et al., 2019c). As an important technology to assist indoor medical care, localization and tracking of target personnel appear in many works (Chen et al., 2020; Tian et al., 2020), and many approaches have been developed to provide the localization with sufficient accuracy.
The Global Positioning System (GPS) is the most widely used localization solution (El-Sheimy & Youssef, 2020; Li et al., 2020; Mosavi & Shafiee, 2016). For example, the GPS signals are used in Sekaran et al. (2020) to navigate a robot car. A drawback of GPS is that its signals are not always available in indoor environments (El-Sheimy & Li, 2021). Accordingly, short-range communication technologies, such as Radio Frequency IDentification (RFID) (Tzitzis et al., 2019), Bluetooth, Wireless Fidelity (WiFi), and Ultra Wide Band (UWB), have been developed for GPS-denied spaces. For example, an active RFID tag-based pedestrian navigation scheme was proposed in Fu and Retscher (2009). In Zhuang and El-Sheimy (2015), WiFi was used to assist micro-electromechanical-systems sensors for indoor pedestrian navigation. An improved UWB localization structure was investigated in Yu et al. (2019) in a harsh indoor environment.
The localization techniques discussed above can provide indoor navigation with sufficient accuracy. Compared to RFID, Bluetooth, and WiFi, UWB-based localization can be more accurate. Consequently, several UWB-based solutions have been proposed in the last decades. However, these short-range communication and localization techniques require pre-placed devices that cannot always be deployed properly in indoor spaces. To overcome this problem, several self-contained localization structures were proposed, such as the indoor pedestrian navigation scheme (Li et al., 2016) and a foot-mounted pedestrian navigation system based on the Inertial Navigation System (INS) (Gu et al., 2015).
The INS-based navigation can be organized as a self-contained system. However, its accuracy is acceptable only over a short time interval due to the accumulation of drift errors. This shortcoming can be circumvented by integrating the INS with short range-based communication technologies, as shown in Zhuang et al. (2019a). The INS/UWB integrated scheme is a typical example, but there exist many other approaches. For example, in Xu et al. (2021) and Zhang et al. (2020), a UWB/INS integrated pedestrian navigation algorithm was proposed, which employed the INS to assist the UWB and improve robustness. Another INS/UWB integrated scheme was designed for quadrotor localization in Xu et al. (2021). Seamless indoor pedestrian tracking using the fusion of INS and UWB data is discussed in Xu et al. (2020). The advantages of the integrated schemes are higher accuracy and robustness.
It is obvious that data fusion can improve the localization accuracy (Zhao & Huang, 2020). In a hybrid navigation technology, the Kalman Filter (KF) is typically used as the linear fusing model (Norrdine et al., 2016; Zhao et al., 2016; Zhuang et al., 2019b). For nonlinear models, the fusion is often organized using the Extended Kalman Filter (EKF) (Hsu et al., 2017), the Iterated Extended Kalman Filter (IEKF) (Xu et al., 2013), and the Unscented Kalman Filter (UKF) (Chen et al., 2015). Note that the above-mentioned filters are centralized. Although such filters can fuse sensor data, their drawbacks, compared to distributed filters, are higher operation complexity and poorer fault tolerance. Moreover, the sensor data can be affected by Colored Measurement Noise (CMN). For example, Fig. 1 displays the UWB-derived distance with the CMN and white measurement noise. One can thus infer that the CMN is an important error factor in sensor data. It is worth noticing that although the KF-based algorithms solve the problem of multi-sensor data fusion in an integrated navigation system and improve the localization accuracy, they are not efficient under the CMN observed in UWB data.
The distance with the colored measurement noise and white measurement noise
To mitigate the effect of CMN on the navigation accuracy in INS/UWB integrated schemes in indoor environments, in this paper we modify the distributed KF (dKF) under Gauss–Markov CMN with an assumption that the Colouredness Factor Matrix (CFM) can switch at some points due to unstable operation conditions. A local filter employs the differences between the INS-measured and UWB-measured distances. The main filter produces the final estimates by fusing the estimates provided by local filters. The effect of CMN is mitigated in local filters using measurement differencing. The experiments show that the dKF modified for CMN with switch CFM can reduce the localization Root Mean Square Error (RMSE) by \(26.85\%\) compared to the standard dKF.
The rest of this work is structured as follows. First, the INS/UWB integration for the human localization scenario operating under CMN is described. Second, a dKF is developed for CMN with switch CFM. Third, the experiment is introduced. Fourth, the comparisons are made in terms of the localization accuracy given by the INS, UWB, dKF, and dKF modified for CMN with switch CFM. Finally, conclusions are drawn.
INS/UWB integrated human navigation under CMN
The proposed INS/UWB integrated human localization scheme affected by CMN is shown in Fig. 2. In this structure, the INS and UWB subsystems work in parallel. The fusing filter is organized in such a way that one main filter works together with M sub-filters. The jth sub-filter, \(j \in [1,M]\), is employed to estimate the target human position by fusing ranges \(r^{UWB}_j\) and \(r^{INS}_j\) from the target person to the jth UWB Reference Node (RN) under CMN at a discrete time index n. The main filter fuses the results of the sub-filters to produce an optimal estimate.
INS/UWB integrated human localization scheme for distributed localization under CMN
Design of dKF for CMN with switch CFM
In this section, we modify the dKF under Gauss–Markov CMN. First, we consider the state-space model of the navigation problem. Then, the dKF is designed under CMN assuming switch CFM. Finally, the main filter fuses the results of the sub-filters.
Sub-filters for CMN
The state equation representing the 2D human dynamics and related to the jth sub-filter is described by:
$$\begin{aligned} {\varvec{x}}_n^{(j)} &= {{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)}\\&= { \left[ {\begin{array}{*{20}{c}} 1&\quad {{T^{(j)}}}&\quad 0&\quad 0\\ 0&\quad 1&\quad 0&\quad 0\\ 0&\quad 0&\quad 1&\quad {{T^{(j)}}}\\ 0&\quad 0&\quad 0&\quad 1 \end{array}} \right] {\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)}} \end{aligned}$$
where the state vector is defined as
$$\begin{aligned} {\varvec{x}}_n^{(j)} = \left[ {\begin{array}{*{20}c} {\delta {\mathrm{Pos}}_n^{E{(j)}} } &\quad {\delta {\mathrm{Vel}}_n^{E{(j)}} } & \quad {\delta {\mathrm{Pos}}_n^{N{(j)}} } &\quad {\delta {\mathrm{Vel}}_n^{N{(j)}} } \\ \end{array}} \right] ^{\mathrm{T}}, \end{aligned}$$
in which \(( {\delta {\mathrm{Pos}}_n^{E{(j)}} ,\delta {\mathrm{Pos}}_n^{N{(j)}} })\) and \(( {\delta {\mathrm{Vel}}_n^{E{(j)}} ,\delta {\mathrm{Vel}}_n^{N{(j)}} } )\) are the position and velocity errors in east and north directions, \(T^{(j)}\) is the sample time for the jth sub-filter, \({\varvec{w}}_n^{(j)} \sim {\mathcal {N}} ({\varvec{0}}, {\varvec{Q}}^{(j)})\) is noise in the jth sub-filter.
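For concreteness, the constant-velocity error model of (1) can be written down directly. The minimal Python/NumPy sketch below only assembles the transition matrix \(F^{(j)}\) exactly as displayed above; the process-noise covariance \(Q^{(j)}\) is left to the user, since its value is not specified here, and the 0.45 s sampling period is the one used in the tests later in the paper.

```python
import numpy as np

def cv_transition(T):
    """Transition matrix F(T) of the 2-D constant-velocity error model in (1)."""
    return np.array([[1.0, T, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, T],
                     [0.0, 0.0, 0.0, 1.0]])

F = cv_transition(0.45)  # sample time T^{(j)} = 0.45 s, as in the experiments below
```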
The observation equation corresponding to the data obtained by the jth sub-filter is written as
$$\begin{aligned} y_n^{(j)} &= \delta {\mathrm{r}}_n^{(j)} = {\mathrm{r}}_n^{I(j)}- {\mathrm{r}}_n^{U(j)}\\ & = \frac{{\delta _{{\mathrm{x}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}\delta {\mathrm{Pos}}_n^{E(j)} + \frac{{\delta _{{\mathrm{y}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}\delta {\mathrm{Pos}}_n^{N(j)} + {\varvec{v}}_n^{(j)}\\ & = {\left[ {\begin{array}{*{20}{c}} {\frac{{\delta _{{\mathrm{x}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}}\\ 0\\ {\frac{{\delta _{{\mathrm{y}}n}^{(j)}}}{{\sqrt{\delta _{{\mathrm{x}}n}^{{{(j)}^2}} + \delta _{{\mathrm{y}}n}^{{{(j)}^2}}} }}}\\ 0 \end{array}} \right] ^{\mathrm{T}}}{\varvec{x}}_n^{(j)} + {\varvec{v}}_n^{(j)}\\ & = {\varvec{H}}_n^{(j)}{\varvec{x}}_n^{(j)} + {\varvec{v}}_n^{(j)} \end{aligned}$$
where \(j \in [1,M]\), \(\delta _{{\mathrm {x}}n}^{(j)} = {\mathrm{Pos}}_n^{E,I} - x^{(j)}\), \(\delta _{{\mathrm {y}}n}^{(j)} = {\mathrm{Pos}}_n^{N,I} - y^{(j)}\), \(\left( {{\mathrm{Pos}}_n^{E,I} ,{\mathrm{Pos}}_n^{N,I}} \right)\) denotes INS positions in east and north directions, and \({\varvec{v}}_n\) is Gauss–Markov CMN represented with
$$\begin{aligned} {\varvec{v}}_n^{\left( j \right) } = \alpha _n^{\left( j \right) } {\varvec{v}}_{n-1}^{\left( j \right) } + \gamma _n^{\left( j \right) } \end{aligned}$$
where \(\gamma _n^{\left( j \right) } \sim {\mathcal {N}} ({\varvec{0}},{\varvec{R}})\) is white Gaussian driving noise and \(\alpha _n^{\left( j \right) }\) is the CFM.
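To illustrate what the Gauss–Markov model above produces, the following minimal NumPy sketch simulates a scalar colored-noise sequence for a chosen colouredness factor; the noise standard deviation and sequence length are illustrative values only, not taken from the paper.

```python
import numpy as np

def gauss_markov_noise(n_samples, alpha, std_gamma, seed=None):
    """Scalar first-order Gauss-Markov noise: v_n = alpha * v_{n-1} + gamma_n."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_samples)
    gamma = rng.normal(0.0, std_gamma, n_samples)  # white Gaussian driving noise
    for n in range(1, n_samples):
        v[n] = alpha * v[n - 1] + gamma[n]
    return v

colored = gauss_markov_noise(500, alpha=0.9, std_gamma=0.05)       # strongly colored
almost_white = gauss_markov_noise(500, alpha=0.1, std_gamma=0.05)  # nearly white
```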
To address the effects of CMN and apply filtering algorithms, we use measurement differencing and write the new observation equation as
$$\begin{aligned} {\varvec{z}}_n^{\left( j \right) }&= {\varvec{y}}_n^{\left( j \right) } - \alpha _n^{\left( j \right) } {\varvec{y}}_{n-1}^{\left( j \right) } \nonumber \\&= {\varvec{H}}_n^{\left( j \right) } {\varvec{x}}_n^{\left( j \right) } + {\varvec{v}}_n^{\left( j \right) } - \alpha _n^{\left( j \right) } {\varvec{H}}_{n - 1}^{\left( j \right) } {\varvec{x}}_{n - 1}^{\left( j \right) } \nonumber \\&\quad - \alpha _n^{\left( j \right) }{\varvec{v}}_{n - 1}^{\left( j \right) } \end{aligned}$$
From (1) and (3) we obtain
$$\begin{aligned} {\varvec{x}}_{n - 1}^{(j)}&= {{\varvec{F}}^{(j)^{-1}}} ( {{\varvec{x}}_n^{(j)} - {\varvec{w}}_n^{(j)}} ) \end{aligned}$$
$$\begin{aligned} {\varvec{v}} _{n - 1}^{(j)}&= {\alpha _n^{(j)^{-1}} } ( {{\varvec{v}} _n^{(j)} - \gamma _n^{(j)}} ) \end{aligned}$$
and substitute them into (4), giving
$$\begin{aligned} {\varvec{z}}_n^{(j)}&= ( {\varvec{H}}_n^{(j)} - {\varvec{T}}_n^{(j)} ) {\varvec{x}}_n^{(j)} + {\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)} \nonumber \\&= {\varvec{D}}_n^{(j)} {\varvec{x}}_n^{(j)} + {\bar{\gamma }}_n^{(j)} \end{aligned}$$
where \({\varvec{T}}_n^{(j)} = \alpha _n^{(j)} {\varvec{H}}_n^{(j)} {\varvec{F}}^{(j)^{-1}}\), \({\varvec{D}}_n^{(j)} = {\varvec{H}}_n^{(j)} - {\varvec{T}}_n^{(j)}\), \({{\bar{\gamma }}} _n^{(j)} ={\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)}\) and \({\bar{\gamma }}_n^{(j)} \sim {\mathcal {N}} ( {{\varvec{0}},{\bar{{\varvec{R}}}} } )\) is white Gaussian noise
$$\begin{aligned} {\bar{\gamma }}_n^{(j)} = {\varvec{T}}_n^{(j)} {\varvec{w}}_n^{(j)} + \gamma _n^{(j)} \end{aligned}$$
with the covariance
$$\begin{aligned} \overline{\varvec{R}} &= {\mathrm{E}}\{ {{\bar{\gamma }}} _n^{(j)}{{\bar{\gamma }}} _n^{{(j)}^{\mathrm{T}}}\} = {\varvec{T}}_n^{(j)}{{\varvec{Q}}^{(j)}}{\varvec{T}}_n^{{(j)}^{\mathrm{T}}}+ {\varvec{R}}\\ & = {\varvec{T}}_n^{(j)}{\varvec{\Phi }}_n^{(j)} + {\varvec{R}} \end{aligned}$$
where \({\varvec{\Phi }}_n^{(j)} = {\varvec{Q}}^{(j)} {\varvec{T}}_n^{(j)^T}\). It follows from (8) that the observation noise \({\bar{\gamma }}_n^{(j)}\) is time-correlated with system noise \({\varvec{w}}_n^{(j)}\) and the KF cannot be applied straightforwardly. To de-correlate noise, we follow Shmaliy et al. (2020) and modify the state equation (1) as
$$\begin{aligned} {\varvec{x}}_n^{(j)} &= {{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{w}}_n^{(j)} + {\varvec{\beta }}_n^{(j)}[{\varvec{z}}_n^{(j)} - ({\varvec{D}}_n^{(j)}{\varvec{x}}_n^{(j)} + {\bar{{{\varvec{\gamma }}} }}_n^{(j)})]\\ & = ({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){{\varvec{F}}^{(j)}}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{\beta }}_n^{(j)}{\varvec{z}}_n^{(j)}\\ &\quad + ({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){\varvec{w}}_n^{(j)} - {\varvec{\beta }}_n^{(j)}{\bar{{{\varvec{\gamma }}} }}_n^{(j)}\\ & = {\varvec{A}}_n^{(j)}{\varvec{x}}_{n - 1}^{(j)} + {\varvec{u}}_n^{(j)} + {\varvec{\eta }}_n^{(j)} \end{aligned}$$
where \({{\varvec{\eta }}_n^{(j)} } \sim {\mathcal {N}} ({\varvec{0}},{\varvec{\Theta }}_n^{(j)})\) has the covariance
$$\begin{aligned} {\varvec{\Theta }}_n^{(j)} &= {\mathrm{E}}\left\{ {[({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){\varvec{w}}_n^{(j)} - {\varvec{\beta }}_n^{(j)}{\bar{{{\varvec{\gamma }}}}}_n^{(j)}]} \right. \\ &\quad \left. {{{[({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{D}}_n^{(j)}){\varvec{w}}_n^{(j)} - {\varvec{\beta }}_n^{(j)}{\bar{{{\varvec{\gamma }}}}}_n^{(j)}]}^{\mathrm{T}}}} \right\} \\ & = ({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{H}}_n^{(j)}){\varvec{Q}}_n^{(j)}{({\varvec{I}} - {\varvec{\beta }}_n^{(j)}{\varvec{H}}_n^{(j)})^{\mathrm{T}}}\\ &\quad + {\varvec{\beta }}_n^{(j)}{\varvec{R}}{({\varvec{\beta }}_n^{(j)})^{\mathrm{T}}} \end{aligned}$$
The noise vectors \({{\varvec{\eta }}_n^{(j)} }\) and \({{{\bar{\gamma }}} _n^{(j)} }\) will be de-correlated if the condition \({\mathrm {E}} \{ {{\varvec{\eta }}_n^{(j)} ( {{{\bar{\gamma }}} _n^{(j)} } )^{\mathrm{T}} } \} = 0\) is satisfied, which can be achieved with
$$\begin{aligned} {\varvec{\beta }}_n^{(j)}&= \Phi _n^{(j)} ( {{\varvec{H}}_n^{(j)} \Phi _n^{(j)} + {\varvec{R}}} )^{ - 1}\,, \end{aligned}$$
$$\begin{aligned} {\varvec{\Theta} }_n^{(j)}&= ( {{\varvec{I}} - {\varvec{\beta }}_n^{(j)} {\varvec{H}}_n^{(j)} } ){\varvec{Q}}^{( j )} ( {{\varvec{I}} - {\varvec{\beta }}_n^{(j)} {\varvec{D}}_n^{(j)} } )^{\mathrm{T}} \end{aligned}$$
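The quantities introduced in (5)–(13) translate directly into code. The sketch below only evaluates the displayed formulas for one sub-filter and one value of the colouredness factor; it assumes the measurement matrix is passed as a 1 × 4 row array and the driving-noise variance R as a scalar or 1 × 1 array, and it is not a reproduction of the authors' Algorithm 1.

```python
import numpy as np

def decorrelation_terms(F, H, Q, R, alpha):
    """Evaluate T_n, D_n, Phi_n, beta_n, Rbar and Theta_n of (7)-(13) for one sub-filter."""
    R = np.atleast_2d(R)
    T = alpha * H @ np.linalg.inv(F)               # T_n = alpha_n H_n F^{-1}
    D = H - T                                      # D_n = H_n - T_n
    Phi = Q @ T.T                                  # Phi_n = Q T_n^T
    beta = Phi @ np.linalg.inv(H @ Phi + R)        # de-correlating gain, (12)
    Rbar = T @ Phi + R                             # covariance of the new measurement noise, (8)
    I = np.eye(F.shape[0])
    Theta = (I - beta @ H) @ Q @ (I - beta @ D).T  # modified process-noise covariance, (13)
    return T, D, Phi, beta, Rbar, Theta
```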
With the noises de-correlated, the algorithm of the sub-KF method operating under CMN is described in Algorithm 1. Unlike the standard KF, Algorithm 1 requires the CFM \(\alpha _n^{(j)}\) at each n. To set \(\alpha _n^{(j)}\) properly, we take notice of possible time variations in \(\alpha _n^{(j)}\) and switch the CFM by the following steps:
Set several possible values of \(\alpha _n^{(j),i}, i\in [1,q]\).
Run q sub-KFs with \(\alpha _n^{(j),i}, i\in [1,q]\) in parallel.
Compute the Mahalanobis distance (Mahalanobis, 1936)
$$\begin{aligned} L_n^{\alpha _n^{(j),i} } = ( {{\varvec{z}}_n^{(j)} - {\varvec{D}}_n^{(j)} {{\hat{\varvec{x}}}}_n^{(j)-} } )^T {{\varvec{R}}^{(j)^{-1}} } ( {{\varvec{z}}_n^{(j)} - {\varvec{D}}_n^{(j)} {{\hat{\varvec{x}}}}_n^{(j)-} } ) \end{aligned}$$
Find \(\alpha _{n,{\mathrm {opt}}}^{(j)}\) by solving the minimization problem
$$\begin{aligned} \alpha _{n,{\mathrm {opt}}}^{(j)} = \mathop {\arg \min }\limits _{\alpha _n^{(j),i} } {L_n^{\alpha ^{(j),i} } } \end{aligned}$$
A pseudo code of the sub-KF for CMN with switch CFM is listed as Algorithm 2 and the structure of this filter is shown in Fig. 3. Having selected a proper range for CFM, this algorithm determines \(\alpha _{n,{\mathrm {opt}}}^{(j)}\), which is further used in the main filter.
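Since the listing of Algorithm 2 is not reproduced here, the snippet below sketches only the switching logic of steps 1–4: each candidate CFM is assumed to be served by its own parallel sub-KF supplying a prior estimate and measurement matrix, and the candidate with the smallest Mahalanobis distance (14) is selected. The interface names are illustrative, not the authors' code.

```python
import numpy as np

def select_cfm(z, candidates, x_prior, D_mat, R):
    """Pick the candidate alpha that minimizes the Mahalanobis distance (14).

    x_prior[a] and D_mat[a] are the prior estimate and measurement matrix of the
    parallel sub-KF run with candidate colouredness factor a (assumed bookkeeping).
    """
    R = np.atleast_2d(R)
    best_alpha, best_L = None, np.inf
    for a in candidates:
        innov = np.atleast_2d(z) - D_mat[a] @ x_prior[a]  # z_n - D_n x_n^-
        L = float(innov.T @ np.linalg.inv(R) @ innov)     # Mahalanobis distance (14)
        if L < best_L:
            best_alpha, best_L = a, L
    return best_alpha
```

In the experiments reported below the candidate set is {0.1, 0.3, 0.5, 0.7, 0.9}.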
Structure of the sub-KF for CMN with switch CFM
Distributed KF for CMN with switch CFM
The dKF algorithm used in the proposed navigation system is responsible for fusing data collected from local sub-filters and estimating the object position \(\hat{\varvec{x}}_n\) and localization error covariance \({\varvec{P}}_n\) as
$$\begin{aligned} \hat{\varvec{x}}_n&= {\varvec{P}}_n ( {\varvec{P}}_n^{(1)^{-1}} \hat{\varvec{x}}_n^{(1)} + {\varvec{P}}_n^{(2)^{-1}} \hat{\varvec{x}}_n^{(2)} \nonumber \\&\quad + \dots + {\varvec{P}}_n^{(M)^{-1}} \hat{\varvec{x}}_n^{(M)}) \,, \end{aligned}$$
$$\begin{aligned} {\varvec{P}}_n^{-1}&= {\varvec{P}}_n^{(1)^{-1}} + {\varvec{P}}_n^{(2)^{-1}} + \cdots + {\varvec{P}}_n^{(M)^{-1}} \end{aligned}$$
A pseudo code of the main dKF for CMN with switch CFM is listed as Algorithm 3.
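The fusion rule (15)–(16) is a plain information-weighted average and can be sketched as follows; the local estimates and covariances are assumed to be supplied by the sub-filters described above.

```python
import numpy as np

def fuse_local_estimates(x_locals, P_locals):
    """Main-filter fusion of (15)-(16): information-weighted average of local estimates."""
    infos = [np.linalg.inv(P) for P in P_locals]
    P = np.linalg.inv(sum(infos))                            # (16)
    x = P @ sum(Ij @ xj for Ij, xj in zip(infos, x_locals))  # (15)
    return x, P
```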
Experimental setup
To test the dKF designed under CMN with switch CFM, we exploited the INS/UWB human localization system deployed in the No. 14 building of the University of Jinan, Jinan, China, as shown in Fig. 4. The target person equipped with the experimental devices is pictured in Fig. 5. In this work, we conducted two tests, where we exploited the INS and the UWB localization systems described in Xu et al. (2019). The testbed worked as follows. We placed UWB RNs in indoor spaces according to designed positions and installed a Blind Node (BN) on a moving target. The target person wore an Inertial Measurement Unit (IMU) to obtain the INS results. The encoder was used to measure the distance from the start point. In this experiment, a target traveled along a planned trajectory, which was complicated by obstacles. Moreover, the method to obtain the ground truth coordinates in the experimental test can be found in Xu et al. (2019). It has two phases: (1) establishing the mapping between the distance walked along the planned path from the start point and the ground truth coordinate and (2) encoding, to measure the walking distance and calculate the ground truth coordinates through the constructed mapping.
Testing experimental setup
The target human
Localization errors
To test this filter, we specify the state space model for \(q=5\) and \(T^{(j)} = 0.45\, {\mathrm {s}}, j\in [1,4]\), with \(\alpha _n^{(j),1}=0.1\), \(\alpha _n^{(j),2}=0.3\), \(\alpha _n^{(j),3}=0.5\), \(\alpha _n^{(j),4}=0.7\), \(\alpha _n^{(j),5}=0.9\).
Error comparison of KF and dKF
Since the models (1)–(7) are linear, the linear data fusion is used in this paper. The localization error \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by INS, UWB, KF, and dKF in Test 1 is computed as
$$\begin{aligned} {\mathrm{Pos}}\_{\mathrm{error}} = \sqrt{\left({\mathrm{Pos}}^{\mathrm{E}} - {{\mathrm{Pos}}^{\mathrm{E,R}}}\right) ^2 + \left( {\mathrm{Pos}}^{\mathrm{N}} - {\mathrm {Pos}}^{\mathrm{N,R}} \right) ^2 } \end{aligned}$$
where \(\left( {\mathrm {Pos}}^{\mathrm{E,R}},{\mathrm { Pos}}^{\mathrm{N,R}}\right)\) denote the reference coordinates. The cumulative distribution functions (CDFs) are sketched in Fig. 6. From this figure, one can see that the KF and dKF can reduce the localization error over the INS and UWB. Also, the dKF has a better performance than the KF. The INS, UWB, KF, and dKF results in Test 1 are given in Table 1, which suggests that the dKF gives the smallest localization error.
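The error measure (17) and the empirical CDFs used in the comparisons can be computed as in the short sketch below; variable names are illustrative.

```python
import numpy as np

def position_error(pos_e, pos_n, ref_e, ref_n):
    """Horizontal localization error of (17) at every epoch."""
    return np.hypot(np.asarray(pos_e) - np.asarray(ref_e),
                    np.asarray(pos_n) - np.asarray(ref_n))

def empirical_cdf(err):
    """Sorted errors and cumulative probabilities for CDF plots such as Fig. 6."""
    e = np.sort(np.asarray(err))
    return e, np.arange(1, e.size + 1) / e.size
```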
Table 1 Position RMSEs produced by the INS, UWB, KF, and dKF in Test 1
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the KF and dKF in Test 1
The trajectories planned or estimated by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in test 1 are shown in Fig. 7. As can be seen, the errors in the INS outputs are accumulated, but at a lower rate due to the implementation of the Zero-velocity Update (ZUPT). Figures 8 and 9 show the estimates in the east and north directions produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1. Compared to INS, the UWB trajectory is close to the planned path. It is also noticed that the estimate produced by the dKF is not as accurate as that by the dKF for CMN with switch CFM, whose outputs are closer to the planned path. Figure 10 sketches the CDFs of the localization errors \({\mathrm{Pos}}\_{\mathrm{error}}\) produced by different estimators in test 1. It is clear that the proposed dKF for CMN with switch CFM produces the smallest errors compared to the INS, UWB, and dKF. Table 2 shows the localization results, which indicate that the dKF for CMN with switch CFM has the smallest errors.
Trajectories by plan or given by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in test 1
Estimates in the east direction produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
Estimates in the north direction produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
The CDFs of \({\varvec{Pos}}\_{\varvec{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 1
Table 2 Position RMSEs produced by the INS, UWB, dKF for CMN, and dKF for CMN with switch CFM in test 1
We employed another test to verify the performance of the proposed method. Trajectories planned or given by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 2 are shown in Fig. 11. One can see that the UWB trajectory is close to the planned path, unlike that of the INS. It is also noticed that the estimate produced by the dKF is not as accurate as that by the dKF for CMN with switch CFM, whose output is much closer to the planned path. The CDFs of \({\varvec{Pos}}\_{\varvec{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 2 are shown in Fig. 12. Thus, we conclude that the proposed dKF for CMN with switch CFM gives the smallest error (0.9), which is reduced by about 27.4% relative to the dKF (Table 3).
Trajectories by plan or given by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 2
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the INS, UWB, dKF, and dKF for CMN with switch CFM in test 2
The performances for the dKF for CMN with switch CFM and with constant CFM
We also compare the performance of the dKF for CMN with switch CFM with that of the dKF for CMN with constant CFM, using the trajectories given by the INS and UWB. The CDFs of \({\varvec{Pos}}\_{\varvec{error}}\) produced by the dKF for CMN with switch CFM and with constant CFM in test 1 and test 2 are shown in Figs. 13 and 14, respectively. To compare errors, we set \(\alpha ^{(j)}_{\mathrm{opt}}=0.15\), \(j\in {[}1,4{]}\), in the constant CFM for all data. From these figures, we infer that the performances of the dKFs for CMN with switch CFM and with constant CFM are similar in the tests. It should also be emphasized that in this work the constant CFM is set offline, using all the test data. This procedure is not designed for, and is not the best choice for, real-time applications. Compared with the dKF for CMN with constant CFM, the proposed dKF for CMN with switch CFM obtains the CFM adaptively, while its performance remains similar. Figures 13 and 14 also show that the dKF for CMN with switch CFM can perform online and its results are very close to the optimal ones.
The CDFs of \({\mathrm {Pos}}\_{\mathrm{error}}\) produced by the dKF for CMN with switch CFM and constant CFM in test 1
The dKF designed in this paper under CMN with switch CFM has demonstrated the ability to improve online the performance of the INS/UWB integrated human localization system in indoor environments. The effect was achieved by determining the optimal CFM through solving the minimization problem and modifying the KF-based fusion filter. Accordingly, the effect of the CMN has been essentially mitigated in the output of the main dKF. The tests demonstrated a better performance of the dKF for CMN with switch CFM over the dKF for CMN. They also show that the accuracy of the INS/UWB integrated human localization can be improved compared to the standard dKF.
The raw data were provided by University of Jinan, Jinan, China. The raw data used in this study are available from the corresponding author upon request.
Abbreviations
CDF: Cumulative distribution function
CFM: Colouredness factor matrix
CMN: Colored measurement noise
dKF: Distributed Kalman filter
EKF: Extended Kalman filter
IEKF: Iterated extended Kalman filter
IMU: Inertial measurement unit
INS: Inertial navigation system
KF: Kalman filter
RFID: Radio frequency identification
UKF: Unscented Kalman filter
UWB: Ultra wide band
UWB RN: UWB reference node
UWB BN: UWB blind node
ZUPT: Zero-velocity update
Chen, G., Meng, X., Wang, Y., Zhang, Y., Peng, T., & Yang, H. (2015). Integrated WiFi/PDR/Smartphone using an unscented Kalman filter algorithm for 3D indoor localization. Sensors, 15(9), 24595–24614.
Chen, N., Li, M., Yuan, H., Su, X., & Li, Y. (2020). Survey of pedestrian detection with occlusion. Complex and Intelligent Systems, 7, 1–11.
El-Sheimy, N., & Li, Y. (2021). Indoor navigation: State of the art and future trends. Satellite Navigation, 2, 7.
El-Sheimy, N., & Youssef, A. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1, 2.
Fu, Q., & Retscher, G. (2009). Active RFID trilateration and location fingerprinting based on RSSI for pedestrian navigation. Journal of Navigation, 62(2), 323–340.
Gu, Y., Song, Q., Li, Y., & Ma, M. (2015). Foot-mounted pedestrian navigation based on particle filter with an adaptive weight updating strategy. Journal of Navigation, 68(01), 23–38.
Hsu, Y. L., Wang, J. S., & Chang, C. W. (2017). A wearable inertial pedestrian navigation system with quaternion-based extended Kalman filter for pedestrian localization. IEEE Sensors Journal, 17(10), 3193–3206.
Li, R., Zheng, S., Wang, E., Chen, J., Feng, S., Wang, D., & Dai, L. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1, 12.
Li, Y., Zhuang, Y., Lan, H., Zhang, P., Niu, X., & El-Sheimy, N. (2016). Self-contained indoor pedestrian navigation using smartphone sensors and magnetic features. IEEE Sensors Journal, 16(19), 7173–7182.
Mahalanobis, P. C. (1936). On the generalised distance in statistics. Proceedings of the National Institute of Sciences, 2, 49–55.
Mosavi, M. R., & Shafiee, F. (2016). Narrowband interference suppression for GPS navigation using neural networks. GPS Solutions, 20(3), 341–351.
Norrdine, A., Kasmi, Z., & Blankenbach, J. (2016). Step detection for ZUPT-aided inertial pedestrian navigation system using foot-mounted permanent magnet. IEEE Sensors Journal, 16(17), 6766–6773.
Sekaran, J. F., Kaluvan, H., & Irudhayaraj, L. (2020). Modeling and analysis of GPS-GLONASS navigation for car like mobile robot. Journal of Electrical Engineering and Technology, 15(5), 927–935.
Shmaliy, Y. S., Zhao, S., & Ahn, C. K. (2020). Kalman and UFIR state estimation with coloured measurement noise using backward Euler method. IET Signal Process, 14(2), 64–71.
Tian, Q., Wang, I. K., & Salcic, Z. (2020). An INS and UWB fusion approach with adaptive ranging error mitigation for pedestrian tracking. IEEE Sensors Journal, 20(8), 4372–4381.
Tzitzis, A., Megalou, S., Siachalou, S., Emmanouil, T. G., Kehagias, A., Yioultsis, T. V., & Dimitriou, A. G. (2019). Localization of RFID tags by a moving robot, via phase unwrapping and non-linear optimization. IEEE Journal of Radio Frequency Identification, 3(4), 216–226.
Xu, Y., Ahn, C. K., Shmaliy, Y. S., Chen, X., & Bu, L. (2019). Indoor INS/UWB-based human localization with missing data utilizing predictive UFIR filtering. IEEE/CAA Journal of Automatica Sinica, 6(4), 952–960.
Xu, Y., Chen, X., & Li, Q. (2013). Autonomous integrated navigation for indoor robots utilizing on-line iterated extended Rauch–Tung–Striebel smoothing. Sensors, 13(12), 15937–15953.
Xu, Y., Li, Y., Ahn, C. K., & Chen, X. (2020). Seamless indoor pedestrian tracking by fusing INS and UWB measurements via LS-SVM assisted UFIR filter. Neurocomputing, 388, 301–308.
Xu, Y., Shmaliy, Y. S., Ahn, C. K., Shen, T., & Zhuang, Y. (2021). Tightly-coupled integration of INS and UWB using fixed-lag extended UFIR smoothing for quadrotor localization. IEEE Internet of Things Journal, 8(3), 1716–1727.
Xu, Y., Shmaliy, Y. S., Ahn, C. K., Tian, G., & Chen, X. (2018). Robust and accurate UWB-based indoor robot localisation using integrated EKF/EFIR filtering. IET Radar Sonar and Navigation, 12(7), 750–756.
Yu, K., Wen, K., Li, Y., Zhang, S., & Zhang, K. (2019). A novel NLOS mitigation algorithm for UWB localization in harsh indoor environments. IEEE Transactions on Vehicular Technology, 68(1), 686–699.
Zhang, Y., Tan, X., & Changsheng, Z. (2020). UWB/INS integrated pedestrian positioning for robust indoor environments. IEEE Sensors Journal, 20(23), 14401–14409.
Zhao, S., & Huang, B. (2020). Trial-and-error or avoiding a guess? Initialization of the Kalman filter. Automatica, 121, 109184.
Zhao, S., Shmaliy, Y. S., & Liu, F. (2016). Fast Kalman-like optimal unbiased FIR filtering with applications. IEEE Transactions on Signal Processing, 64(9), 2284–2297.
Zhuang, Y., & El-Sheimy, N. (2015). Tightly-coupled integration of WiFi and mems sensors on handheld devices for indoor pedestrian navigation. IEEE Sensors Journal, 16(1), 224–234.
Zhuang, Y., Hua, L., Wang, Q., Cao, Y., & Thompson, J. S. (2019a). Visible light positioning and navigation using noise measurement and mitigation. IEEE Transactions on Vehicular Technology, 68(11), 11094–11106.
Zhuang, Y., Wang, Q., Shi, M., Cao, P., Qi, L., & Yang, J. (2019b). Low-power centimeter-level localization for indoor mobile robots based on ensemble Kalman smoother using received signal strength. IEEE Internet of Things Journal, 6(4), 6513–6522.
Zhuang, Y., Yang, J., Qi, L., Li, Y., Cao, Y., & El-Sheimy, N. (2019c). A pervasive integration platform of low-cost MEMS sensors and wireless signals for indoor localization. IEEE Internet of Things Journal, 5(6), 4616–4631.
Yuan Xu hopes his wife Chen Fu will soon get well again.
This work is supported by NSFC Grant 61803175, Shandong Key R&D Program 2019JZZY021005 and Mexican Consejo Nacional de Ciencia y Tecnología Project A1-S-10287 Grant CB2017-2018.
School of Electrical Engineering, University of Jinan, Jinan, China
Yuan Xu & Jing Cao
Department of Electronics Engineering, Universidad de Guanajuato, Salamanca, Mexico
Yuriy S. Shmaliy
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China
Yuan Zhuang
Shandong Beiming Medical Technology Co., Ltd, Jinan, China
Yuan Xu
Jing Cao
We all conceived the idea and contributed to the writing of the paper. All authors read and approved the final manuscript.
Correspondence to Yuan Zhuang.
This article does not contain any studies with human participants performed by any of the authors.
Xu, Y., Cao, J., Shmaliy, Y.S. et al. Distributed Kalman filter for UWB/INS integrated pedestrian localization under colored measurement noise. Satell Navig 2, 22 (2021). https://doi.org/10.1186/s43020-021-00053-z
Distributed filtering
Human localization
Ubiquitous Positioning, Indoor Navigation and Location-Based Services | CommonCrawl |
Blind blur assessment of MRI images using parallel multiscale difference of Gaussian filters
Michael E. Osadebey1,
Marius Pedersen2 (ORCID: orcid.org/0000-0001-9797-5821),
Douglas L. Arnold3 &
Katrina E. Wendel-Mitoraj4
Rician noise, bias fields and blur are the common distortions that degrade MRI images during acquisition. Blur is unique in comparison to Rician noise and bias fields because it can be introduced into an image beyond the acquisition stage, such as during postacquisition processing and through the manifestation of pathological conditions. Most current blur assessment algorithms are designed and validated on consumer electronics such as television, video and mobile appliances. The few algorithms dedicated to medical images either require a reference image or incorporate a manual approach. For these reasons it is difficult to compare quality measures from different images and images with different contents. Furthermore, they will not be suitable in environments where large volumes of images are processed. In this report we propose a new blind blur assessment method for different types of MRI images and for different applications, including automated environments.
Two copies of the test image are generated. An edge map is extracted by separately convolving each copy of the test image with one of two parallel difference of Gaussian filters. At the start of the multiscale representation, the initial outputs of the filters are equal. In subsequent scales of the multiscale representation, each filter is tuned to different operating parameters over the same fixed range of Gaussian scales. The filters are termed low and high energy filters based on their characteristic of successively attenuating and highlighting edges, respectively, over the range of the multiscale representation. The quality score is predicted from the distance between the normalized means of the edge maps at the final output of the filters.
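As a rough illustration of the pipeline summarized above, the sketch below builds two parallel multiscale difference-of-Gaussian edge maps and scores blur from the distance between their normalized means. The scale range, the "low/high energy" tunings (modelled here as two different scale ratios), and the normalization are assumptions made only for illustration; they are not the operating parameters of the proposed method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_dog_map(img, sigmas, k):
    """Accumulate |DoG| responses of one filter branch over a fixed range of scales."""
    img = img.astype(float)
    acc = np.zeros_like(img)
    for s in sigmas:
        acc += np.abs(gaussian_filter(img, s) - gaussian_filter(img, k * s))
    return acc

def blur_score(img, sigmas=(1, 2, 4, 8), k_low=1.1, k_high=1.6):
    """Distance between the normalized means of two parallel DoG edge maps (assumed tunings)."""
    low = multiscale_dog_map(img, sigmas, k_low)    # "low energy" branch (illustrative ratio)
    high = multiscale_dog_map(img, sigmas, k_high)  # "high energy" branch (illustrative ratio)
    m_low = low.mean() / (low.max() + 1e-12)        # normalized mean of each edge map
    m_high = high.mean() / (high.max() + 1e-12)
    return abs(m_high - m_low)
```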
The proposed method was evaluated on cardiac and brain MRI images. Performance evaluation shows that the quality index has very good correlation with human perception and will be suitable for application in routine clinical practice and clinical research.
Magnetic resonance imaging (MRI) system signal is sensitive to motion, whereas patient motion is a common behaviour among subjects during brain and cardiac MRI acquisition sessions [1,2,3]. Major motion-related challenges include involuntary patient actions such as cardiac motion, respiratory motion and irregular heart beats. Other motion-related challenges include head motion and the movement of extremities. Steps taken to mitigate the effects of these motion-related challenges often require trade-offs between MRI system operating parameters [4,5,6]. There is a trade-off between high temporal resolution, which accounts for cardiac and respiratory motion, and large field of view, which amplifies distortions. Concern for the comfort of elderly patients, the unpredictable actions of very young children and the mentally unstable patients call for a compromise between signal-to-noise ratio, image resolution and length of scan time.
These challenges introduce distortions such as noise, bias fields and blur which limit the acquisition of high quality images. The focus of this report is on blur. Blur can be considered a unique type of distortion in comparison to noise and bias fields. Blurred boundaries are the consequence of the partial volume effect in regions where the boundary between two different tissues is not orthogonal to the image slice [7, 8]. Beyond the acquisition stage, blur can be introduced into an image as a result of postacquisition processing and the manifestation of pathological conditions. Reported MRI findings in patients with focal cortical dysplasia (FCD) include cortical thickening and blurring of the grey-white matter boundary [9, 10]. Postacquisition processing methods such as the Karhunen–Loeve transform and the use of linear filters for de-noising of cardiac and brain MRI images are known for the blurring of edges with the consequent loss of diagnostic information [11, 12].
Blur, like all distortion processes, is uniformly propagated throughout an image. However, the effect of blur is not uniformly distributed in MRI images because the human anatomy is structurally heterogeneous. Blur weakens the strength of edges which define the visibility of details within an image [13]. Blur erodes the texture features that characterize smoothly varying regions such as the cardiac ventricles and the brain white matter. It causes loss of sharpness in the high density of edges that describe the cortical grey matter region and reduces the contrast between the different anatomical structures [14].
Blur assessment is, and will continue to be, an active research area in the image processing community because the reliability of metrics derived from MRI images for the diagnostic evaluation of cardiac and neurological diseases depends, to a large extent, on edge information. Edge information is strongly related to the level of blur in an image. Several physiological parameters are based on edge-based metrics derived from MRI images, including cardiac ejection fraction, myocardial wall motion, blood flow velocity, myocardial perfusion, whole brain volume measurement, whole brain atrophy, white matter atrophy and cortical grey matter atrophy [15,16,17,18,19].
Most current blur assessment algorithms are designed for a general class of images, with a focus on consumer electronics such as digital cameras, televisions, video and mobile devices. Generally, these algorithms begin with the extraction of an edge map from the test image. A blur quality index is derived after the edge map is further analyzed in one or a combination of the spatial domain, the frequency domain or a multi-resolution decomposition. In this report we categorize current blur assessment methods into recent and earlier contributions. Recent contributions include the reports in [20,21,22,23,24]; earlier contributions include the reports in [25,26,27,28,29,30]. It is not possible to list all current contributions, but we describe the unique design features which distinguish the aforementioned recent contributions.
The concept of increased dynamic range was introduced in [20], where increasing the dynamic range of generated contrast maps significantly improves blur prediction. Another report measures the blurriness in an image from the steepness of a probability density function that models the histogram of discrete cosine transform coefficients of edge maps [21]. Color, edge and structural information are used to discriminate images with different levels of blur in [22]. Exact Zernike moments, which reflect human visual characteristics, were extracted from test images in [23] and combined with contrast information from the gradient magnitude to measure the level of blur in the image. Changes in structural information resulting from blurriness were encoded with orthogonal moments and a visual saliency model in [24].
One of the few contributions focused on medical images is edge sharpness assessment by parametric (ESAP) modeling [23]. Sharpness assessment in ESAP begins with manual selection of a region of interest from the edge map extracted from the test image. The intensity levels of edge pixels that are appropriate to describe edge sharpness are read and fitted with a sigmoid function, and a sharpness quality score is computed from the parameters of the sigmoid function. Another report is based on Moran statistics [31]. Moran statistics, originally proposed to estimate noise level, are a function of the spatial autocorrelation of mapped data [32]. The peak ratio of Moran's histogram quantifies the degree of image blurring, based on the notion that the amount of blurring is reflected in the ratio between the peak of Moran's histogram for the processed image and that of the original image.
The region-of-interest approach incorporated in ESAP is a novelty. However, the authors acknowledge that ESAP may not correlate with human visual perception. Furthermore, manual selection of the region of interest limits its application where large volumes of MRI data are processed. The versatility of the report based on Moran statistics is limited because it is a full-reference method.
In this report we propose a new approach to assess blur distortion in MRI images. The concept behind blur quality evaluation is the existence and persistence of edge information at different image resolutions [21]: across increasing Gaussian scales, edges in higher quality images persist more strongly than edges in lower quality images. Blur quality is derived from the relationship between three image features, and the proposed method incorporates human visual characteristics. The test image is simultaneously fed into two parallel difference of Gaussian (DOG) filters which operate with different parameters over a multiscale representation. The different parameters constrain one filter to successively attenuate edges and the other filter to highlight edges over the same fixed range of the multiscale representation. The image quality score variable is the distance between the features extracted from the outputs of the two filters at the end of the multiscale representation.
The next section describes the methods for our proposed quality assessment. The "Experiments" section describes the objective and subjective performance evaluation experiments of the proposed quality metric. Results from the experiments are displayed and discussed in the "Results" and "Discussion" sections, respectively. Challenges, limitations and future work are discussed in the "Challenges, limitations and future work" section, and the "Conclusion" section concludes this report.
The flow chart in Fig. 1 and the images in Fig. 2 explain the six sequential steps used to implement the proposed blur assessment method. Three symbols are used in the flowchart: diagonals, circles and rectangles. A diagonal represents a step in the implementation of the proposed method, the circles are the outputs of numeric computations and the rectangles are images. The black, brown, purple and blue rectangles are the original image TIM, the foreground image FRG, the rescaled original image RIM and the difference of Gaussian filtered images DoG1, DoG2, respectively.
Flow chart of the proposed method for the assessment of blur in MRI images
The implementation of blind blur assessment in MRI images. a The test image has its pixel intensity level rescaled to lie between 0 and 255. b Foreground of the test image in a is extracted. c The identical edge map from the initial parameters of the low and high energy difference of Gaussian filters. d The output image of the low energy filter at the conclusion of the multiscale representation. e The output image of the high energy filter at the conclusion of the multiscale representation. f The edge map extracted from the image in d. g The edge map extracted from the image in e. h Variation of image features from the output of the low and high energy filters at different Gaussian scales. i The predicted contrast, sharpness and total blur quality scores based on the analysis of the plot in h
Step 1: intensity standardization
The original image is rescaled (REX) to produce a new image \(I_{d}\), shown in Fig. 2a, with intensity levels that lie between 0 and 255. The algorithm standardizes the intensity of all test images by rescaling their intensity levels to lie within the same fixed range. Intensity standardization ensures the standardization of contrast measures, which makes it possible to compare predicted blur assessment indices for different images and images with different contents.
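As a rough illustration of this step (not the authors' implementation), the rescaling is a single affine mapping of the intensity range; the sketch below is in Python and the function and variable names are ours.

import numpy as np

def rescale_intensity(image, new_min=0.0, new_max=255.0):
    # Affinely map the intensity range of `image` onto [new_min, new_max].
    image = image.astype(np.float64)
    old_min, old_max = image.min(), image.max()
    if old_max == old_min:
        # Constant image: nothing to stretch, return the lower bound everywhere.
        return np.full_like(image, new_min)
    scaled = (image - old_min) / (old_max - old_min)
    return new_min + scaled * (new_max - new_min)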
Step 2: extraction of foreground
The foreground region shown in Fig. 2b is extracted (FRX) from the rescaled original image shown in Fig. 2a to determine the region covered by the anatomical structures within the image grid. The area of the foreground region is required to compute feature descriptors in subsequent steps of the algorithm. There are three successive stages within the foreground extraction step. The first is a global threshold set at the first moment of the rescaled original image; its output is a binary image. The global threshold is followed by a morphological hole filling operation on the binary image. After the hole filling operation there is a morphological cleaning operation on the same binary image, in which small regions \(\ge 800\) pixels that are unfilled by the hole filling operation are detected and eliminated.
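One plausible reading of the three stages (global threshold at the image mean, hole filling, removal of small unfilled regions) is sketched below with SciPy and scikit-image; the 800-pixel area threshold follows the text, but the exact morphological settings used by the authors are not specified, so this is only an approximation.

import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import remove_small_objects

def extract_foreground(rescaled, min_region=800):
    # Global threshold at the first moment (mean) of the rescaled image.
    mask = rescaled > rescaled.mean()
    # Morphological hole filling of the binary image.
    mask = binary_fill_holes(mask)
    # Morphological cleaning: drop small leftover components (area threshold assumed).
    mask = remove_small_objects(mask, min_size=min_region)
    return mask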
Step 3: compute image feature
The mean (mET) of the rescaled original image is computed (MEX) from the indices of the foreground pixels extracted in step 2.
Step 4: parallel multiscale DoG filtering
Two duplicate copies of the rescaled original image are generated. Each duplicate is separately and simultaneously convolved with one of two difference of Gaussian (DOG) filters. The filters, \(DoG_{(\sigma _{1},\sigma _{2})}(x,y)\) and \(DoG_{(\sigma _{3},\sigma _{4})}(x,y)\), are defined as:
$$\begin{aligned} DoG_{(\sigma _{1},\sigma _{2})}(x,y) &= G_{\sigma _{1}}(x,y) - G_{\sigma _{2}}(x,y) \\ DoG_{(\sigma _{3},\sigma _{4})}(x,y) &= G_{\sigma _{3}}(x,y) - G_{\sigma _{4}}(x,y) \end{aligned}$$
where \(\sigma _{1},\) \(\sigma _{2},\) \(\sigma _{3}\) and \(\sigma _{4}\) are the widths of the Gaussian filter \(G_{\sigma }(x,y):\)
$$\begin{aligned} G(x,y)=\frac{1}{\sqrt{2\pi }\sigma }\exp \left( -\frac{x^{2}+ y^{2}}{2\sigma ^{2}}\right) \end{aligned}$$
The motivation behind the use of the DOG filter is its efficient application in edge detection for feature enhancement, blob detection, face detection and quality evaluation [33,34,35,36]. The DOG filter was implemented using the MATLAB code available in [37, 38]. Human visual system characteristics are incorporated into the algorithm by tuning each DOG filter to different parameters and for multiscale representation (MSX) of the rescaled original image. The parameters \(\theta _{1},\) \(\theta _{2}\) for each filter are defined as:
$$\begin{aligned} \theta _{1} &= \{\sigma _{1}, (\sigma _{1} + \sigma _{2})\}, \quad \sigma _{1} = 1, \quad \sigma _{2}=\{1,2,3,\ldots ,L\} \\ \theta _{2} &= \{\sigma _{3}, (\sigma _{3} + \sigma _{4})\}, \quad \sigma _{3} = \{1,2,3,\ldots ,L\}, \quad \sigma _{4}=1 \end{aligned}$$
where L, the range of the multiscale representation, is defined as:
$$\begin{aligned} L=\frac{\sqrt{d1+d2}}{2} \end{aligned}$$
where d1 and d2 are the row and the column dimensions of the image, respectively. The outputs of the two filters at each scale of the multiscale representation are denoted DoG1 and DoG2 in the flow chart shown in Fig. 1.
Based on the parameter definitions in Eq. 3, the initial outputs of both filters are identical because their initial parameters are equal:
$$\begin{aligned} \theta _{1}=\theta _{2}=\{1, 2\}. \end{aligned}$$
The initial output from one of the filters is shown in Fig. 2c. At subsequent scales of the multiscale representation each filter is tuned to different parameters: the first filter successively increases the level of detail while the second filter successively attenuates it. Based on these characteristics the filters are referred to as the low and high energy DOG filters, respectively. The output from each filter at the end of the multiscale representation is displayed in Fig. 2d, e.
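A minimal sketch of the two filter banks, under our reading of Eqs. 1-4 (sigma_1 = sigma_4 = 1, with sigma_2 and sigma_3 swept from 1 to L): each scale is realized as the difference of two Gaussian-smoothed copies of the image. This is not the authors' MATLAB implementation from [37, 38].

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_response(image, sigma_small, sigma_large):
    # Difference of two Gaussian-smoothed copies of the image (Eq. 1).
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

def multiscale_dog(image):
    # Yield the two filter outputs at every scale of the multiscale representation.
    L = int(round(np.sqrt(image.shape[0] + image.shape[1]) / 2))  # Eq. 4
    for s in range(1, L + 1):
        out1 = dog_response(image, 1, 1 + s)  # theta_1 = {sigma_1, sigma_1 + sigma_2}
        out2 = dog_response(image, s, s + 1)  # theta_2 = {sigma_3, sigma_3 + sigma_4}
        yield s, out1, out2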
Step 5: extract edge map
At each scale of the multiscale representation an edge map is extracted from each filter, and the mean (mED1, mED2) of each edge map is computed (MEX) from the indices of the foreground pixels. The edge map is the local contrast feature image extracted from the output of the filter using local contrast filters. The edge maps extracted from the outputs (shown in Fig. 2d, e) of the two filters are displayed in Fig. 2f, g, respectively. The extracted edge information is sensitive to the size of the filter, and a heuristic approach was adopted to determine an appropriate filter size. During performance evaluation of the algorithm it was observed that \(3 \times 3\) and \(7 \times 7\) filters did not predict quality scores that correlate with subjective evaluation by human observers: the \(3 \times 3\) filter underestimates image quality while the \(7 \times 7\) filter overestimates it. We therefore recommend a fixed filter size of \(5 \times 5\) for images with dimensions of \(256 \times 256\) and \(512 \times 512.\)
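The paper does not specify the exact local contrast filter, so the sketch below uses one common choice, the local intensity range over a 5 x 5 window, together with the foreground-restricted mean used as the image feature; treat both as assumptions rather than the authors' implementation.

from scipy.ndimage import maximum_filter, minimum_filter

def local_contrast_map(image, size=5):
    # Local contrast taken as the intensity range inside a size x size window.
    return maximum_filter(image, size=size) - minimum_filter(image, size=size)

def foreground_mean(feature_image, foreground_mask):
    # First moment of the feature image restricted to the foreground pixels.
    return feature_image[foreground_mask].mean()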
Step 6: compute blur quality index
The level of blur is evaluated from the relationship between image features in the two DOG filters. The plot in Fig. 2h shows how the first moment of the edge map extracted from each DOG filter varies over the multiscale representation. The blue and red colored plots are for the first and second filters, respectively. Points A and B on the plot represent the first moment of each edge map at the conclusion of the multiscale representation, and the distance between A and B is D. The yellow colored plot represents the first moment of the rescaled original image, which serves as a reference for measuring the persistence of edges in each filter at different Gaussian scales. There are three image features of interest. The first is the mean of the rescaled original image \(\mu _{I_{d}}.\) The second and third are the first moments \(\mu _{C_{A}},\) \(\mu _{C_{B}}\) of the edge maps extracted from each filter at the conclusion of the multiscale representation.
The followings hold:
$$\begin{aligned}\mu _{C_{B}} & \le \mu _{C_{A}},\nonumber \\ \mu _{C_{A}} & \le \mu _{I_{d}}, \nonumber \\ \mu _{I_{d}} & \le (\mu _{I_{d}} + \mu _{C_{B}}). \end{aligned}$$
Hereafter, we analyze the plot of the multiscale representation and show that it can be used to predict (QSX) the quality index (Fig. 2i) for ideal, extremely degraded and real MRI images.
1 Ideal image
An ideal MRI image is piecewise constant [39], so its edge map is already optimized and there are no more details for the first DOG filter to highlight. At the end (point A in Fig. 2h) of the multiscale representation, the final output image \(I_{A}\) from the first DOG filter closely approximates the rescaled original image \(I_{d}.\) Therefore,
$$\begin{aligned} I_{A} &\approx I_{d}, \\ C_{A} &\approx I_{d}, \\ \mu _{C_{A}} &\approx \mu _{I_{d}}. \end{aligned}$$
The second DOG filter successively attenuates edges in the ideal image. At the end (point B in Fig. 2h) of the multiscale representation, there is almost complete depletion of details in the rescaled original image. Therefore,
$$\begin{aligned} \mu _{C_{B}}\approx 0. \end{aligned}$$
The distance \(D_{L_{1}}\) between the mean of the edge map at A and the mean of the edge map at B:
$$\begin{aligned} D_{L_{1}} \approx \Vert \mu _{C_{A}} - \mu _{C_{B}} \Vert = \mu _{C_{A}}. \end{aligned}$$
2 Extremely degraded image
An extremely degraded MRI image contains no details, or only very sparse details, so there is nothing left to highlight. At the end of the multiscale representation the first DOG filter replicates the extremely degraded image. Therefore,
$$\begin{aligned} \mu _{C_{A}}\approx 0. \end{aligned}$$
The second DOG filter completely erodes the sparse details in the extremely degraded image, so that \(\mu _{C_{B}}\approx 0\) as well.
The distance \(D_{L2}\) between the mean of the edge maps at the output of each filter is:
$$\begin{aligned} D_{L2} = \Vert \mu _{C_{A}} - \mu _{C_{B}} \Vert \approx 0. \end{aligned}$$
3 Real image
We postulate that the distance between the edge maps at the output of the low and the high energy filters is a useful variable for predicting the blur index of an MRI image. The quality index for a real MRI image will lie between the quality index of an extremely degraded image and the quality index of an ideal image:
$$\begin{aligned} 0 \le (D_{L1},D_{L2}) \le 1. \end{aligned}$$
The contrast between the edge and the non-edge regions in the rescaled original image is the contrast quality score. The contrast quality score \(q_{1}\) is determined by normalizing the distance \(D_{L}\) with the mean \(\mu _{I_{d}}\) of the image:
$$\begin{aligned} q_{1}=\frac{\Vert \mu _{C_{A}} - \mu _{C_{B}}\Vert }{\mu _{I_{d}}}, \quad (\mu _{C_{A}} - \mu _{C_{B}}) \le \mu _{I_{d}} \end{aligned}$$
where \((\mu _{C_{A}} - \mu _{C_{B}}) \le \mu _{I_{d}}\) expresses the condition for the validity of q1. The sharpness of the rescaled original image is the sharpness quality score.
The sharpness quality score \(q_{2}\) is determined by normalizing the distance \(D_{L}\) with the \((\mu _{I_{d}} + \mu _{C_{B}}){:}\)
$$\begin{aligned} q_{2}=\frac{\Vert \mu _{C_{A}} - \mu _{C_{B}}\Vert }{\mu _{I_{d}} + \mu _{C_{B}}}, \quad (\mu _{C_{A}} - \mu _{C_{B}}) \le (\mu _{I_{d}} + \mu _{C_{B}}) \end{aligned}$$
where \((\mu _{C_{A}} - \mu _{C_{B}}) \le (\mu _{I_{d}} + \mu _{C_{B}})\) expresses the condition for the validity of q2.
The total quality score Q is the average of the contrast and sharpness quality scores:
$$\begin{aligned} Q=\frac{q_{1}+ q_{2}}{2}. \end{aligned}$$
The difference between the computed contrast and sharpness quality scores arises from the second term \(\mu _{C_{B}}\) in the normalizing constant of Eq. 15. The choice of \(\mu _{I_{d}}\) and \((\mu _{I_{d}} + \mu _{C_{B}})\) as the normalizing constants in Eqs. 14 and 15 is based on the expression in Eq. 13; these constants ensure that the blur quality index is defined within a lower and upper limit \(\{0 \le (q_{1},q_{2}) \le 1\}.\)
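Given the three first moments, Eqs. 14-16 reduce to a few arithmetic operations; the sketch below mirrors them directly (the variable names are ours).

def blur_quality(mu_image, mu_A, mu_B):
    # mu_image: mean of the rescaled image over the foreground.
    # mu_A, mu_B: means of the edge maps at points A and B of the multiscale plot.
    distance = abs(mu_A - mu_B)
    q1 = distance / mu_image            # contrast quality score, Eq. 14
    q2 = distance / (mu_image + mu_B)   # sharpness quality score, Eq. 15
    Q = (q1 + q2) / 2.0                 # total quality score, Eq. 16
    return q1, q2, Q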
Performance evaluation of our proposed method was carried out using brain and cardiac MRI volume data. The brain MRI volume data were provided by NeuroRx research Inc. (https://www.neurorx.com), BrainCare Oy. (http://braincare.fi/) and the Alzheimer's disease neuroimaging initiative (ADNI) (http://www.adni.loni.usc.edu). The cardiac MRI volume data were short axis MRI provided by Department of Diagnostic Imaging of the Hospital for Sick Children in Toronto, Canada (http://www.sickkids.ca/DiagnosticImaging/index.html). The cardiac MRI data were originally used as test data in the report [40].
There are 1200 slices from 25 short axis cardiac MRI volume data. The dimension of each cardiac slice is \(256 \times 256\) pixels along the long axis. The slices of the brain MRI volume data are 500 T2, 250 T1 and 300 Fluid Attenuated Inversion Recovery (FLAIR) images. The brain MRI slices from NeuroRx and ADNI have dimension \(256 \times 256\) pixels. The data from BrainCare have dimension \(448 \times 390\) pixels.
The new blur assessment method was implemented in the MATLAB computing environment. Gaussian blur and motion blur at different levels were artificially induced on the test data. Gaussian blur was simulated by convolving a slice with a rotationally symmetric low pass filter of width w, \(\{w:3< w < 15\}\) pixels; the range of filter sizes was mapped to levels 1 to 15. Motion blur was induced on a slice by convolving it with a special filter which approximates the linear motion of a camera. The linear motion is described by two parameters, the linear distance in pixels and the angular distance in degrees; both parameters were scaled from 1 to 15 in unit steps.
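The distortions were generated in MATLAB; an approximate Python equivalent is sketched below for illustration only. The mapping from distortion level to Gaussian width and the horizontal-only motion kernel are our assumptions, not the authors' exact filters.

import numpy as np
from scipy.ndimage import gaussian_filter, convolve

def simulate_gaussian_blur(image, level):
    # Rotationally symmetric low-pass filtering; sigma grows with the level (assumed mapping).
    return gaussian_filter(image, sigma=level / 3.0)

def simulate_motion_blur(image, length):
    # Convolve with a horizontal line kernel that mimics linear camera motion.
    kernel = np.zeros((length, length))
    kernel[length // 2, :] = 1.0 / length
    return convolve(image, kernel, mode='reflect')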
The performance of our proposed method was evaluated objectively and validated subjectively in four categories of experiments: cardiac MRI and T2, conventional T1 and FLAIR brain MRI. Subjective evaluation was facilitated using QuickEval [41], a web-based tool for psychometric image evaluation provided by the Norwegian Colour and Visual Computing Laboratory (http://www.colourlab.no/quickeval) at the Norwegian University of Science and Technology, Gjovik, Norway. The observers were one radiologist and one medical imaging professional. Each observer assigned a score between 0 and 100, in steps of 1, to each slice; each score was divided by 100 to ensure that the subjective and objective scales lie in the same range. Each observer was first presented with an undistorted version of an MRI slice, followed by increasingly distorted versions of the original slice at distortion levels 5, 10 and 15. The mean opinion score (MOS) was used in the validation studies because it is popular and simple to implement [42]. The relationship between the score predicted by our proposed method and the subjective scores assigned by human observers was computed using the Spearman rank correlation coefficient [43].
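The agreement between the predicted scores and the rescaled MOS values can be computed directly with SciPy, for example:

from scipy.stats import spearmanr

def agreement(objective_scores, mos_scores):
    # Spearman rank correlation between objective and subjective quality scores.
    rho, p_value = spearmanr(objective_scores, mos_scores)
    return rho, p_value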
Objective evaluation
Brain MRI without perceived distortion
Six slices from a T2 weighted brain MRI volume are shown in Fig. 3a-f. The variation of image features (the means of the edge maps) at different Gaussian scales for the low and the high energy DOG filters is shown in Fig. 3g, h. The plot in Fig. 3i shows the contrast, sharpness and total quality scores for 15 slices from the MRI volume. The results show that slices from the same MRI volume have different levels of blur; the minimum and maximum blur levels are \(\approx 0.7\) and \(\approx 0.85,\) respectively.
Six slices with slice numbers a 1, b 4, c 8, d 11 , e 13 and f 15 from T2 brain MRI volume data from BrainCare. g Variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores of 15 slices from the MRI volume data
Cardiac MRI without perceived distortion
Six slices from a cardiac MRI volume are shown in Fig. 4a-f. Figure 4g, h shows the variation of image features at different Gaussian scales for the low and the high energy DOG filters. The plot in Fig. 4i shows the contrast, sharpness and total quality scores for 13 slices from the MRI volume. Blur levels in the cardiac slices contained in the same volume vary from 0.45 to 0.83.
Six slices with slice numbers a 1, b 3, c 5, d 7 , e 9 and f 11 from short axis MRI volume data from Department of Diagnostic Imaging of the Hospital for Sick Children in Toronto. g Variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores of 15 slices from the MRI volume data
The image in Fig. 5a is a slice from a FLAIR brain MRI volume. The images in Fig. 5b-f are the same image blurred with a Gaussian filter at levels 4, 7, 10, 13 and 15, respectively. For a given level of Gaussian blur, the variation of the image features at different Gaussian scales for the low and the high energy DOG filters is displayed in Fig. 5g, h, respectively. The contrast, sharpness and total quality scores for Gaussian blur levels from 0 to 15 are shown in Fig. 5i. In the absence of distortion the quality index is 0.65; increasing blurriness decreases it to 0.35 at blur level 15.
a FLAIR brain MRI slice from ADNI and its degraded versions at motion blur levels b 4, c 7, d 10, e 13 and f 15, g variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores for different levels of motion blur
Figure 6a is a slice from a cardiac MRI volume. The images in Fig. 6b-f are the same image blurred with a Gaussian filter at levels 4, 7, 10, 13 and 15, respectively. For a given level of Gaussian blur, the variation of the image features at different Gaussian scales for the low and the high energy DOG filters is displayed in Fig. 6g, h, respectively. The contrast, sharpness and total quality scores for Gaussian blur levels from 0 to 15 are shown in Fig. 6i. There is a decrease of ≈ 50% in the predicted quality index as the blur level increases from 0 to 15.
a Short axis cardiac MRI slice and its degraded versions at motion blur levels b 4, c 7, d 10, e 13 and f 15, g variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores for different levels of motion blur
Motion blur
Figure 7a is a conventional T1 weighted brain MRI slice. Its motion blurred versions at motion blur levels 4, 7, 10, 13 and 15 are shown in Fig. 7b-f, respectively. The variation of the image features at different Gaussian scales for the low and the high energy DOG filters is displayed in Fig. 7g, h, respectively. The contrast, sharpness and total quality scores for motion blur levels from 0 to 15 are displayed in Fig. 7i. The predicted quality score decreases from ≈ 0.6 to ≈ 0.15 as the motion blur level increases from 1 to 15.
a Conventional T1 brain MRI slice from NeuroRx and its degraded versions at Gaussian blur levels b 4, c 7, d 10, e 13 and f 15, g variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores for different levels of Gaussian blur
A slice from a cardiac MRI volume is shown in Fig. 8a. Its motion blurred versions at motion blur levels 4, 7, 10, 13 and 15 are shown in Fig. 8b-f, respectively. The variation of the image features at different Gaussian scales for the low and the high energy DOG filters is displayed in Fig. 8g, h. The contrast, sharpness and total quality scores for motion blur levels from 0 to 15 are displayed in Fig. 8i. The quality scores decrease from 0.6 to 0.3 as the motion blur level increases from 1 to 15.
a Short axis cardiac MRI slice and its degraded versions at Gaussian blur levels b 4, c 7, d 10, e 13 and f 15, g variation of image features of each slice from the output of the low energy Gaussian filter at different Gaussian scales. h Variation of image features of each slice from the output of the high energy Gaussian filter at different Gaussian scales. i Contrast, sharpness and total quality scores for different levels of Gaussian blur
Subjective validation
Results from the subjective evaluation of our proposed method are tabulated in Tables 1, 2, 3, 4, 5, 6, 7 and 8. Tables 1, 2, 3, and 4 are the results for cardiac, T2, conventional T1 and FLAIR brain MRI volume data degraded by motion blur. Corresponding results for degradation by Gaussian blur are displayed in Tables 5, 6, 7 and 8.
Table 1 Results from validation studies for short axis cardiac MRI volume data degraded by motion blur
Table 2 Results from validation studies for T2 brain MRI volume data degraded by motion blur
Table 3 Results from validation studies for conventional T1 brain MRI volume data degraded by motion blur
Table 4 Results from validation studies for FLAIR brain MRI volume data degraded by motion blur
Table 5 Results from validation studies for short axis cardiac MRI volume data degraded by Gaussian blur
Table 6 Results from validation studies for T2 brain MRI volume data degraded by Gaussian blur
Table 7 Results from validation studies for conventional T1 brain MRI volume data degraded by Gaussian blur
Table 8 Results from validation studies for FLAIR brain MRI volume data degraded by Gaussian blur
Tables 1, 2, 3 and 4 show that, as the motion blur level increases from 0 to 15, observer agreement decreases from 0.80 to 0.71, from 0.85 to 0.70, from 0.78 to 0.68 and from 0.75 to 0.70 for cardiac, T2, conventional T1 and FLAIR brain MRI volume data, respectively. The corresponding results for Gaussian blur, shown in Tables 5, 6, 7 and 8, are decreases from 0.80 to 0.65, from 0.85 to 0.70, from 0.78 to 0.70 and from 0.85 to 0.68.
Edge information is highly desired in medical images because it can potentially reveal details of the structures associated with normal anatomy and various pathological conditions [13]. The proposed blur assessment method predicts the level of blur distortion in an image by generating and analyzing an edge map.
An important characteristic of the proposed method is its standardized quality index, which lies between 0, the quality index of an extremely degraded image, and 1, the quality index of an ideal image. The standardized quality index makes the algorithm suitable for application in large clinical trials for evaluating and comparing MRI images acquired from different scanners and different clinical trial sites.
The results displayed in Figs. 3 and 4 demonstrate that the proposed algorithm can assess the variation in the level of blur across the different slices contained within an MRI volume. The criteria for the diagnosis of MS lesions include the presence of periventricular and juxtacortical lesions, which are located at the boundary between different brain tissues. The performance evaluation results show that our proposed method will be useful in clinical trials to assess the reliability of the edge information contained in MRI data.
The plots displayed in Figs. 3, 4, 5, 6, 7 and 8 show a general decrease in the contrast and sharpness quality scores for increasing levels of blur. This is clear evidence that our proposed method can fairly compare and discriminate images based on their levels of blur.
The subjective evaluation results shown in Tables 1, 2, 3, 4, 5, 6, 7 and 8 are evidence that the multiscale representation effectively incorporates HVS characteristics into our proposed method. In all categories of the experiment there is very good correlation between the objective scores predicted by our proposed method and the subjective scores assigned by human observers; the minimum and maximum correlation coefficients are 0.65 and 0.85, respectively.
Challenges, limitations and future work
Two major challenges may limit the accurate prediction of quality scores. The first is accurate segmentation of the foreground: inaccurate segmentation can result in wrong computation of image features such as the mean of the test image and the mean of the edge map, and if the foreground region is underestimated or overestimated the blur quality index will not correlate with the perceptual quality index. The second challenge is the sensitivity of the algorithm to the filter size. Future work will focus on how to optimize the filter size for different image dimensions. We also hope to incorporate a segmentation algorithm so that the method can output a blur assessment index for local regions within a slice; this approach will make the algorithm suitable for blur assessment in pathological conditions such as focal cortical dysplasia.
This report proposes a new approach to assess the blur level in an MRI image. The proposed method is based on the concept that the quality of an image can be measured from the existence and persistence of structural information at different Gaussian scales. The contrast and sharpness features in the image are extracted by simultaneously convolving the image with two multiscale difference of Gaussian filters, which extract edge information from the test image and also incorporate human visual system characteristics into the algorithm. The parameters of each difference of Gaussian filter are tuned to either highlight or erode edges. After the conclusion of the multiscale representation, the blur level is assessed from the difference between the contrast and sharpness quality features in the images at the output of each filter.
The proposed method was evaluated on cardiac and brain MRI images and validated subjectively using human observers. Performance evaluation shows that the proposed method addresses most of the drawbacks associated with current blur assessment methods for MRI images. The quality prediction, which lies between 0 and 1, makes it possible to compare quality scores for different images and images with different contents. The features extracted from the test image are first moments, which makes the algorithm computationally efficient. The blind nature of the proposed method, coupled with its computational efficiency, makes it suitable for automated environments and for applications such as clinical trials where large volumes of data are processed.
MRI: magnetic resonance imaging
DOG: difference of Gaussian
LCM: local contrast feature
ADNI: Alzheimer's disease neuroimaging initiative
FCD: focal cortical dysplasia
T2: transverse relaxation
T1: longitudinal relaxation
FLAIR: fluid attenuated inversion recovery
Axel L, Dougherty L. MR imaging of motion with spatial modulation of magnetization. Radiology. 1989;171(3):841–5.
Zeng LL, Wang D, Fox MD, Sabuncu M, Hu D, Ge M, Buckner RL, Liu H. Neurobiological basis of head motion in brain imaging. Proc Natl Acad Sci. 2014;111(16):6058–62.
El-Rewaidy H, Fahmy AS. Improved estimation of the cardiac global function using combined long and short axis MRI images of the heart. Biomed Eng Online. 2016;15(1):45.
Klinke V, Muzzarelli S, Lauriers N, Locca D, Vincenti G, Monney P, Lu C, Nothnagel D, Pilz G, Lombardi M. Quality assessment of cardiovascular magnetic resonance in the setting of the European CMR registry: description and validation of standardized criteria. J Cardiovasc Magn Reson. 2013;15(1):55.
Rajiah P, Bolen MA. Cardiovascular MR imaging at 3 T: opportunities, challenges, and solutions. Radiographics. 2014;34(6):1612–35.
Pizurica A, Philips W, Lemahieu I, Acheroy M. A versatile wavelet domain noise filtration technique for medical imaging. IEEE Trans Med Imaging. 2003;22(3):323–31.
Bigler ED. Neuroimaging I: basic science. New York: Springer; 2013.
Ahmad R, Ding Y, Simonetti OP. Edge sharpness assessment by parametric modeling: application to magnetic resonance imaging. Concepts Magn Reson Part A. 2015;44(3):138–49.
Blümcke I, Thom M, Aronica E, Armstrong DD, Vinters HV, Palmini A, Jacques TS, Avanzini G, Barkovich AJ, Battaglia G. The clinicopathologic spectrum of focal cortical dysplasias: a consensus classification proposed by an ad hoc task force of the ilae diagnostic methods commission. Epilepsia. 2011;52(1):158–74.
Blackmon K, Kuzniecky R, Barr WB, Snuderl M, Doyle W, Devinsky O, Thesen T. Cortical gray-white matter blurring and cognitive morbidity in focal cortical dysplasia. Cereb Cortex. 2014;25(9):2854–62.
Ding Y, Chung Y-C, Raman SV, Simonetti OP. Application of the Karhunen–Loeve transform temporal image filter to reduce noise in real-time cardiac cine MRI. Phys Med Biol. 2009;54(12):3909.
Baselice F, Ferraioli G, Pascazio V. A 3D MRI denoising algorithm based on Bayesian theory. Biomed Eng Online. 2017;16(1):25.
Sprawls P. Physical principles of medical imaging. New York: Aspen Publishers; 1987.
Osadebey M, Pedersen M, Arnold D, Wendel-Mitoraj K. Bayesian framework inspired no-reference region-of-interest quality measure for brain MRI images. J Med Imaging. 2017;4(2):025504.
Nakamura K, Guizard N, Fonov VS, Narayanan S, Collins DL, Arnold DL. Jacobian integration method increases the statistical power to measure gray matter atrophy in multiple sclerosis. NeuroImage Clin. 2014;4:10–7.
Jiang S, Zhang W, Wang Y, Chen Z. Brain extraction from cerebral mri volume using a hybrid level set based active contour neighborhood model. Biomed Eng Online. 2013;12(1):31.
Gusso S, Salvador C, Hofman P, Cutfield W, Baldi JC, Taberner A, Nielsen P. Design and testing of an MRI-compatible cycle ergometer for non-invasive cardiac assessments during exercise. Biomed Eng Online. 2012;11(1):13.
De Stefano N, Matthews P, Filippi M, Agosta F, De Luca M, Bartolozzi M, Guidi L, Ghezzi A, Montanari E, Cifelli A. Evidence of early cortical atrophy in ms relevance to white matter changes and disability. Neurology. 2003;60(7):1157–62.
Rispoli VC, Nielsen JF, Nayak KS, Carvalho JL. Computational fluid dynamics simulations of blood flow regularized by 3D phase contrast MRI. Biomed Eng Online. 2015;14(1):110.
Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. J Vis Commun Image Represent. 2018;50:145–58.
Kerouh F, Ziou D, Serir A. Histogram modelling-based no reference blur quality measure. Signal Process Image Commun. 2018;60:22–8.
Wang L, Wang C, Zhou X. Blind image quality assessment on Gaussian blur images. J inf Process Syst. 2017;13(3):448–63.
Lim C-L, Paramesran R, Jassim WA, Yu YP, Ngan KN. Blind image quality assessment for Gaussian blur images using exact Zernike moments and gradient magnitude. J Frankl Inst. 2016;353(17):4715–33.
Li L, Lin W, Wang X, Yang G, Bahrami K, Kot AC. No-reference image blur assessment based on discrete orthogonal moments. IEEE Trans Cybern. 2016;46(1):39–50.
Chen M-J, Bovik AC. No-reference image blur assessment using multiscale gradient. EURASIP J Image Video Process. 2011;2011(1):3.
Ferzli R, Karam LJ. A no-reference objective image sharpness metric based on the notion of just noticeable blur (JNB). IEEE Trans Image Process. 2009;18(4):717–28.
Wu S, Lin W, Xie S, Lu Z, Ong EP, Yao S. Blind blur assessment for vision-based applications. J Vis Commun Image Represent. 2009;20(4):231–41.
Li C, Yuan W, Bovik A, Wu X. No-reference blur index using blur comparisons. Electron Lett. 2011;47(17):962–3.
Ciancio A, Da Costa AT, Da Silva E, Said A, Samadani R, Obrador P. Objective no-reference image blur metric based on local phase coherence. Electron Lett. 2009;45(23):1162–3.
Bong DBL, Khoo BE. Blind image blur assessment by using valid reblur range and histogram shape difference. Signal Process Image Commun. 2014;29(6):699–710.
Chen TJ, Chuang KS, Chang JH, Shiao YH, Chuang CC. A blurring index for medical images. J Digit Imaging. 2006;19(2):118.
Chuang KS, Huang H. Assessment of noise in a digital image using the join-count statistic and the Moran test. Phys Med Biol. 1992;37(2):357.
Xu H, Lu C, Berendt R, Jha N, Mandal M. Automatic nuclei detection based on generalized Laplacian of Gaussian filters. IEEE J Biomed Health Inform. 2017;21(3):826–37.
Makanyanga J, Ganeshan B, Rodriguez-Justo M, Bhatnagar G, Groves A, Halligan S, Miles K, Taylor SA. MRI texture analysis (MRTA) of T2-weighted images in Crohn's disease may provide information on histological and MRI disease activity in patients undergoing ileal resection. Eur Radiol. 2017;27(2):589–97.
Wang S, Li W, Wang Y, Jiang Y, Jiang S, Zhao R. An improved difference of Gaussian filter in face recognition. J Multimed. 2012;7(6):429–33.
Simone G, Pedersen M, Farup I, Oleari C. Multi-level contrast filtering in image difference metrics. EURASIP J Image Video Process. 2013;2013(1):39.
Štruc V, Pavešic N. Photometric normalization techniques for illumination invariance. In: Zhang YJ, editor. Advances in face image analysis: techniques and technologies. Hershey: IGI Global; 2011. p. 279–300.
Štruc V, Pavešić N. Gabor-based kernel partial-least-squares discrimination features for face recognition. Informatica. 2009;20(1):115–38.
Zhang Y, Brady M, Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imaging. 2001;20(1):45–57.
Andreopoulos A, Tsotsos JK. Efficient and generalizable statistical models of shape and appearance for analysis of cardiac MRI. Med Image Anal. 2008;12(3):335–57.
Van Ngo K, Storvik JJ, Dokkeberg CA, Farup I, Pedersen M. Quickeval: a web application for psychometric scaling experiments. In: SPIE/IS&T electronic imaging. International Society for Optics and Photonics; 2015. p. 93960.
Reisenhofer R, Bosse S, Kutyniok G, Wiegand T. A haar wavelet-based perceptual similarity index for image quality assessment. Signal Process Image Commun. 2018;61:33–43.
Myers L, Sirois MJ. Spearman correlation coefficients, differences between. Wiley StatsRef: Statistics Reference Online; 2006.
MEO carried out the design and implementation of the proposed metric system. MP contributed to the technical development, analysis and interpretation of the results. DLA and KEM were involved in data analysis as well as interpretation of the experimental results. All authors have been involved in drafting and revising the manuscript and approved the final version to be published. All authors read and approved the final manuscript.
Michael Osadebey obtained his master's degree with distinction in biomedical engineering from Tampere University of Technology, Finland, in 2009. He was a Ragnar Granit research grant recipient from October 2009 to December 2009. Michael obtained his PhD in engineering and computer science from Concordia University, Montreal, Canada, in 2015. His PhD study was focused on the processing of MRI images of the brain. He is a MRI Reader at NeuroRx Research Inc. a Montreal-based clinical research organization (CRO). His duties at NeuroRx include application of advanced image analysis software in the reading of MRI data of neurological diseases patients undergoing clinical trial drug treatment.
Marius Pedersen received his BSc degree in computer engineering and MiT degree in media technology both from Gjovik University College, Norway, in 2006 and 2007, respectively. He completed his PhD program in color imaging from the University of Oslo, Norway, sponsored by Oce in 2011. He is currently employed as a professor at NTNU Gjovik, Norway. He is also the director of the Norwegian Colour and Visual Computing Laboratory (Colourlab). His work is centered on subjective and objective image quality.
Douglas Arnold is the director of Magnetic Resonance Spectroscopy Lab, McGill University, Montreal, Canada, and the president/CEO NeuroRx Research Inc., a Montreal-based CRO. He is a neurologist with special expertise in MRI. His personal research interests are centered on the use of advanced neuroimaging techniques to assess the pathological evolution of multiple sclerosis and Alzheimer's disease and to quantify the effects of therapy on these diseases.
Katrina Wendel-Mitoraj obtained her PhD in biomedical engineering from Tampere University of Technology in 2010. Her PhD study was focused on electroencephalography electrode sensitivity distributions. She is the CEO and founder of BrainCare Oy. BrainCare Oy is a Tampere University of Technology spin-off company founded in 2013 to deliver personalized solutions to improve the quality of life of epilepsy patients. The organization recently concluded clinical trials for an innovative mobile application and supporting solutions for long-term monitoring for epileptic patients.
Data collection and sharing for this project was, in part, funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense Award Number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann- La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; MesoScale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (http://www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
The data that support the findings of this study are available from NeuroRx research Inc., BrainCare Oy and the ADNI but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of the aforementioned organizations.
Marius Pedersen have been supported by the Research Council of Norway, Project No. 247689 'IQMED: Image Quality enhancement in MEDical diagnosis, monitoring and treatment'.
Michael E. Osadebey, Marius Pedersen, Douglas L. Arnold and Katrina E. Wendel-Mitoraj contributed equally to this work
NeuroRx Research Inc, Montreal, 3575 Parc Avenue, Suite # 5322, Montreal, QC, H2X 3P9, Canada
Michael E. Osadebey
Department of Computer Science, Norwegian University of Science and Technology, Teknologivegen 22, 2815, Gjovik, Norway
Marius Pedersen
Montreal Neurological Institute and Hospital, McGill University, 3801 University St, Montreal, QC, H3A 2B4, Canada
Douglas L. Arnold
BrainCare Oy, Finn-Medi 1 PL 2000, 33521, Tampere, Finland
Katrina E. Wendel-Mitoraj
Correspondence to Marius Pedersen.
Osadebey, M.E., Pedersen, M., Arnold, D.L. et al. Blind blur assessment of MRI images using parallel multiscale difference of Gaussian filters. BioMed Eng OnLine 17, 76 (2018). https://doi.org/10.1186/s12938-018-0514-4
Multi-scale representation
Local contrast feature image | CommonCrawl |
CLRS Solutions 22.2 Breadth-first search
Show the $d$ and $\pi$ values that result from running breadth-first search on the directed graph of Figure 22.2(a), using vertex $3$ as the source.
$$ \begin{array}{c|cccccc} \text{vertex} & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline d & \infty & 3 & 0 & 2 & 1 & 1 \\ \pi & \text{NIL} & 4 & \text{NIL} & 5 & 3 & 3 \end{array} $$
Show the $d$ and $\pi$ values that result from running breadth-first search on the undirected graph of Figure 22.3, using vertex $u$ as the source.
$$ \begin{array}{c|cccccc} \text{vertex} & r & s & t & u & v & w & x & y \\ \hline d & 4 & 3 & 1 & 0 & 5 & 2 & 1 & 1 \\ \pi & s & w & u & \text{NIL} & r & t & u & u \end{array} $$
Show that using a single bit to store each vertex color suffices by arguing that the $\text{BFS}$ procedure would produce the same result if lines 5 and 14 were removed.
The textbook introduces the $\text{GRAY}$ color for the pedagogical purpose of distinguishing the $\text{GRAY}$ vertices (which have been discovered and enqueued) from the $\text{BLACK}$ vertices (which have been dequeued and finished). The only color test the $\text{BFS}$ procedure ever performs is whether a neighbor is still $\text{WHITE}$; it never distinguishes $\text{GRAY}$ from $\text{BLACK}$, so recording a single bit per vertex ($\text{WHITE}$ versus non-$\text{WHITE}$) produces the same $d$ and $\pi$ values.

Therefore, it suffices to use a single bit to store each vertex color.
What is the running time of $\text{BFS}$ if we represent its input graph by an adjacency matrix and modify the algorithm to handle this form of input?
Iterating over the neighbors of each dequeued vertex now requires scanning an entire row of the adjacency matrix, so the total time spent exploring edges becomes $O(V^2)$ instead of $O(E)$. Therefore, the running time is $O(V + V^2) = O(V^2)$.
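A sketch of the modified procedure in Python: finding the neighbours of a dequeued vertex means scanning a whole row of the matrix, which is where the $O(V^2)$ comes from.

from collections import deque

def bfs_adjacency_matrix(adj, s):
    # BFS over an n x n adjacency matrix; total running time O(V^2).
    n = len(adj)
    dist = [float('inf')] * n
    dist[s] = 0
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in range(n):  # scanning the whole row costs Theta(V) per vertex
            if adj[u][v] and dist[v] == float('inf'):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist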
Argue that in a breadth-first search, the value $u.d$ assigned to a vertex $u$ is independent of the order in which the vertices appear in each adjacency list. Using Figure 22.3 as an example, show that the breadth-first tree computed by $\text{BFS}$ can depend on the ordering within adjacency lists.
First, we show that the value $d$ assigned to a vertex is independent of the order in which entries appear in the adjacency lists. To do this, we rely on Theorem 22.5, which proves the correctness of BFS; in particular, it gives $v.d = \delta(s, v)$ at the end of the procedure. Since $\delta(s, v)$ is a property of the underlying graph, it does not depend on which adjacency-list representation of the graph we choose, and therefore neither do the $d$ values.
Now, to show that $\pi$ does depend on the ordering of the adjacency lists, we will be using Figure 22.3 as a guide.
First, we note that in the given worked out procedure, we have that in the adjacency list for $w$, $t$ precedes $x$. Also, in the worked out procedure, we have that $u.\pi = t$.
Now, suppose instead that $x$ precedes $t$ in the adjacency list of $w$. Then $x$ is added to the queue before $t$, which means that $x$ claims $u$ as its child before we have a chance to process the children of $t$. This means that $u.\pi = x$ under this different ordering of the adjacency list for $w$.
Give an example of a directed graph $G = (V, E)$, a source vertex $s \in V$, and a set of tree edges $E_\pi \subseteq E$ such that for each vertex $v \in V$, the unique simple path in the graph $(V, E_\pi)$ from $s$ to $v$ is a shortest path in $G$, yet the set of edges $E_\pi$ cannot be produced by running $\text{BFS}$ on $G$, no matter how the vertices are ordered in each adjacency list.
Let $G$ be the graph shown in the first picture, $G_\pi = (V, E_\pi)$ be the graph shown in the second picture, and $s$ be the source vertex.
We could see that $E_\pi$ will never be produced by running BFS on $G$.
If $y$ precedes $v$ in $Adj[s]$, we dequeue $y$ before $v$, so $u.\pi$ and $x.\pi$ are both $y$. However, this is not the case in $E_\pi$.

If $v$ precedes $y$ in $Adj[s]$, we dequeue $v$ before $y$, so $u.\pi$ and $x.\pi$ are both $v$, which again does not match $E_\pi$.
Nonetheless, the unique simple path in $G_\pi$ from $s$ to any vertex is a shortest path in $G$.
There are two types of professional wrestlers: "babyfaces" ("good guys") and "heels" ("bad guys"). Between any pair of professional wrestlers, there may or may not be a rivalry. Suppose we have $n$ professional wrestlers and we have a list of $r$ pairs of wrestlers for which there are rivalries. Give an $O(n + r)$-time algorithm that determines whether it is possible to designate some of the wrestlers as babyfaces and the remainder as heels such that each rivalry is between a babyface and a heel. If it is possible to perform such a designation, your algorithm should produce it.
This problem is an obfuscated version of two-coloring, i.e. checking whether the rivalry graph is bipartite. We try to color the vertices of the rivalry graph with the two colors "babyface" and "heel"; requiring every rivalry to be between a babyface and a heel is exactly requiring the coloring to be proper. To two-color, we perform a breadth-first search of each connected component to get the $d$ value of each vertex, then give all vertices with odd $d$ one color, say "heel", and all vertices with even $d$ the other color. No other coloring can succeed where this one fails: in any proper coloring a vertex $v$ must receive the opposite color from its BFS parent $v.\pi$, because $v$ and $v.\pi$ are joined by an edge and their $d$ values have different parities, so within each component the coloring is forced up to swapping the two colors. Since no better coloring exists, we simply check each rivalry edge against this coloring: if every edge joins two differently colored wrestlers, we output the designation; if a single edge fails, no valid designation is possible. The BFS takes $O(n + r)$ time and the check takes $O(r)$, so the total runtime is $O(n + r)$.
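A minimal Python sketch of this designation procedure (BFS layering followed by the final edge check); the wrestler numbering and names are ours.

from collections import deque

def designate(n, rivalries):
    # Assign 'babyface'/'heel' to wrestlers 0..n-1, or return None if impossible.
    adj = [[] for _ in range(n)]
    for a, b in rivalries:
        adj[a].append(b)
        adj[b].append(a)
    side = [None] * n
    for s in range(n):  # BFS every connected component
        if side[s] is not None:
            continue
        side[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if side[v] is None:
                    side[v] = 1 - side[u]  # opposite parity of the BFS d value
                    queue.append(v)
    for a, b in rivalries:  # final O(r) check that the coloring is proper
        if side[a] == side[b]:
            return None
    return ['babyface' if x == 0 else 'heel' for x in side]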
The diameter of a tree $T = (V, E)$ is defined as $\max_{u,v \in V} \delta(u, v)$, that is, the largest of all shortest-path distances in the tree. Give an efficient algorithm to compute the diameter of a tree, and analyze the running time of your algorithm.
Suppose that a and b are the endpoints of the path in the tree which achieve the diameter, and without loss of generality assume that $a$ and $b$ are the unique pair which do so. Let $s$ be any vertex in $T$. We claim that the result of a single $\text{BFS}$ will return either $a$ or $b$ (or both) as the vertex whose distance from $s$ is greatest.
To see this, suppose to the contrary that some other vertex $x$ is shown to be furthest from $s$. (Note that $x$ cannot be on the path from $a$ to $b$, otherwise we could extend). Then we have
$$d(s, a) < d(s, x)$$
$$d(s, b) < d(s, x).$$
Let $c$ denote the vertex on the path from $a$ to $b$ which minimizes $d(s, c)$. Since the graph is in fact a tree, we must have
$$d(s, a) = d(s, c) + d(c, a)$$
$$d(s, b) = d(s, c) + d(c, b).$$
(If there were another path, we could form a cycle.) Using the triangle inequality together with the equalities and inequalities above, we must have
$$ \begin{aligned} d(a, b) + 2d(s, c) & = d(s, c) + d(c, b) + d(s, c) + d(c, a) \\ & < d(s, x) + d(s, c) + d(c, b). \end{aligned} $$
I claim that $d(x, b) = d(s, x) + d(s, b)$. If not, then by the triangle inequality we must have a strict less-than. In other words, there is some path from $x$ to $b$ which does not go through $c$. This gives the contradiction, because it implies there is a cycle formed by concatenating these paths. Then we have
$$d(a, b) < d(a, b) + 2d(s, c) < d(x, b).$$
Since $d(a, b)$ is assumed to be maximal among all pairs, we have a contradiction. Therefore, since a tree has $|V| - 1$ edges, we can run $\text{BFS}$ once, in $O(V)$ time, to obtain one endpoint of a longest simple path in the tree. Running $\text{BFS}$ again from that endpoint finds the other one, so we can solve the diameter problem for trees in $O(V)$.
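The resulting two-pass algorithm, as a short Python sketch (the adjacency-list format is our assumption):

from collections import deque

def tree_diameter(adj):
    # adj: dict mapping each vertex to a list of its neighbours in the tree.
    def bfs(source):
        dist = {source: 0}
        queue = deque([source])
        farthest = source
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
                    if dist[v] > dist[farthest]:
                        farthest = v
        return farthest, dist[farthest]
    a, _ = bfs(next(iter(adj)))   # first pass: one endpoint of a longest path
    _, diameter = bfs(a)          # second pass: distance to the farthest vertex from a
    return diameter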
Let $G = (V, E)$ be a connected, undirected graph. Give an $O(V + E)$-time algorithm to compute a path in $G$ that traverses each edge in $E$ exactly once in each direction. Describe how you can find your way out of a maze if you are given a large supply of pennies.
First, the algorithm computes a spanning tree of the graph. Note that this can be done using the procedures of Chapter 23, or by performing a breadth-first search and restricting to the edges between $v$ and $v.\pi$ for every $v$. To aid in not double-counting edges, fix any ordering $\le$ on the vertices beforehand. Then, we construct the sequence of steps by calling $\text{MAKE-PATH}(s)$, where $s$ is the root used for the $\text{BFS}$.
MAKE-PATH(u)
    for each v ∈ Adj[u] such that (u, v) is not a tree edge and u ≤ v
        go to v and back to u
    for each v ∈ Adj[u] such that v.π == u        // the tree children of u
        go to v
        perform the path prescribed by MAKE-PATH(v)   // this ends back at u
    go to u.π                                     // at the root s this final step is skipped
August & September 2019, 12(4&5): 703-710. doi: 10.3934/dcdss.2019044
Libration points in the restricted three-body problem: Euler angles, existence and stability
Hadia H. Selim 1, Juan L. G. Guirao 2 and Elbaz I. Abouelmagd 3,4
Celestial Mechanics Unit, Astronomy Department, National Research Institute of Astronomy and Geophysics (NRIAG), Helwan 11421, Cairo, Egypt
Departamento de Matemática Aplicada y Estadística, Universidad Politécnica de Cartagena, Hospital de Marina, 30203-Cartagena, Región de Murcia, Spain
Nonlinear Analysis and Applied Mathematics Research Group (NAAM), Mathematics Department, King Abdulaziz University, Jeddah, Saudi Arabia
* Corresponding author: Elbaz I. Abouelmagd
Received: May 2017. Revised: January 2018. Published: November 2018.
The objective of the present paper is to study analytically the existence and the stability of the libration points in the restricted three-body problem when the primaries are triaxial rigid bodies and the Euler angles of the rotational motion are equal to $\theta_i = \pi/2$, $\psi_i = 0$, $\varphi_i = \pi/2$, $i = 1, 2$. We prove that the locations and the stability of the triangular points change according to the effect of the triaxiality of the primaries. Moreover, the solution of long and short periodic orbits for stable motion is presented.
Keywords: Restricted three-body problem, triaxial rigid bodies, Euler angles, libration points, stability.
Mathematics Subject Classification (2010): 37N05, 70F15.
Citation: Hadia H. Selim, Juan L. G. Guirao, Elbaz I. Abouelmagd. Libration points in the restricted three-body problem: Euler angles, existence and stability. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 703-710. doi: 10.3934/dcdss.2019044
\begin{document}
\maketitle
\begin{abstract} The manifold which admits a genus-$2$ reducible Heegaard splitting is one of the $3$-sphere, $\mathbb{S}^2 \times \mathbb{S}^1$, lens spaces and their connected sums. For each of those manifolds except most lens spaces, the mapping class group of the genus-$2$ splitting was shown to be finitely presented. In this work, we study the remaining generic lens spaces, and show that the mapping class group of the genus-$2$ Heegaard splitting is finitely presented for any lens space by giving its explicit presentation. As an application, we show that the fundamental groups of the spaces of the genus-$2$ Heegaard splittings of lens spaces are all finitely presented. \end{abstract}
\section*{Introduction}
It is well known that every closed orientable $3$-manifold $M$ can be decomposed into two handlebodies $V$ and $W$ of the same genus $g$ for some $g \geq 0$. That is, $V \cup W = M$ and $V \cap W = \partial V = \partial W = \Sigma$, a genus-$g$ closed orientable surface. We call such a decomposition a {\it Heegaard splitting} for the manifold $M$ and denote it by $(V, W; \Sigma)$. The surface $\Sigma$ is called the {\it Heegaard surface} of the splitting, and the genus of $\Sigma$ is called the {\it genus} of the splitting. The $3$-sphere admits a Heegaard splitting of each genus $g \geq 0$, and a lens space a Heegaard splitting of each genus $g \geq 1$.
The {\it mapping class group of a Heegaard splitting} for a manifold is the group of isotopy classes of orientation-preserving diffeomorphisms of the manifold that preserve each of the two handlebodies of the splitting setwise. We call such a group the {\it Goeritz group} of the splitting, or a {\it genus-$g$ Goeritz group} when the splitting has genus $g$ as well. Further, when a manifold admits a unique Heegaard splitting of genus-$g$ up to isotopy, we call the Goeritz group of the splitting simply the {\it genus-$g$ Goeritz group} of the manifold without mentioning a specific splitting. We note that the mapping class group of a Heegaard splitting is a subgroup of the mapping class group of the Heegaard surface.
It is important to understand the structure of a Goeritz group, in particular, to find a finite generating set or a finite presentation of it if one exists. For example, using a finite presentation of the genus-$2$ Goeritz group of the $3$-sphere given in \cite{Ge}, \cite{Sc}, \cite{Ak} and \cite{C}, a new theory on the collection of the tunnel number-$1$ knots was constructed in \cite{CM09}. Further, it has been an open problem whether the fundamental groups of the spaces of genus-$2$ Heegaard splittings of lens spaces are finitely generated/presented or not, see \cite{JM13}. If the genus-$2$ Goeritz groups are shown to be finitely presented, then so are those fundamental groups.
In \cite{CK14} a finite presentation of the genus-$2$ Goeritz group of $\mathbb S^2 \times \mathbb S^1$ was obtained, and in \cite{CK15a} finite presentations of the genus-$2$ Goeritz groups were obtained for the connected sums whose summands are $\mathbb S^2 \times \mathbb S^1$ or lens spaces. We refer the reader to \cite{Joh10}, \cite{Joh11}, \cite{Sc13}, \cite{Kod} and \cite{CKA} for finite presentations or finite generating sets of the Goeritz groups of several Heegaard splittings and related topics.
For the genus-$2$ Goeritz groups of lens spaces, finite presentations are obtained only for a small class of lens spaces in \cite{C2} and \cite{CK15b}. That is, for the lens spaces $L(p, q)$, $1\leq q \leq p/2$, under the condition $p \equiv \pm 1 \pmod q$. In this work, we study the remaining generic lens spaces, the case of $p \not\equiv \pm 1 \pmod q$. We show that the genus-$2$ Goeritz group of each of those lens spaces is again finitely presented and obtain an explicit presentation, which is introduced in Theorem \ref{thm:presentations of the Goeritz groups for non-connected case} in Section \ref{sec:main_section}. The manifold which admits a genus-$2$ reducible Heegaard splitting is one of the $3$-sphere, $\mathbb S^2 \times \mathbb S^1$, lens spaces and their connected sums. Therefore, Theorem \ref{thm:presentations of the Goeritz groups for non-connected case} together with the previous results mentioned above implies the following.
\begin{theorem} The mapping class group of each of the reducible Heegaard splittings of genus-$2$ is finitely presented. \end{theorem}
In other words, the theorem says that the mapping class groups of genus-$2$ Heegaard splittings of Hempel distance $0$ are all finitely presented. It is shown in \cite{Nam07} and \cite{Joh11} that the mapping class groups are all finite for the Heegaard splittings of Hempel distance at least $4$. The mapping class groups of the splittings of Hempel distances $2$ and $3$ still remain mysterious. (Here note that there are no genus-$2$ splittings of Hempel distance $1$.)
To obtain a presentation of the Goeritz group, we have constructed a simply connected simplicial complex on which the group acts ``nicely'', in particular, so that the quotient of the action is a simple finite complex. And then we calculate the isotropy subgroups of each of the simplices of the quotient, and express the Goeritz group in terms of those subgroups.
For the genus-$2$ Heegaard splitting $(V, W; \Sigma)$ of a lens space $L(p, q)$ with $1\leq q \leq p/2$, we have constructed the {\it primitive disk complex}, denoted by $\mathcal P(V)$, whose vertices are defined to be the isotopy classes of the {\it primitive disks} in the handlebody $V$. In \cite{CK15b}, the combinatorial structure of the complex $\mathcal P(V)$ is fully studied and it was shown that $\mathcal P(V)$ is simply connected, in fact contractible, under the condition $p \equiv \pm 1 \pmod q$, and it is used to obtain the presentation of the Goeritz group. In the case of $p \not\equiv \pm 1 \pmod q$, the complex $\mathcal P(V)$ is no longer simply connected. In fact, it consists of infinitely many tree components isomorphic to each other. In the present paper, we will construct a new simplicial complex for this case, which we will call the {\it ``tree of trees''}, whose vertices are the tree components of $\mathcal P(V)$.
In Section \ref{sec:disk_complex}, we briefly review the primitive disk complex $\mathcal P(V)$ for the genus-$2$ Heegaard splitting of each lens space. In Section \ref{sec:tree_of_trees}, we construct the complex ``tree of trees'' for the case of $p \not\equiv \pm 1 \pmod q$ and develop some related properties that we need. In the main section, Section \ref{sec:main_section}, the action of the Goeritz group on the tree of trees will be investigated to obtain the presentation of the group. Right before Section \ref{sec:main_section}, the simplest example of our case $p \not\equiv \pm 1 \pmod q$, the lens space $L(12, 5)$, will be studied in detail in Section \ref{sec:first_example} as a motivating example. In the final section, we show that the fundamental groups of the spaces of genus-$2$ Heegaard splittings of lens spaces are all finitely presented (up to the Smale Conjecture for $L(2,1)$).
We use the standard notation $L = L(p, q)$ for a lens space. We refer the reader to \cite{Ro}. The integer $p$ can be assumed to be positive. It is well known that two lens spaces $L(p, q)$ and $L(p', q')$ are diffeomorphic if and only if $p = p'$ and $q'q^{\pm 1} \equiv \pm 1 \pmod p$. Thus, we will assume $1 \leq q \leq p/2$ for the lens space $L(p, q)$. Note that each lens space admits a unique Heegaard splitting of each genus $g \geq 1$ up to isotopy by \cite{BO83}. Throughout the paper, any disks in a handlebody are always assumed to be properly embedded, and their intersection is transverse and minimal up to isotopy. In particular, if a disk $D$ intersects a disk $E$, then $D \cap E$ is a collection of pairwise disjoint arcs that are properly embedded in both $D$ and $E$. For convenience, we will not distinguish disks (or unions of disks) and diffeomorphisms from their isotopy classes in their notation. Finally, $\operatorname{Nbd}(X)$ will denote a regular neighborhood of $X$ and $\operatorname{cl}(X)$ the closure of $X$ for a subspace $X$ of a space, where the ambient space will always be clear from the context.
\section{The primitive disk complexes} \label{sec:disk_complex}
\subsection{The non-separating disk complex for the genus-$2$ handlebody}
Let $V$ be a genus-$2$ handlebody. The {\it non-separating disk complex}, denoted by $\mathcal{D}(V)$, of $V$ is a simplicial complex whose vertices are the isotopy classes of non-separating disks in $V$ such that a collection of $k+1$ vertices spans a $k$-simplex if and only if it admits a collection of representative disks which are pairwise disjoint. We note that the disk complex $\mathcal{D}(V)$ is $2$-dimensional and every edge of $\mathcal D(V)$ is contained in countably infinitely many $2$-simplices. In \cite{McC}, it is proved that $\mathcal D(V)$ and the link of any vertex of $\mathcal D(V)$ are all contractible. Thus, the complex $\mathcal D(V)$ deformation retracts to a tree in its barycentric subdivision spanned by the barycenters of the $1$-simplices and $2$-simplices, which we call the {\it dual tree} of $\mathcal{D}(V)$. See Figure \ref{fig:disk_complex}. We note that each component of any full subcomplex of $\mathcal{D}(V)$ is contractible.
\begin{figure}
\caption{A portion of the non-separating disk complex $\mathcal D(V)$ of a genus-$2$ handlebody $V$ with its dual tree.}
\label{fig:disk_complex}
\end{figure}
Let $D$ and $E$ be non-separating disks in $V$ and suppose that the vertices of the disks in $\mathcal{D}(V)$, which we denote by $D$ and $E$ again, are not adjacent to each other, that is, $D \cap E \neq \emptyset$. In the barycentric subdivision of $\mathcal{D}(V)$, the links of the vertices $D$ and $E$ are disjoint trees. Then there exists a unique shortest path in the dual tree of $\mathcal D(V)$ connecting the two links. Let $v_1,$ $w_1$, $v_2$, $w_2, \ldots, w_{n-1}$, $v_n$ be the sequence of vertices of this path. We note that each $v_i$ is trivalent while each $w_i$ has infinite valency in the dual tree. Let $\Delta_1$, $\Delta_2, \ldots, \Delta_n$ be the 2-simplices of $\mathcal{D}(V)$ whose barycenters are the trivalent vertices $v_1$, $v_2, \ldots, $v_n$ respectively. We call the full subcomplex of $\mathcal{D}(V)$ spanned by the vertices of $\Delta_1$, $\Delta_2, \ldots, \Delta_n$ the {\it corridor} connecting $D$ and $E$ and we denote it by $\mathcal{C}_{\{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \}$. See Figure \ref{fig:corridor}. Let $E_*$ and $E_{**}$ be the two vertices of $\Delta_1$ other than $E$. We call the pair $\{ E_* , E_{**} \}$ the {\it principal pair} of $E$ with respect to $D$ for the corridor $\mathcal{C}_{\{ D, E \} }$.
\begin{center} \begin{overpic}[width=12cm,clip]{corridor.eps}
\linethickness{3pt}
\put(0, 52){$D$}
\put(54, 94){$\widetilde{D}$}
\put(280, 94){$E_\ast$}
\put(280, 10){$E_{\ast\ast}$}
\put(335, 52){$E$}
\put(295, 44){$\Delta_1$}
\put(265, 30){$\Delta_2$}
\put(220, 30){$\Delta_3$}
\put(210, 75){$\Delta_4$}
\put(165, 30){$\Delta_5$}
\put(145, 75){$\Delta_6$}
\put(95, 75){$\Delta_7$}
\put(80, 30){$\Delta_8$}
\put(35, 44){$\Delta_9$} \end{overpic} \captionof{figure}{The corridor connecting $D$ and $E$.} \label{fig:corridor} \end{center}
Let $D$ and $E$ be non-separating disks in $V$. We assume that $D$ intersects $E$ transversely and minimally. Let $C$ be an {\it outermost subdisk} of $D$ cut off by $D \cap E$, that is, $C$ is a disk cut off from $D$ by an arc $\alpha$ of $D \cap E$ in $D$ such that $C \cap E= \alpha$. The arc $\alpha$ cuts $E$ into two disks, say $F_1$ and $F_2$. Then we have two disjoint disks $E_1$ and $E_2$ which are isotopic to the disks $F_1 \cup C$ and $F_2 \cup C$ respectively. We note that each of $E_1$ and $E_2$ is isotopic to neither $E$ nor $D$, and each of $E_1$ and $E_2$ has fewer arcs of intersection with $D$ than $E$ had since at least the arc $\alpha$ no longer counts, as $D$ and $E$ are assumed to intersect minimally. Further, it is easy to check that both $E_1$ and $E_2$ are non-separating, and these two disks are determined without depending on the choice of the outermost subdisk of $D$ cut off by $D \cap E$ (this is a special property of a genus-$2$ handlebody). We call the disks $E_1$ and $E_2$ the {\it disks from surgery} on $E$ along the outermost subdisk $C$ or simply the {\it disks from surgery} on $E$ along $D$.
Let $D$, $E$ and $E_0$ be non-separating disks in $V$. Assume that $E$ and $E_0$ are non-separating disks in $V$ which are disjoint and are not isotopic to each other, and that $D$ intersects $E \cup E_0$ transversely and minimally. In the same way to the above, we can consider the surgery on $E \cup E_0$ along an outermost subdisk of $D$ cut off by $D \cap (E \cup E_0)$. In fact, one of the two resulting disks from the surgery is isotopic to either $E$ or $E_0$, and the other, denoted by $E_1$, is isotopic to none of $E$ and $E_0$ (this is also a special property of a genus-$2$ handlebody). We call the disk $E_1$ the {\it disk from surgery} on $E \cup E_0$ along $D$. (If $D$ is already disjoint from $E \cup E_0$, then define simply $E_1$ to be $D$.)
\begin{lemma} \label{lem:principal pair of a corridor} Let $\mathcal{C}_{\{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \}$ be the corridor connecting $D$ and $E$. Then the disks of the principal pair $\{ E_*, E_{**}\}$ are exactly the disks from surgery on $E$ along $D$. \end{lemma} \begin{proof} We use induction on the number $n \geq 2$ of the $2$-simplices of the corridor. If $n = 2$, the conclusion holds immediately since each of $D$ and $E$ is disjoint from $E_* \cup E_{**}$, and so any outermost subdisk of $D$ cut off by $D \cap E$ is also disjoint from $E_* \cup E_{**}$. If $n \geq 3$, choose a vertex, say $\widetilde{D}$, of $\Delta_n$ other than $D$ that is not adjacent to $E$. Then we have the (sub)corridor $\mathcal{C}_{\{ \widetilde{D}, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_k \}$ connecting $\widetilde{D}$ and $E$ for some $k < n$. See Figure \ref{fig:corridor}. By the induction hypothesis, the disks $E_*$ and $E_{**}$ are exactly the disks from surgery on $E$ along $\widetilde{D}$. Since $D$ is disjoint from $\widetilde{D}$, the disks from surgery on $E$ along $D$ are the same as those from surgery on $E$ along $\widetilde{D}$. \end{proof}
Consider any two consecutive $2$-simplices $\Delta_k$ and $\Delta_{k+1}$, $k \in \{1, 2, \ldots, n-1\}$, of a corridor $\mathcal{C}_{\{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \}$ connecting $D$ and $E$. When we write $\Delta_k = \{D_0, D_1, D_2\}$ and $\Delta_{k+1} = \{D_1, D_2, D_3\}$ as triples of vertices, we see that $\{D_1, D_2\}$ are the principal pair of $D_0$ with respect to $D$ for the (sub)corridor $\mathcal{C}_{\{ D, D_0 \} } = \{ \Delta_k , \Delta_{k+1} , \ldots, \Delta_n \}$. Further, the followings are immediate from Lemma \ref{lem:principal pair of a corridor}. \begin{itemize} \item $D_1$ and $D_2$ are the disks from surgery on $D_0$ along $D$. \item $D_3$ is the disk from surgery on $D_1 \cup D_2$ along $D$. \end{itemize} This observation implies the following lemma.
\begin{lemma} \label{lem:two corridors} Let $\mathcal{C}_{\{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \}$ be the corridor connecting $D$ and $E$. For each $k \in \{1, 2, \ldots, n-1\}$, we write the edge $\Delta_k \cap \Delta_{k+1} = \{D_k, D'_k\}$ and the $2$-simplex $\Delta_{k+1} = \{D_{k+1}, D_k, D'_k\}$. Let $F$ be a vertex that is not adjacent to $E$. If $D_{k+1}$ is the disk from surgery on $D_k \cup D'_k$ along $F$ for each $k$, then the corridor $\mathcal{C}_{\{ F, E \} }$ connecting $F$ and $E$ contains the corridor $\mathcal{C}_{\{ D, E \} }$. \end{lemma}
\subsection{The primitive disk complexes for lens spaces}
Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of a lens space $L = L(p, q)$. A disk $E$ properly embedded in $V$ is said to be {\it primitive} if there exists a disk $E'$ properly embedded in $W$ such that the two loops $\partial E$ and $\partial E' $ intersect transversely in a single point. Such a disk $E'$ is called a {\it dual disk} of $E$, which is also primitive in $W$ having a dual disk $E$. Note that both $W \cup \operatorname{Nbd}(E)$ and $V \cup \operatorname{Nbd}(E')$ are solid tori. Primitive disks are necessarily non-separating.
The {\it primitive disk complex} $\mathcal P(V)$ for the genus-$2$ splitting $(V, W; \Sigma)$ is defined to be the full subcomplex of $\mathcal D(V)$ spanned by the primitive disks in $V$. If a genus-$2$ Heegaard splitting admits primitive disks, then the manifold is one of the $3$-sphere, $\mathbb S^2 \times \mathbb S^1$ or a lens space, and so we can define the primitive disk complex for each of those manifolds. The combinatorial structure of the primitive disk complexes for each of the $3$-sphere and $\mathbb S^2 \times \mathbb S^1$ has been well understood in \cite{C} and \cite{CK14}. For the lens spaces, we have the following results from \cite{CK15b}.
\begin{theorem}[Theorems 4.2 and 4.5 in \cite{CK15b}] \label{thm:contractibility} For a lens space $L(p, q)$ with $1 \leq q \leq p/2$, the primitive disk complex $\mathcal P(V)$ for the genus-$2$ Heegaard splitting $(V, W; \Sigma)$ of $L(p, q)$ is contractible if and only if $p \equiv \pm 1 \pmod{q}$. If $p \not\equiv \pm 1 \pmod q$, then $\mathcal P(V)$ is not connected and consists of infinitely many tree components. \end{theorem}
In the case of $p \not\equiv \pm 1 \pmod q$, each vertex of any tree component of $\mathcal P(V)$ has infinite valency, that is, for each primitive disk $D$ in $V$ there exist infinitely many non-isotopic primitive disks disjoint from $D$. Thus, all the tree components of $\mathcal P(V)$ are isomorphic to each other.
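To illustrate the dichotomy, for $L(7, 3)$ we have $7 \equiv 1 \pmod 3$, and hence $\mathcal P(V)$ is contractible, whereas for $L(12, 5)$ we have $12 \equiv 2 \pmod 5$, so $\mathcal P(V)$ is disconnected; the latter is the motivating example that will be studied in Section \ref{sec:first_example}.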
\section{The tree of trees} \label{sec:tree_of_trees}
\subsection{The primitive disks} Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of a lens space $L = L(p, q)$. In this subsection, we will develop several properties of the primitive disks in $V$ and $W$ we need, in particular, some sufficient conditions for the non-primitiveness. Each simple closed curve on the boundary of the genus-$2$ handlebody $W$ represents an element of the free group $\pi_1 (W)$ of rank 2. The following is a well known fact.
\begin{lemma}[Gordon \cite{Go}] Let $D$ be a non-separating disk in $V$. Then $D$ is primitive if and only if $\partial D$ represents a primitive element of $\pi_1 (W)$. \label{lem:primitive_element} \end{lemma}
Here, an element of a free group is said to be {\it primitive} if it is a member of a free generating set (a basis) of the group. Primitive elements of the rank-$2$ free group have been well understood. In particular, we have the following property.
\begin{lemma}[Osborne-Zieschang \cite{OZ}] Given a generating pair $\{x, y\}$ of the free group $\mathbb Z \ast \mathbb Z$ of rank $2$, a cyclically reduced form of any primitive element can be written as a product of terms each of the form $x^\epsilon y^n$ or $x^\epsilon y^{n+1}$, or else a product of terms each of the form $y^\epsilon x^n$ or $y^\epsilon x^{n+1}$, for some $\epsilon \in \{1,-1\}$ and some $n \in \mathbb Z$. \label{lem:property of primitive elements} \end{lemma}
Therefore, we see that no cyclically reduced form of a primitive element in terms of $x$ and $y$ can contain $x$ and $x^{-1}$ $($and $y$ and $y^{-1})$ simultaneously.
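For instance, $xy^2$ is a primitive element since $\{xy^2, y\}$ is a generating pair, while a cyclically reduced word such as $xyx^{-1}y$ can never be primitive because it contains both $x$ and $x^{-1}$.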
Let $\{E'_1, E'_2\}$ be a complete meridian system of the genus-$2$ handlebody $W$. Assign symbols $x$ and $y$ to the oriented circles $\partial E_1'$ and $\partial E'_2$ respectively. Then any oriented simple closed curve on $\partial W$ intersecting $\partial E_1' \cup \partial E'_2$ transversely and minimally represents an element of the free group $\pi_1 (W) = \langle x, y \rangle$, whose word in $\{x^{\pm 1}, y^{\pm 1} \}$ can be read off from the intersections with $\partial E_1'$ and $\partial E'_2$. Let $l$ be an oriented simple closed curve on $\partial W$ that meets $\partial E_1' \cup \partial E'_2$ transversely and minimally. The following lemma is given in \cite{CK15b}.
\begin{lemma}[Lemma 3.3 in \cite{CK15b}] With a suitable choice of orientations of $\partial E_1'$ and $\partial E_2'$, if a word in $\{x^{\pm 1}, y^{\pm 1} \}$ corresponding to $l$ contains one of the pairs of terms$:$ \begin{enumerate} \item both of $xy$ and $xy^{-1}$, or \item both of $xy^nx$ and $y^{n+2}$ for $n \geq 0$, \end{enumerate} then the element of $\pi_1 (W)$ represented by $l$ cannot be $($a positive power of $)$ a primitive element. \label{lem:property_of_primitive_elements2} \end{lemma}
We introduce three more sufficient conditions for non-primitiveness as follows.
\begin{lemma} With a suitable choice of orientations of $\partial E_1'$ and $\partial E_2'$, if a word in $\{x^{\pm 1}, y^{\pm 1} \}$ corresponding to $l$ contains one of the pairs of terms$:$ \begin{enumerate} \item both of $xy$ and $(xy^{-1})^{\pm 1}$, \item both of $xy$ and $(x^{-1}y)^{\pm 1}$, or \item both of $x^2$ and $y^2$, \end{enumerate} then the element of $\pi_1 (W)$ represented by $l$ cannot be $($a positive power of $)$ a primitive element. \label{lem:key} \end{lemma} \begin{proof} Let $\Sigma'$ be the $4$-holed sphere cut off from $\partial W$ along $\partial E'_1 \cup \partial E'_2$. Denote by ${e_1'}^+$ and ${e_1'}^-$ (by ${e_2'}^+$ and ${e_2'}^-$, respectively) the boundary circles of $\Sigma'$ coming from $\partial E_1'$ (from $\partial E_2'$, respectively).
Suppose first that $l$ determines a word containing both $xy$ and $(xy^{-1})^{\pm 1}$. We can assume that there are two subarcs $\alpha_+$ and $\alpha_-$ of $l \cap \Sigma'$ such that $\alpha_+$ connects ${e_1'}^+$ and ${e_2'}^-$, and $\alpha_-$ connects ${e_1'}^+$ and ${e_2'}^+$ as in Figure \ref{fig:arcs_1}.
\begin{center} \begin{overpic}[width=3.5cm,clip]{arcs_1.eps} \linethickness{3pt} \put(-2, 80){${e_1'}^+$} \put(-2, 3){${e_1'}^-$} \put(90, 80){${e_2'}^-$} \put(90, 3){${e_2'}^+$} \put(33, 64){\color{red} \small $\alpha_-$} \put(45, 80){\color{red} \small $\alpha_+$} \put(33, 24){\color{red} \small $\beta_-$} \put(45, 11){\color{red} \small $\beta_+$} \end{overpic} \captionof{figure}{The arcs $\alpha_\pm$ and $\beta_\pm$ on $\Sigma'$.} \label{fig:arcs_1} \end{center}
Since $|~l \cap {e_1'}^+| = |~l \cap {e_1'}^-|$ and $|~l \cap {e_2'}^+| = |~l \cap {e_2'}^-|$, we must have two other arcs $\beta_+$ and $\beta_-$ of $l \cap \Sigma'$ such that $\beta_+$ connects ${e_1'}^-$ and ${e_2'}^+$, and $\beta_-$ connects ${e_1'}^-$ and ${e_2'}^-$. See Figure \ref{fig:arcs_1}. Consequently, there exists no arc component of $l \cap \Sigma'$ whose endpoints lie on the same boundary component of $\Sigma'$. That is, any word corresponding to $l$ contains neither $x^{\pm1} x^{\mp1}$ nor $y^{\pm1} y^{\mp1}$, and hence, it is cyclically reduced.
Since that word contains both $x$ and $x^{-1}$ (or $y$ and $y^{-1}$), $l$ cannot represent (a positive power of) a primitive element of $\pi_1 (W)$. The case where $l$ determines a word containing both $xy$ and $(x^{-1}y)^{\pm 1}$ can be proved in the same way.
Next suppose that $l$ determines a word containing both $x^2$ and $y^2$. Then there are two arcs $\alpha_+$ and $\alpha_-$ of $l \cap \Sigma'$ such that $\alpha_+$ connects ${e_1'}^+$ and ${e_1'}^-$, and $\alpha_-$ connects ${e_2'}^+$ and ${e_2'}^-$. By a similar argument to the above, we must have two other arcs $\beta_+$ and $\beta_-$ of $l \cap \Sigma'$ such that $\beta_+$ connects ${e_1'}^+$ and ${e_2'}^\pm$, say ${e_2'}^-$, and $\beta_-$ connects ${e_1'}^-$ and ${e_2'}^+$. See Figure \ref{fig:arcs_2}. \begin{center} \begin{overpic}[width=3.5cm,clip]{arcs_2.eps} \linethickness{3pt} \put(-2, 80){${e_1'}^+$} \put(-2, 3){${e_1'}^-$} \put(90, 80){${e_2'}^-$} \put(90, 3){${e_2'}^+$} \put(8, 54){\color{red} \small $\alpha_+$} \put(82, 54){\color{red} \small $\alpha_-$} \put(45, 77){\color{red} \small $\beta_+$} \put(45, 10){\color{red} \small $\beta_-$} \end{overpic} \captionof{figure}{The arcs $\alpha_\pm$ and $\beta_\pm$ on $\Sigma'$.} \label{fig:arcs_2} \end{center} Then we see again that the word corresponding to $l$ is cyclically reduced. Since that word contains both of $x^2$ and $y^2$, $l$ cannot represent (a positive power of) a primitive element. \end{proof}
\begin{lemma} With a suitable choice of orientations of $\partial E_1'$ and $\partial E_2'$, if a word in $\{x^{\pm 1}, y^{\pm 1} \}$ corresponding to $l$ contains a term of the form $x y^{\epsilon_{1}} y^{\epsilon_{2}} \cdots y^{\epsilon_{l}} x^{-1}$, where $\epsilon_{i} = \pm 1$ $(i=1, 2, \ldots, l)$ and $\sum_{i=1}^{l} \epsilon_{i} \neq 0$, then the element of $\pi_1 (W)$ represented by $l$ cannot be $($a positive power of $)$ a primitive element. \label{lem:key2} \end{lemma} \begin{proof} Set $w = x y^{\epsilon_{1}} y^{\epsilon_{2}} \cdots y^{\epsilon_{l}} x^{-1}$. Let $\alpha$ be the subarc of $l$ corresponding to the subword $w$. By cutting the Heegaard surface $\Sigma$ along $\partial E_1' \cup \partial E_2' $, we get a 4-holed sphere $\Sigma'$. Let $e'^\pm_{1}$ and $e'^\pm_{2}$ be the holes coming from $E_1'$ and $E_2'$ respectively. Without loss of generality we can assume that $\epsilon_1 =1$. If $\epsilon_l =1$, then we get the conclusion by Lemma \ref{lem:key}. Thus, we assume that $\epsilon_l =-1$; this implies that the word $w$ contains the term $y y^{-1}$. Let $\alpha_0$, $\alpha_1$, $\alpha_2$ be the subarcs of $\alpha$ corresponding to the terms $xy$, $y y^{-1}$, $y^{-1} x^{-1}$ respectively. Note that on the surface $\Sigma'$ the arc $\alpha_0$ connects the two circles $e'^+_1$ and $e'^-_2$, $\alpha_1$ connects the circle $e'^-_2$ to itself, and $\alpha_2$ connects the two circles $e'^+_2$ and $e'^-_1$. Let $E^*$ be the band sum of $E'_1$ and $E'_2$ along $\alpha_0$, that is, $E^*$ is the frontier of a regular neighborhood of the union $E'_1 \cup \alpha_0 \cup E'_2$. Then $E_1'$ and $E^*$ form a new complete meridian system of $W$. See Figure \ref{fig:arcs_6}. \begin{center} \begin{overpic}[width=4cm,clip]{arcs_6.eps} \linethickness{3pt} \put(0, 106){$e'^+_1$} \put(0, 3){$e'^-_1$} \put(100, 3){$e'^+_2$} \put(100, 106){$e'^-_2$} \put(42, 104){\color{red} \small $\alpha_0$} \put(35, 14){\color{red} \small $\alpha_1$} \put(115, 70){\color{red} \small $\alpha_2$} \put(52, 53){\color{blue} \small $E^*$} \end{overpic} \captionof{figure}{The arcs $\Sigma' \cap \alpha$ and the disk $E^*$.} \label{fig:arcs_6} \end{center} We assign the same symbol $y$ to $\partial E^*$. Then the arc $\alpha$ determines a word of the form $x y^{\epsilon'_{1}} y^{\epsilon'_{2}} \cdots y^{\epsilon'_{l'}} x^{-1}$, where $\epsilon'_{i} = \pm 1$ $(i=1, 2, \ldots, l')$, $\sum_{i=1}^{l'} \epsilon'_{i} \neq 0$ and $l' < l$. Applying this argument finitely many times, we end with the case where $w$ is reduced (in particular $\epsilon_1 = \epsilon_l$), and thus the conclusion follows. \end{proof}
\begin{lemma} With a suitable choice of orientations of $\partial E_1'$ and $\partial E_2'$, if a word in $\{x^{\pm 1}, y^{\pm 1} \}$ corresponding to $l$ contains both of terms of the forms $x y^{\epsilon_{1}} y^{\epsilon_{2}} \cdots y^{\epsilon_{l}}x$ and $x y^{\delta_{1}} y^{\delta_{2}} \cdots y^{\delta_{k}}x$, where $\epsilon_{i} = \pm 1$ for $i \in \{1 , 2, \ldots, l\}$, $\delta_{j} = \pm 1$ for $j \in \{1, 2, \ldots, k\}$ and
$| \sum_{i=1}^{l} \epsilon_{i} - \sum_{j=1}^{k} \delta_{j} | \geq 2$, then the element of $\pi_1 (W)$ represented by $l$ cannot be $($a positive power of $)$ a primitive element. \label{lem:key3} \end{lemma} \begin{proof} Set $w_1 = x y^{\epsilon_{1}} y^{\epsilon_{2}} \cdots y^{\epsilon_{l}}x$, $w_2 = x y^{\delta_{1}} y^{\delta_{2}} \cdots y^{\delta_{k}}x$, $m = \sum_{i=1}^{l} \epsilon_{i}$ and $n = \sum_{j=1}^{k} \delta_{j}$. Let $\alpha$ and $\beta$ be the subarcs of $l$ corresponding to the subwords $w_1$ and $w_2$ respectively. By cutting the Heegaard surface $\Sigma$ along $\partial E_1' \cup \partial E_2' $, we get a 4-holed sphere $\Sigma'$. Let $e'^\pm_{1}$ and $e'^\pm_{2}$ be the holes coming from $E_1'$ and $E_2'$ respectively.
Suppose first that both subwords $w_1$ and $w_2$ are reduced, so $w_1 = x y^m x$ and $w_2 = x y^n x$. If both of $m$ and $n$ are non-zero and these have different signs, then we get the conclusion by Lemma \ref{lem:key}. Thus, we assume that $m$ and $n$ have the same sign or one of them is zero. Without loss of generality we can assume that $n > m \geq 0$. Suppose that $m=0$. Then $w_1 = x^2$ and $w_2 = x y^n x$ ($n \geq 2$) and thus, by Lemma \ref{lem:key}, $l$ cannot represent a primitive element of $\pi_1 (W)$. Suppose that $m>0$. Let $\alpha_0$ be the subarc of $\alpha$ corresponding to the term $xy$. Then $\alpha_0$ connects two circles $e'^+_1$ and $e'^-_2$ in $\Sigma'$. Let $E^*$ be the band sum of $E'_1$ and $E'_2$ along $\alpha_0$. Then $E_1'$ and $E^*$ form a new complete meridian system of $W$. See Figure \ref{fig:arcs_4}. \begin{center} \begin{overpic}[width=3.5cm,clip]{arcs_4.eps} \linethickness{3pt} \put(0, 80){$e'^+_1$} \put(0, 3){$e'^-_1$} \put(90, 80){$e'^-_2$} \put(90, 3){$e'^+_2$} \put(35, 81){\color{red} \small $\alpha_0$} \put(45, 44){\color{blue} \small $E^*$} \end{overpic} \captionof{figure}{The arcs $\Sigma' \cap (\alpha \cup \beta)$ and the disk $E^*$.} \label{fig:arcs_4} \end{center} Assigning the same symbol $y$ to $\partial E^*$, the arc $\alpha$ determines a word of the form $xy^{m-1}x$ while $\beta$ determines $xy^{n-1}x$. Applying this argument $m$ times, we finally end with the case of $m=0$, thus, the conclusion follows by induction.
Next suppose that at least one of $w_1$ or $w_2$ is reducible. Without loss of generality we can assume that $w_1$ is reducible and $\epsilon_{1} = 1$. Since $w_1$ contains the term $y y^{-1}$, there is no arc in $\Sigma' \cap l$ connecting $e'^-_{1}$ and $e'^+_{1}$. See Figure \ref{fig:arcs_3}. \begin{center} \begin{overpic}[width=3.5cm,clip]{arcs_3.eps} \linethickness{3pt} \put(0, 90){$e'^+_1$} \put(0, 13){$e'^-_1$} \put(90, 90){$e'^-_2$} \put(90, 13){$e'^+_2$} \put(42, 0){\small $y y^{-1}$} \end{overpic} \captionof{figure}{There are no arcs of $\Sigma' \cap l$ connecting $e'^-_{1}$ and $e'^+_{1}$.} \label{fig:arcs_3} \end{center} This implies that the word represented by $l$ cannot contain the term $x^2$; thus, $k \neq 0$. Further if one of $\epsilon_{l}$, $\delta_1$ and $\delta_{k}$ is $-1$, then $l$ cannot represent a primitive element of $\pi_1 (W)$ by Lemma \ref{lem:key}. Thus, we can assume that $\epsilon_1 = \epsilon_l = \delta_1 = \delta_k = 1$. Let $\alpha$ and $\beta$ be the subarcs of $l$ corresponding to the subwords $w_1$ and $w_2$ respectively. Then $\Sigma' \cap ( \alpha \cup \beta )$ consists of arcs shown in Figure \ref{fig:arcs_5}. Let $\alpha_0$ be the subarc of $\alpha$ corresponding to the word $xy$ and let $E^*$ be the band sum of $E'_1$ and $E'_2$ along $\alpha_0$. \begin{center} \begin{overpic}[width=4cm,clip]{arcs_5.eps} \linethickness{3pt} \put(0, 106){$e'^+_1$} \put(0, 3){$e'^-_1$} \put(100, 3){$e'^+_2$} \put(100, 106){$e'^-_2$} \put(42, 104){\color{red} \small $\alpha_0$} \put(52, 53){\color{blue} \small $E^*$} \end{overpic} \captionof{figure}{The arcs $\Sigma' \cap (\alpha \cup \beta)$ and the disk $E^*$.} \label{fig:arcs_5} \end{center} Then $E_1'$ and $E^*$ form a new complete meridian system of $W$. Assigning the same symbol $y$ to $\partial E^*$, the arc $\alpha$ determines a word of the form $x y^{\epsilon'_{1}} y^{\epsilon'_{2}} \cdots y^{\epsilon'_{l'}}x$ ($\epsilon'_i = \pm 1$), while $\beta$ determines $x y^{\delta'_{1}} y^{\delta'_{2}} \cdots y^{\delta'_{k'}}x$ ($\delta'_j = \pm 1$). Here note that we have $l' < l$, $k' < k$, $\sum_{i=1}^{l'} \epsilon'_{i}= m$ and $\sum_{j=1}^{k'} \delta'_{j} = n$. We repeat this argument until both words corresponding to $\alpha$ and $\beta$ become reduced. We claim that this process finishes after finitely many steps. Suppose not. Then after repeating this process finitely many times, we finally end with the case where the word corresponding to one of $\alpha$ and $\beta$ (say $\alpha$) is $x^2$. Then again by Figure \ref{fig:arcs_3} the word corresponding to $\beta$ must be reduced. This is a contradiction. Now the conclusion follows from the argument of the case where both $w_1$ and $w_2$ are reduced. \end{proof}
\subsection{Shells}
Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of a lens space $L = L(p, q)$ with $1 \leq q \leq p/2$. We briefly review the definition of a {\it shell}, which is a special subcomplex of the star neighborhood of a vertex of a primitive disk in the non-separating disk complex $\mathcal D(V)$, introduced in Section 3.3 of \cite{CK15b}. We call a pair of disjoint, non-isotopic primitive disks in $V$ a {\it primitive pair} in $V$. A non-separating disk $E_0$ properly embedded in $V$ is said to be {\it semiprimitive} if there is a primitive disk $E'$ in $W$ disjoint from $E_0$. A primitive pair and a semiprimitive disk in $W$ can be defined in the same way.
Let $E$ be a primitive disk in $V$. Choose a dual disk $E'$ of $E$. Then we have unique semiprimitive disks $E_0$ and $E'_0$ in $V$ and $W$ respectively that are disjoint from $E \cup E'$. We denote the solid torus $\operatorname{cl}(V- \operatorname{Nbd}(E))$ simply by $V_E$. By a suitable choice of the oriented longitude and meridian of $V_E$, the circle $\partial E'_0$ can be assumed to be a $(p, \bar{q})$-curve on the boundary of $V_E$, where $\bar{q} \in \{q, q'\}$ and $q'$ is the unique integer satisfying $1 \leq q' \leq p/2$ and $qq' \equiv \pm 1 \pmod p$. We say that $E$ is of {\it $(p , \bar{q})$-type} if $\partial E'_0$ is a $(p, \bar{q})$-curve on $\partial V_E$.
Suppose first that $E$ is of $(p , q)$-type. Then the circle $\partial E_0$ is a $(p, q')$-curve on the boundary of the solid torus $\operatorname{cl}(W- \operatorname{Nbd}(E'))$. We construct a sequence of disks $E_0$, $E_1 , \ldots , E_p$ starting at the semiprimitive disk $E_0$ as follows. Choose an arc $\alpha_0$ on $\Sigma$ so that $\alpha_0$ meets each of $E_0$ and $E$ in exactly one point of $\partial \alpha_0$, and $\alpha_0$ is disjoint from $\partial E' \cup \partial E'_0$. Let $E_1$ be a disk in $V$ which is the band sum of $E_0$ and $E$ along $\alpha_0$. Then the disk $E_1$ is disjoint from $E \cup E_0$ and intersects $\partial E'$ in a single point and $\partial E'_0$ in $p$ points. Then inductively, we construct $E_{i+1}$ from $E_i$ until we get $E_p$ by taking the band sum with $E$ along an arc $\alpha_i$ on $\Sigma$ such that $\alpha_i$ meets each of $E_i$ and $E$ in exactly one point of $\partial \alpha_i$, and $\alpha_i$ is disjoint from $\partial E' \cup \partial E'_0$. Figure \ref{fig:sequence_of_disks} illustrates the case when $E$ is of $(7, 3)$-type, and so $\partial E_0$ is a $(7, 2)$-curve on the boundary of the solid torus $\operatorname{cl}(W- \operatorname{Nbd}(E'))$ in the lens space $L(7, 3)$.
\begin{center} \begin{overpic}[width=15cm,clip]{sequence_of_disks.eps}
\linethickness{3pt}
\put(0, 210){$E_0'$}
\put(195, 210){$E'$}
\put(215, 210){$E_0'$}
\put(415, 210){$E'$}
\put(0, 70){$E_0'$}
\put(195, 70){$E'$}
\put(215, 70){$E_0'$}
\put(415, 70){$E'$}
\put(115, 204){\small $\alpha_0$}
\put(330, 205){\small $\alpha_1$}
\put(110, 68){\small $\alpha_2$}
\put(130, 190){$\partial E$}
\put(345, 190){$\partial E$}
\put(130, 50){$\partial E$}
\put(327, 72){$\partial E$}
\put(110, 240){{\color{red} $\partial E_0$}}
\put(320, 240){{\color{red} $\partial E_1$}}
\put(100, 100){{\color{red} $\partial E_2$}}
\put(300, 80){{\color{red} $\partial E_7$}}
\put(180, 150){\Large $W$}
\put(410, 150){\Large $W$}
\put(180, 10){\Large $W$}
\put(410, 10){\Large $W$} \end{overpic} \captionof{figure}{The circles $\partial E_0$, $\partial E_1$, $\partial E_2$ and $\partial E_7$ on the surface $\Sigma = \partial W$ in $L(7, 3)$ for the sequence of disks $E_0, E_1, \ldots, E_7$.} \label{fig:sequence_of_disks} \end{center}
We call the full subcomplex of $\mathcal D(V)$ spanned by the vertices $E_0, E_1, \ldots, E_p$, and $E$ a {\it $(p, q)$-shell} centered at the primitive disk $E$ and denote it simply by $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$, see Figure \ref{fig:shell} for example.
\begin{center} \begin{overpic}[width=6cm,clip]{shell.eps}
\linethickness{3pt}
\put(80, 4){$E$}
\put(-10, 4){$E_0$}
\put(-10, 53){$E_1$}
\put(16, 90){$E_2$}
\put(55, 100){$E_3$}
\put(100, 100){$E_4$}
\put(145, 87){$E_5$}
\put(171, 53){$E_6$}
\put(171, 4){$E_7$} \end{overpic} \captionof{figure}{A $(7, 3)$-shell in the disk complex $\mathcal D(V)$ for $L(7, 3)$. The disks $E_1$, $E_2$, $E_5$, $E_6$ and $E$ are the only primitive disks by Lemma \ref{lem:sequence}.} \label{fig:shell} \end{center}
A shell is a 2-dimensional subcomplex of $\mathcal{D}(V)$. It is straightforward from the construction that for $0 \leq i < j \leq p$, $E_i \cap E_j$ consists of $j - i -1$ arcs. We note that there are infinitely many choices of such an arc $\alpha_0$, and so of the disk $E_1$. But once we choose $E_1$, the arcs $\alpha_i$ for $i \in \{1, 2, \ldots, p-1\}$ are uniquely determined and so are the successive disks $E_2$, $E_3, \ldots, E_p$.
Assign symbols $x$ and $y$ to the oriented circles $\partial E'$ and $\partial E'_0$ respectively. Then the oriented boundary circles $\partial E$, $\partial E_0$, $\partial E_1, \ldots, \partial E_p$ from the shell $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$ represent elements of the free group $\pi_1 (W) = \langle x, y \rangle$. We observe that, with a suitable choice of orientation, the circles $\partial E$, $\partial E_0$, $\partial E_1, \partial E_2, \ldots, \partial E_{p-1}, \partial E_p$ determine the words of the form $x$, $y^p$, $x y^{q}, xy^qxy^{p-q}, \ldots, (xy)^{p-1}y, (xy)^p$ respectively.
\begin{lemma}[Lemmas 3.8 and 3.13 in \cite{CK15b}] \label{lem:sequence} Let $\mathcal S_E = \{E_0, E_1, \ldots, E_{p-1}, E_p\}$ be a $(p, q)$-shell centered at a primitive disk $E$ in $V$. Then we have \begin{enumerate} \item $E_0$ and $E_p$ are semiprimitive. \item $E_j$ is primitive if and only if $j \in \{1, q', p-q', p-1\}$ where $q'$ is the unique integer satisfying $qq' \equiv \pm1 \pmod p$ and $1 \leq q' \leq p/2$. \item $E_1$ and $E_{p-1}$ are of $(p,q)$-type while $E_{q'}$ and $E_{p-q'}$ are of $(p,q')$-type. \end{enumerate} \end{lemma}
We have constructed a $(p, q)$-shell $\mathcal S_E$ by assuming $E$ is of a $(p, q)$-type. If $E$ is of $(p, q')$-type, then $\mathcal S_E$ is a $(p, q')$-shell, and Lemma \ref{lem:sequence} still holds by exchanging $q$ and $q'$ in the conclusion.
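For example, in the $(7, 3)$-shell $\mathcal S_E = \{E_0, E_1, \ldots, E_7\}$ of Figure \ref{fig:shell} we have $q' = 2$ since $3 \cdot 2 \equiv -1 \pmod 7$, so the primitive disks among $E_0, E_1, \ldots, E_7$ are exactly $E_1$, $E_2$, $E_5$ and $E_6$, where $E_1$ and $E_6$ are of $(7, 3)$-type while $E_2$ and $E_5$ are of $(7, 2)$-type.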
\begin{lemma}[Lemma 3.6 in \cite{CK15b}] \label{lem:first_surgery} Let $E_0$ be a semiprimitive disk in $V$, and let $E$ be a primitive disk in $V$ disjoint from $E_0$. If a primitive or semiprimitive disk $D$ in $V$ intersects $E \cup E_0$ transversely and minimally, then the disk $E_1$ from surgery on $E \cup E_0$ along $D$ is a primitive disk, which has a common dual disk with $E$. \end{lemma}
By Lemma \ref{lem:first_surgery}, given a primitive disk $E$ and a semiprimitive disk $E_0$ in $V$ disjoint from $E$, any primitive disk $D$ in $V$ determines a unique shell $\mathcal S_E = \{E_0, E_1, \ldots, E_{p-1}, E_p\}$ such that $E_1 = D$ if $D$ is disjoint from $E \cup E_0$, and $E_1$ is the disk from surgery on $E \cup E_0$ along $D$ if $D$ intersects $E \cup E_0$. The following is a generalization of Lemma \ref{lem:first_surgery}.
\begin{lemma}[Lemma 3.10 in \cite{CK15b}] \label{lem:surgery_on_primitive} Let $\mathcal S_E = \{E_0, E_1, \ldots, E_{p-1}, E_p\}$ be a shell centered at a primitive disk $E$ in $V$, and let $D$ be a primitive or semiprimitive disk in $V$. For $j \in \{1, 2, \ldots, p-1\}$, \begin{enumerate} \item if $D$ is disjoint from $E \cup E_j$ and is isotopic to none of $E$ and $E_j$, then $D$ is isotopic to either $E_{j-1}$ or $E_{j+1}$, and \item if $D$ intersects $E \cup E_j$, then the disk from surgery on $E \cup E_j$ along $D$ is isotopic to either $E_{j-1}$ or $E_{j+1}$. \end{enumerate} \end{lemma}
\subsection{Bridges} Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of a lens space $L = L(p, q)$ with $1 \leq q \leq p/2$ and $p \not\equiv \pm 1 \pmod q$. Throughout the subsection, we fix the following. \begin{itemize} \item An integer $\bar{q} \in \{q, q' \}$ where $q'$ is the unique integer satisfying $1 \leq q' \leq p/2$ and $qq' \equiv \pm 1 \pmod p$; and \item The integers $m$ and $r$ satisfying $p=\bar{q} m + r$ with $2 \leq r \leq q-2$. \end{itemize}
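For example, for the lens space $L(12, 5)$ we have $q = q' = 5$ since $5 \cdot 5 \equiv 1 \pmod{12}$, so $\bar{q} = 5$ for either choice, and $12 = 5 \cdot 2 + 2$ gives $m = 2$ and $r = 2$.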
We recall from Theorem \ref{thm:contractibility} that $\mathcal P(V)$ consists of infinitely many isomorphic tree components in this case. The key to the disconnectedness is that we can find non-adjacent primitive disks $D$ and $E$ in $V$ such that the corridor connecting $D$ and $E$ contains no vertices of primitive disks except $D$ and $E$. Then $D$ and $E$ are contained in different components of $\mathcal{P}(V)$ since the dual complex of $\mathcal{D}(V)$ is a tree.
We call a corridor $\mathcal{C}_{\{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \}$ in $\mathcal{D}(V)$ a {\it bridge} when it connects vertices $D$ and $E$ of primitive disks, and contains no vertices of primitive disks except $D$ and $E$. In this case, we denote the bridge by $\mathcal B_{\{D, E\}}$ instead of $\mathcal{C}_{\{ D, E \} }$. We recall that the two vertices of the $2$-simplex $\Delta_1$ other than $E$ form the principal pair $\{E_*, E_{**}\}$ of $E$ with respect to $D$ (Lemma \ref{lem:principal pair of a corridor}).
\begin{lemma}[Lemma 4.1 and Theorem 4.2 in \cite{CK15b}] \label{lem:construction of a bridge} Let $E$ be a primitive disk of $(p,\bar{q})$-type. Let $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$ be a $(p, \bar{q})$-shell centered at $E$. Then there exists a primitive disk $D$ and a bridge $\mathcal{B}_{\{ D , E \}} = \{ \Delta_1 , \Delta_2 , \ldots , \Delta_n \}$ connecting $E$ and $D$ such that $\Delta_1 = \{ E, E_m , E_{m+1} \}$. \end{lemma} \begin{proof} Actually, the proof of Theorem 4.2 in \cite{CK15b} introduces an algorithmic way to produce a bridge under the condition $p \not\equiv \pm 1 \pmod q$. We provide a sketch of the proof. Assigning symbols $x$ and $y$ to $\partial E'$ and $\partial E'_0$ respectively, with appropriate orientations, any oriented simple closed curve on $\Sigma$ represents an element of the free group $\pi_1 (W) = \langle x, y \rangle$. Then we may assume that $\partial E_m$ and $\partial E_{m+1}$ represent the elements $(xy^{\bar{q}})^{m-1} x y^{\bar{q}+r}$ and $(xy^{\bar{q}})^{m} x y^{r}$ respectively. By cutting $\Sigma = \partial V$ along $\partial E_m \cup \partial E_{m+1}$, we get a $4$-holed sphere $\Sigma_*$ shown in Figure \ref{fig:L_or_R-replacement}. In the figure, $D_1 = E_m$ and $D_2 = E_{m+1}$, and $m_1 = m -1$ and $m_2 = m$. We have two possibilities of the patterns of $\partial E' \cap \Sigma_*$ as in the figure.
\begin{center} \begin{overpic}[width=15cm,clip]{L_or_R-replacement.eps}
\linethickness{3pt}
\put(7, 143){\small $m_1 + 1$}
\put(140, 155){\small $m_1 + 1$}
\put(160, 137){\small $m_2 - m_1$}
\put(260, 150){\small $m_1 + 1$}
\put(395, 140){\small $m_1 + 1$}
\put(375, 158){\small $m_2 - m_1$}
\put(90, 157) {${\color{blue}l'}$}
\put(90, 18) {${\color{blue}l''}$}
\put(15, 75) {${\color{blue}l^*_1}$}
\put(167, 75) {${\color{blue}l^*_2}$}
\put(-3, 125){\large$\partial D_1$}
\put(170, 125){\large $\partial D_2$}
\put(237, 125){\large $\partial D_1$}
\put(410, 125){\large $\partial D_2$}
\put(-3, 50){\large $\partial D_1$}
\put(170, 50){\large $\partial D_2$}
\put(237, 50){\large $\partial D_1$}
\put(410, 50){\large $\partial D_2$}
\put(170, 90){\large $\partial D_*$}
\put(410, 90){\large $\partial D_*$}
\put(90, 0){\large (a)}
\put(330, 0){\large (b)} \end{overpic} \captionof{figure}{The $4$-holed sphere $\Sigma_*$. There are two patterns of $\partial E' \cap \Sigma_*$.} \label{fig:L_or_R-replacement} \end{center}
For each of the two cases, we can show that the boundary circle of the horizontal disk $E_\emptyset$, which is $\partial D_*$ in the figure, represents the word $(xy^{\bar{q}})^{2m} x y^{2r}$. We call $\{ E_m , E_{\emptyset} \}$ ($\{ E_{\emptyset} , E_{m+1} \}$, respectively) the pair obtained from the pair $\{ E_m , E_{m+1} \}$ by {\it $R$-replacement} ({\it $L$-replacement}, respectively). By cutting $\partial V$ along $ \partial E_m \cup \partial E_{\emptyset} $ ($ \partial E_{\emptyset} \cup \partial E_{m+1}$, respectively) we get again a $4$-holed sphere, denoted by $\Sigma_*$ again, and the two possibilities of the patterns of $\partial E' \cap \Sigma_*$ shown in Figure \ref{fig:L_or_R-replacement}. We denote by $E_R$ ($E_L$, respectively) the horizontal disk in the figure. The boundary of $E_R$ ($E_L$, respectively) represents the word $(xy^{\bar{q}})^{3m} x y^{3r}$ ($(xy^{\bar{q}})^{3m+1} x y^{3r - \bar{q}}$, respectively). In the figure, $D_* = E_R$, $D_1 = E_m$, $D_2 = E_{\emptyset}$, $m_1 = m -1$ and $m_2 = 2m$. ($D_* = E_L$, $D_1 = E_{\emptyset}$, $D_2 = E_{m+1}$, $m_1 = 2m$ and $m_2 = m$, respectively.) We call $\{ E_m , E_R \}$ and $\{ E_R , E_{\emptyset} \}$ ($\{ E_{\emptyset} , E_L \}$ and $\{ E_L , E_{m+1} \}$, respectively) the pair obtained from the pair $\{ E_m , E_{\emptyset} \}$ ($\{ E_{\emptyset} , E_{m+1} \}$, respectively) by $R$-replacement and $L$-replacement respectively. In this way, for each positive word $w = w(L, R)$ on the set of letters $\{ L, R \}$, we can define the disk $E_w$ inductively. We call the full subcomplex $\mathcal{X}$ of the disk complex $\mathcal{D}(V)$ spanned by the set of vertices \[\{ E, E_m , E_{m+1} \} \cup \{ E_w \mid \mbox{$w$ is a (possibly empty) positive word in $\{ L, R \}$} \}\] the {\it principal complex generated by} $\mathcal{S}_E$. See Figure \ref{fig:principal_complex}.
\begin{center} \begin{overpic}[width=11cm,clip]{principal_complex.eps}
\linethickness{3pt}
\put(152, 8){$E$}
\put(0, 155){$E_m$}
\put(300, 155){$E_{m+1}$}
\put(4, 207){$E_{RR}$}
\put(290, 207){$E_{LL}$}
\put(38, 251){$E_R$}
\put(264, 251){$E_L$}
\put(75, 286){$E_{RL}$}
\put(217, 286){$E_{LR}$}
\put(152, 302){$E_\emptyset$}
\put(128, 190){$R$}
\put(78, 205){$R$}
\put(180, 190){$L$}
\put(225, 205){$L$}
\put(100, 227){$L$}
\put(205, 225){$R$}
\put(-30, 155){\color{red} \small $\bar{q}+r$}
\put(335, 155){\color{red} \small $r$}
\put(-10, 207){\color{red} \small $4r$}
\put(320, 207){\color{red} \small $4r-2\bar{q}$}
\put(20, 251){\color{red} \small $3r$}
\put(290, 251){\color{red} \small $3r-\bar{q}$}
\put(40, 286){\color{red} \small $5r-\bar{q}$}
\put(245, 286){\color{red} \small $5r-2 \bar{q}$}
\put(152, 315){\color{red} \small $2r$}
\end{overpic} \captionof{figure}{The principal complex generated by $\mathcal{S}_E$.} \label{fig:principal_complex} \end{center}
The words in $\{ x^{\pm 1}, y^{\pm 1} \}$ represented by vertices of the principal complex $\mathcal{X}$ have the following property: \begin{itemize} \item Let $\{D_1, D_2\}$ be a pair of vertices in $\mathcal X$ obtained from $\{ E_m, E_{m+1} \}$ by a series of replacements, and let $D_*$ be the vertex such that $\{D_1, D_*\}$ and $\{D_*, D_2\}$ are the $R$-replacement and $L$-replacement of $\{D_1, D_2\}$ respectively. If $\partial D_1$ and $\partial D_2$ represent the words $(xy^{\bar{q}})^{m_1} x y^{n_1}$ and $(xy^{\bar{q}})^{m_2} x y^{n_2}$ respectively, then the word represented by $\partial D_*$ is $(xy^{\bar{q}})^{m_1 + m_2 + 1} x y^{n_1 + n_2 - \bar{q}}$. See Figure \ref{fig:L_or_R-replacement}. \end{itemize} Using this property, we can find inductively the word corresponding to each vertex in $\mathcal{X}$. Further, we see that every word corresponding to a vertex $E_w$ in $\mathcal{X}$ is a positive power of $x y^{\bar{q}}$ followed by $x y^{n_w}$ for some non-zero integer $n_w$. In Figure \ref{fig:principal_complex}, this number $n_w$ is assigned for each vertex.
By an elementary number-theoretic argument, we can show that there exists a vertex $E_w$ in $\mathcal{X}$ such that $n_w = \bar{q} \pm 1$. This implies that $E_w$ is a primitive disk. Choose such $w$ so that the length of $w$ is minimal, and set $D = E_w$. Then the corridor connecting $E$ and $D$ is the required bridge. (Here we recall that neither $E_m$ nor $E_{m+1}$ is primitive.) \end{proof}
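For example, when $(p, \bar{q}) = (12, 5)$ we have $m = 2$ and $r = 2$, so already $n_{\emptyset} = 2r = 4 = \bar{q} - 1$; in this case one may take $D = E_{\emptyset}$, whose boundary represents $(xy^{5})^{4} x y^{4}$, and the resulting bridge consists of only two $2$-simplices. This is exactly the bridge described explicitly in Section \ref{sec:first_example}.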
\begin{lemma} \label{lem:principal pair of a bridge} Let $\mathcal{B}_{ \{ D, E \} }$ be a bridge connecting vertices $D$ and $E$ of primitive disks. Let $\{ E_{*} , E_{**} \}$ be the principal pair of $E$ with respect to $D$. Let $E$ be of $(p , \bar{q})$-type. Then there exists a unique $(p, \bar{q})$-shell $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$ centered at $E$ such that $\{ E_* , E_{**} \} = \{ E_m , E_{m+1} \}$. \end{lemma}
\begin{proof} The proof will be similar to that of Theorem 3.11 in \cite{CK15b}. Let $C$ be an outermost subdisk of $D$ cut off by $D \cap E$. Any dual disk $E'$ of $E$ determines a unique semiprimitive disk $E_0$ in $V$ disjoint from $E \cup E'$. Among all the dual disks of $E$, choose one, denoted by $E'$ again, so that the resulting semiprimitive disk $E_0$ intersects $C$ minimally. Then, $C$ must intersect $E_0$, otherwise one of the disks of the principal pair $\{E_*, E_{**}\}$ is $E_0$ and the other is primitive by Lemma \ref{lem:first_surgery}, which is impossible since they are the representatives of the vertices of the bridge $\mathcal{B}_{ \{ D, E \} }$ other than $D$ and $E$.
Let $C_0$ be an outermost subdisk of $C$ cut off by $C \cap E_0$. Then one of the disks from surgery on $E_0$ along $C_0$ is $E$, and the other, say $E_1$, is primitive by Lemma \ref{lem:first_surgery} again. Then we have the unique shell $\mathcal S_E = \{E_0, E_1, E_2, \ldots, E_p\}$ centered at $E$. Let $E'_0$ be a semiprimitive disk in $W$ disjoint from $E \cup E'$. The circle $\partial E'_0$ would be a $(p, \bar q)$-curve on the boundary of the solid torus $\operatorname{cl}(V- \operatorname{Nbd}(E \cup E'))$. We will assume $\bar q = q$. That is, $\partial E'_0$ is a $(p, q')$-curve and so $\mathcal S_E$ is a $(p, q)$-shell. The proof is easily adapted for the case of $\bar q = q'$.
{\noindent \sc Claim 1.} There is a disk $E_j$ in the shell for some $1 \leq j < p/2$ that is disjoint from $C$.
{\noindent \it Proof of Claim $1$.}
First, it is clear that there is a disk $E_j$ for some $j \in \{1, 2, \ldots, p-1\}$ disjoint from $C$. Indeed, if $C$ intersects each of $E_1, E_2, \ldots, E_j$ for some $j \in \{1, 2, \ldots, p-1\}$, then one of the disks from surgery on $E_j$ along an outermost subdisk $C_j$ of $C$ cut off by $C \cap E_j$ is $E$ and the other one is $E_{j+1}$ by Lemma \ref{lem:surgery_on_primitive}, and we have $|C \cap E_{j+1}| < |C \cap E_j|$. Consequently, if $C$ intersected every disk in the sequence, we would have $|C \cap E_p| < |C \cap E_0|$, which contradicts the minimality of $|C \cap E_0|$ since $E_p$ is also a semiprimitive disk disjoint from $E$.
Now, denote by $E_j$ again the first disk in the sequence that is disjoint from $C$. Then the two disks from surgery on $E$ along $C$ are $E_j$ and $E_{j+1}$, and hence $C$ is also disjoint from $E_{j+1}$. Actually they are the only disks in the sequence disjoint from $C$. For other disks in the sequence, it is easy to see that $|C \cap E_{j-k}| = k = |C \cap E_{j+1+k}|$. If $j \geq p/2$, then we have $|C \cap E_0| = j > p-j-1 = |C \cap E_p|$, contradicting the minimality condition again. Thus, $E_j$ is one of the disks in the first half of the sequence, that is, $1 \leq j < p/2$.
{\noindent \sc Claim 2.} The disk $E_j$ is $E_m$.
{\noindent \it Proof of Claim $2$.} First, it is clear that $E_j$ is not $E_1$. That is, $C$ must intersect $E_1$, otherwise the principal pair $\{E_*, E_{**}\}$ equals $\{E_1, E_2\}$ and $E_1$ is primitive, which is impossible. Assigning symbols $x$ and $y$ to oriented $\partial E'$ and $\partial E_0'$ respectively, $\partial E_1$, $\partial E_2$, $\partial E_3$ may represent the elements of the form $xy^p$, $xy^qxy^{p-q}$, $xy^qxy^qxy^{p-2q}$ respectively. In general, $\partial E_k$ represents an element of the form $xy^{n_1}xy^{n_2} \cdots xy^{n_k}$ for some positive integers $n_1, \ldots, n_k$ with $n_1+ \cdots + n_k = p$ for each $k \in \{1, 2, \ldots, p\}$. Furthermore, since $C$ is disjoint from $E_j$ and also from $E_{j+1}$, the word determined by the circle $\partial D$ contains the subword of the form $y^{m_1}xy^{m_2} \cdots xy^{m_{j+1}}$ (or its reverse) which is the part of $\partial C$ when $\partial E_{j+1}$ represents an element of the form $xy^{m_1}xy^{m_2} \cdots xy^{m_{j+1}}$.
Suppose that $2 \leq j \leq m-1$. Then an element represented by $\partial E_{j+1}$ has the form $xy^q \cdots xy^q xy^{p-jq}$, and so an element represented by $\partial D$ contains $xy^qx$ and $y^{p-jq}$, which lies in the part of $\partial C$. Since $(p-jq)-q = q(m-1-j)+r \geq 2$, $D$ cannot be primitive by Lemma \ref{lem:property_of_primitive_elements2}, a contradiction.
Suppose that $m+1 \leq j \leq q'-2$. We may write a word of $\partial E_{q'}$ as $xy^{n_1}xy^{n_2} \cdots xy^{n_{q'}}$ where each $n_k \in \{n, n+1\}$ for some positive integer $n$ since $\partial E_{q'}$ is primitive. Then, by a similar consideration to the above, an element represented by $\partial D$ contains $xy^{m_1}x$ and $y^{m_2}$ for some positive integers $m_1$ and $m_2$ with $|m_1 - m_2|\geq2$, which lies in the part of $\partial C$. Thus, $D$ cannot be primitive by Lemma \ref{lem:key3}, a contradiction again.
Suppose that $q'-1 \leq j \leq q'$. Then the principal pair $\{E_*, E_{**}\}$ contains $E_{q'}$ which is primitive by Lemma \ref{lem:sequence}. This is impossible.
Finally, suppose that $q'+1 \leq j < p/2$. Then, considering an element represented by $\partial E_{j+1}$, we observe that an element represented by $\partial D$ contains $xyx$ which lies in the part of $\partial C$. Furthermore, when we write a word of the part of $\partial C$ as $xy^{n_1}xy^{n_2} \cdots xy^{n_{j+1}}$, at least two of $n_1, n_2, \ldots, n_{j+1}$ are $1$ and so at least one of $n_1, n_2, \ldots, n_{j+1}$ is greater than $2$. Again, $D$ cannot be primitive by Lemma \ref{lem:property_of_primitive_elements2}, a contradiction.
From {\sc Claim 2}, the outermost subdisk $C$ is disjoint from $E_m$, and hence, the principal pair $\{E_*, E_{**}\}$ equals $\{E_m, E_{m+1}\}$. \end{proof}
\begin{lemma} Let $E$ be a primitive disk of $(p , \bar{q})$-type, and let $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$ be a $(p, \bar{q})$-shell centered at $E$. Let $D$ be a primitive disk and let $\mathcal{B}_{ \{ D, E \} } = \{ \Delta_1 , \Delta_2 , \ldots, \Delta_n \} $ be the bridge with $\Delta_1 = \{E, E_m, E_{m+1}\}$, given by Lemma $\ref{lem:construction of a bridge}$. Given a primitive disk $F$ and a corridor $\mathcal{C}_{ \{ F, E \} } = \{ \Delta'_1 , \Delta'_2 , \ldots, \Delta'_{n'} \} $, if $\Delta'_1 = \Delta_1$, then the corridor $\mathcal C_{\{F, E\}}$ contains the bridge $\mathcal B_{\{D, E\}}$. \label{lem:words for disks in a bridge} \end{lemma}
\begin{proof} We assume that each of the circles $\partial D$, $\partial E$, $\partial F$ intersects $\partial E' \cup \partial E'_0$ transversely and minimally. Assigning symbols $x$ and $y$ to $\partial E'$ and $\partial E'_0$ respectively, with appropriate orientations, a word for $\partial E$ is $x$ while a word for $\partial D$ is a positive power of $x y^{\bar{q}}$ followed by $x y^{\bar{q}\pm 1}$ as in the proof of Lemma \ref{lem:construction of a bridge}.
For each $k \in \{1, 2, \ldots, n-1\}$, set $\Delta_{k} \cap \Delta_{k+1} = \{ D_k, D'_k \}$ and $\Delta_{k+1} = \{ D_{k+1}, D_k, D'_k \}$. To show the corridor $\mathcal C_{\{F, E\}}$ contains the bridge $\mathcal B_{\{D, E\}}$, it suffices to show that the disk from surgery on $D_k \cup D'_k$ along $F$ is $D_{k+1}$ by Lemma \ref{lem:two corridors}. We use the induction on $k$. If $k = 1$, the conclusion holds immediately since we assumed $\Delta'_1 = \Delta_1$.
Let $k \geq 2$ and assume the conclusion holds for all $i < k$. We simply write $\Delta_{k} = \{ D_0, D_1, D_2 \}$ and $\Delta_{k+1} = \{ D_1, D_2, D_* \}$ from now on, and will show that the disk from surgery on $D_1 \cup D_2$ along $F$ is $D_*$. By the construction of a bridge in the proof of Lemma \ref{lem:construction of a bridge}, we may assume that the circles $\partial D_1$ and $\partial D_2$ represent the elements of the forms $(xy^{\bar{q}})^{m_1} xy^{n_1}$ and $(xy^{\bar{q}})^{m_2} xy^{n_2}$ respectively, where $n_1 > \bar{q} > n_2 > 0$. By cutting the Heegaard surface $\Sigma$ along $\partial D_1 \cup \partial D_2$, we get a $4$-holed sphere $\Sigma_*$. We denote by $d^\pm_{1}$ and $d^\pm_{2}$ the holes coming from $\partial D_1$ and $\partial D_2$ respectively. There are two patterns of $\partial E' \cap \Sigma_*$ as in Figure \ref{fig:L_or_R-replacement}, and the boundary of the disk $D_*$ is the horizontal circle in the figure. We only consider the case of Figure \ref{fig:L_or_R-replacement} (a). The argument for the case (b) will be the same.
Let $C$ be an outermost subdisk of $F$ cut off by $D_1 \cup D_2$. Then $\alpha = \Sigma_* \cap C$ is an arc whose endpoints lie in the same component of $\partial \Sigma_*$. We consider only the case where $\partial \alpha$ lies in $d^-_{1}$. The argument for other cases will be the same. Let $\widetilde{\Sigma}_*$ be the covering space of $\Sigma_*$ such that \begin{enumerate} \item $\widetilde{\Sigma}_*$ is the plane $\mathbb R^2$ with an open disk of radius at most $1/8$ removed from each point with integer coordinates; \item the components of the preimage of $l^*_1$ ($l^*_2$, respectively) are the vertical lines with even (odd, respectively) integer $x$-coordinate; and \item the components of the preimage of $l'$ ($l''$, respectively) are the horizontal lines with even (odd, respectively) integer $y$-coordinate. \end{enumerate} We put a lift of $d^-_{1}$ at the origin. Denote by $\mathbb{Q}_{\mathrm{odd}}$ the set of irreducible rational numbers with odd denominators. Then the set $\mathbb{Q}_{\mathrm{odd}}$ is in one-to-one correspondence with the set of (isotopy classes of) essential arcs $\alpha = \alpha_{r/s}$ on $\Sigma_*$ such that both endpoints of $\alpha$ lie in $d^-_{1}$ and $\alpha$ cuts off an annulus one of whose boundary circles is either $d^+_2$ or $d^-_2$, as follows. For each $r/s \in \mathbb{Q}_{\mathrm{odd}}$, let $\widetilde{\beta} = \widetilde{\beta}_{r/s}$ be the line segment of slope $r/s$ connecting the origin and the point $(s, r)$ in $\mathbb R^2$ such that $\widetilde{\beta} \cap \widetilde{\Sigma}_*$ is a single arc properly embedded in $\widetilde{\Sigma}_*$ (by assuming the open disks removed are sufficiently small). Then the image $\beta$ of the arc $\widetilde{\beta} \cap \widetilde{\Sigma}_*$ is a simple arc in $\Sigma_*$ connecting $d_{1}^-$ and one of $d^+_2$ or $d^-_2$, say $d^+_2$. Then the corresponding essential arc $\alpha = \alpha_{r/s}$ is the frontier of a regular neighborhood of the union $\beta \cup d^+_2$.
Recall that our arc $\alpha \subset \Sigma_*$ is the intersection of $\Sigma_*$ and an outermost subdisk $C$ of $F$ cut off by $D_1 \cup D_2$. Let $r/s$ ($s>0$) be the element of $\mathbb{Q}_{\mathrm{odd}}$ corresponding to $\alpha$. We shall prove that $r/s = 0$, which implies that the disk from surgery on $D_1 \cup D_2$ along $F$ is $D_*$. To show this, we will check that every case with $r \neq 0$ violates the assumption that $F$ is primitive.
Figure \ref{fig:parameters} illustrates the patterns of $\Sigma_* \cap (\partial E' \cup \partial E'_0)$ on the 4-holed sphere $\Sigma_*$. \begin{center} \begin{overpic}[width=5cm,clip]{parameters.eps}
\linethickness{3pt} \put(5, 135){$d^+_1$} \put(5, 3){$d^-_1$} \put(125, 135){$d^+_2$} \put(125, 3){$d^-_2$} \put(10, 45){\small $k_1$} \put(37, 23){\small $k_2$} \put(138, 75){\small $k_3$} \put(108, 75){\small $k_4$} \put(90, 59){\small $k_5$} \put(65, 35){\small $k_6$} \put(54, 122){\small $k_7$} \end{overpic} \captionof{figure}{The patterns of $\Sigma_* \cap (\partial E' \cup \partial E'_0)$.} \label{fig:parameters} \end{center}
In the figure, each oriented bold arc, which we denote by $\gamma'$, with a weight $k_i$ ($i \in \{ 1, 2, \ldots, 7 \} $), indicates the union of some arcs $\gamma'_1$, $\gamma'_2 , \ldots, \gamma'_l$ of $\Sigma_* \cap \partial E'_0$ on $\Sigma_*$ parallel to $\gamma'$ such that $k_i = \sum_{j=1}^l [\gamma'_j]$ in $H_1(\Sigma_* , \partial \Sigma_*; \mathbb{Z})$, after equipping each $\gamma'_j$ with the orientation compatible with that of $\partial E'_0$. Between the arcs with weights $k_1$ and $k_2$ (and also between the arcs with weights $k_5$ and $k_6$), we have $m_1 + 1$ oriented arcs $\gamma_1$, $\gamma_2, \ldots, \gamma_{m_1 +1}$ of $\Sigma_* \cap \partial E'$, and between $\gamma_i$ and $\gamma_{i+1}$ for $i \in \{1, 2, \ldots, m_1\}$, we have exactly $\bar{q}$ parallel arcs of $\Sigma_* \cap \partial E_0'$ on $\Sigma_*$ in the same direction. Similarly, between the arcs with weights $k_3$ and $k_4$, we have $m_2 - m_1$ oriented arcs of $\Sigma_* \cap \partial E'$, and between any two consecutive arcs of them, we have exactly $\bar{q}$ parallel arcs of $\Sigma_* \cap \partial E_0'$ on $\Sigma_*$ in the same direction. By Figure \ref{fig:L_or_R-replacement} we have the following linear equations: \begin{eqnarray} \label{eq:solution of the linear equation} \left\{ \begin{array}{ll} k_1 + k_2 = n_1\\ k_5 + k_6 = n_1\\ k_1 + k_3 + k_7 = n_2\\ k_4 + k_6 + k_7 = n_2\\ k_2 + k_4 = \bar{q}\\ k_3 + k_5 = \bar{q} \end{array} \right. \end{eqnarray} Solving these equations, we see that there exist non-negative integers $a$ and $b$ such that $k_1 = a$, $k_2 = n_1 - a$, $k_3 = n_2 - a - b$, $k_4 = \bar{q} - n_1 + a$, $k_5 = \bar{q} - n_2 + a + b$, $k_6 = n_1 + n_2 - \bar{q} - a - b$, $k_7 = b$.
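Indeed, once $k_1 = a$ and $k_7 = b$ are chosen, the first, third, fifth and sixth equations determine $k_2 = n_1 - a$, $k_3 = n_2 - a - b$, $k_4 = \bar{q} - k_2 = \bar{q} - n_1 + a$ and $k_5 = \bar{q} - k_3 = \bar{q} - n_2 + a + b$, and then the second equation gives $k_6 = n_1 - k_5 = n_1 + n_2 - \bar{q} - a - b$; the remaining equation $k_4 + k_6 + k_7 = n_2$ is then satisfied automatically.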
\noindent \textit{Case }1. $r > s$.
In this case with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains both of the terms $x y^{\bar{q}} x$ and $x y^{n_2} x$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See Figure \ref{fig:subarcs_1}. \begin{center} \begin{overpic}[width=12cm,clip]{subarcs_1.eps} \linethickness{3pt} \put(10, 140){$d^+_1$} \put(10, 15){$d^-_1$} \put(120, 140){$d^+_2$} \put(120, 15){$d^-_2$} \put(98, 75){\color{red} $\alpha$} \put(60, 156){\small $x y^{n_2}x$} \put(212, 140){$d^+_1$} \put(212, 15){$d^-_1$} \put(322, 140){$d^+_2$} \put(322, 15){$d^-_2$} \put(299, 83){\color{red} $\alpha$} \put(270, 0){\small $x y^{n_2}x$} \end{overpic} \captionof{figure}{The bold part of $\alpha$ determines the word $x y^{n_2} x$.} \label{fig:subarcs_1} \end{center} By Lemma \ref{lem:key3} this implies that $F$ is not a primitive disk, whence a contradiction.
\noindent \textit{Case }2. $r = s$.
In this case the disk from surgery on $D_1 \cup D_2$ along $F$ is $D_0$, which is impossible by the assumption of the induction.
\noindent \textit{Case }3. $s > r > 0$. Suppose first that $1/2 > r/s$. If $r$ is odd, with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains the term $x y^{-k_3 + k_6} x^{-1}$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See the left-hand side in Figure \ref{fig:subarcs_2}. \begin{center} \begin{overpic}[width=12cm,clip]{subarcs_2.eps} \linethickness{3pt} \put(10, 140){$d^+_1$} \put(10, 15){$d^-_1$} \put(120, 140){$d^+_2$} \put(120, 15){$d^-_2$} \put(93, 71){\color{red} $\alpha$} \put(40, 156){\small $x y^{-k_3 + k_6} x^{-1}$} \put(212, 140){$d^+_1$} \put(212, 15){$d^-_1$} \put(322, 140){$d^+_2$} \put(322, 15){$d^-_2$} \put(296, 83){\color{red} $\alpha$} \put(250, 0){\small $x y^{-k_1 + k_4} x^{-1}$} \end{overpic} \captionof{figure}{Left: the bold part of $\alpha$ determines the word $x y^{-k_3 + k_6} x^{-1}$. Right: the bold part of $\alpha$ determines the word $x y^{-k_1 + k_4} x^{-1}$.} \label{fig:subarcs_2} \end{center} By the solution of the equation $(\ref{eq:solution of the linear equation})$, we have $- k_3 + k_6 = - \bar{q} + n_1 \geq 1$. By Lemma \ref{lem:key2} this implies that $F$ is not a primitive disk, whence a contradiction. If $r$ is even, with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains the term $x y^{-k_1 + k_4} x^{-1}$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See the right-hand side in Figure \ref{fig:subarcs_2}. By the solution of the equation $(\ref{eq:solution of the linear equation})$, we have $- k_1 + k_4 = \bar{q} - n_1 \leq -1$. Thus, by Lemma \ref{lem:key2}, $F$ is not a primitive disk, a contradiction.
Next suppose that $r/s > 1/2$. If $r$ is odd, with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains the term $x y^{k_1 -k_4} x^{-1}$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See the left-hand side in Figure \ref{fig:subarcs_3}. \begin{center} \begin{overpic}[width=12cm,clip]{subarcs_3.eps} \linethickness{3pt} \put(10, 140){$d^+_1$} \put(10, 15){$d^-_1$} \put(120, 140){$d^+_2$} \put(120, 15){$d^-_2$} \put(102, 97){\color{red} $\alpha$} \put(45, 0){\small $x y^{k_1 - k_4} x^{-1}$} \put(212, 140){$d^+_1$} \put(212, 15){$d^-_1$} \put(322, 140){$d^+_2$} \put(322, 15){$d^-_2$} \put(305, 57){\color{red} $\alpha$} \put(250, 156){\small $x y^{k_3 - k_6} x^{-1}$} \end{overpic} \captionof{figure}{Left: the bold part of $\alpha$ determines the word $x y^{k_1 - k_4} x^{-1}$. Right: the bold part of $\alpha$ determines the word $x y^{k_3 - k_6} x^{-1}$.} \label{fig:subarcs_3} \end{center} As above, we have $k_1 - k_4 = - \bar{q} + n_1 \geq 1$, whence a contradiction by Lemma \ref{lem:key2}. If $r$ is even, with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains the term $x y^{k_3 - k_6} x^{-1}$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See the right-hand side in Figure \ref{fig:subarcs_3}. As above we have $k_3 - k_6 = \bar{q} - n_1 \leq -1$, whence a contradiction by Lemma \ref{lem:key2}.
\noindent \textit{Case }4. $0 > r$. In this case with a suitable choice of an orientation of $\alpha$ the word corresponding to the arc $\alpha$ contains the term $x y^{k_1 -k_5 + k_7} x^{-1}$ after canceling pairs of $y$ and $y^{-1}$ if necessary. See Figure \ref{fig:subarcs_4}.
\begin{center} \begin{overpic}[width=12cm,clip]{subarcs_4.eps} \linethickness{3pt} \put(10, 140){$d^+_1$} \put(10, 15){$d^-_1$} \put(120, 140){$d^+_2$} \put(120, 15){$d^-_2$} \put(95, 82){\color{red} $\alpha$} \put(60, 156){\small $x y^{k_1 -k_5 + k_7} x^{-1}$} \put(212, 140){$d^+_1$} \put(212, 15){$d^-_1$} \put(322, 140){$d^+_2$} \put(322, 15){$d^-_2$} \put(299, 70){\color{red} $\alpha$} \put(250, -5){\small $x y^{k_1 -k_5 + k_7} x^{-1}$} \end{overpic} \captionof{figure}{The bold part of $\alpha$ determines the word $x y^{k_1 -k_5 + k_7} x^{-1}$.} \label{fig:subarcs_4} \end{center} As above, we have $k_1 -k_5 + k_7 = - \bar{q} + n_2 \leq -1$, whence a contradiction by Lemma \ref{lem:key2}. \end{proof}
\begin{lemma} \label{lem:primitive disks in a bridge} Let $\mathcal{B}_{ \{ D , E \} }$ be a bridge connecting primitive disks $D$ and $E$. Then $V_D = \operatorname{cl}(V- \operatorname{Nbd}(D))$ is not isotopic to $V_E = \operatorname{cl}(V- \operatorname{Nbd}(E))$ in $L(p,q)$. In particular, $D$ is of $(p,q)$-type if and only if $E$ is of $(p,q')$-type. \end{lemma} \begin{proof} Choose $E'$, $E_0$ and $E_0'$ as in the proof of Lemma \ref{lem:principal pair of a bridge}. Assigning symbols $x$ and $y$ to oriented $\partial E'$ and $\partial E'_0$ respectively, any oriented simple closed curve on $\partial W$ represents an element of the free group $\pi_1 (W) = \langle x, y \rangle$. Note that the natural projection \[ \varphi : H_1(W; \mathbb{Z}) = \mathbb{Z} [x] \oplus \mathbb{Z} [y] \to H_1(L(p,q); \mathbb{Z} ) = \mathbb{Z} / p \mathbb{Z} \] induced by the inclusion $W \hookrightarrow L(p,q)$ satisfies $\varphi ([x]) = 0$ and $\varphi ([y]) = \pm 1$.
By Lemmas \ref{lem:principal pair of a bridge} and \ref{lem:words for disks in a bridge}, the bridge $\mathcal{B}_{ \{ D , E \} }$ is obtained as in Lemma \ref{lem:construction of a bridge}. In particular, by the argument of Lemma \ref{lem:construction of a bridge}, the circle $\partial D$ determines the element of the form $(x y^{\bar{q}})^k xy^{ \bar{q} \pm 1}$ for some $k \in \mathbb{N}$ while $\partial E$ determines $x$ in $\pi_1 (W) = \langle x, y \rangle$.
Consider the exteriors $W_D := \operatorname{cl}( L(p,q ) - V_D)$ and $W_E := \operatorname{cl}( L(p,q ) - V_E)$ of $V_D$ and $V_E$ in $L(p,q)$, and let $l_D$ and $l_E$ be the cores of those solid tori $W_D$ and $W_E$ respectively. It suffices to show that their homology classes $[l_D]$ and $[l_E]$ differ in $H_1 (L(p,q) ; \mathbb{Z})$. We regard $H_1(W_D; \mathbb{Z})$ and $H_1(W_E ; \mathbb{Z})$ as subgroups of $H_1(W ; \mathbb{Z})$ in a natural way. It is then easy to see from the construction that $H_1 (W_E ; \mathbb{Z}) = (\mathbb{Z}[x] \oplus \mathbb{Z} [y] )/ \mathbb{Z} [x] = \mathbb{Z} [y]$, which implies that $\pm [l_E] = \pm \varphi ([y]) = \pm 1 \in \mathbb{Z} / p \mathbb{Z}$. On the other hand, we have $H_1 (W_D ; \mathbb{Z}) = (\mathbb{Z} [x] \oplus \mathbb{Z} [y] )/ \mathbb{Z} ( (k+1) ([x] + \bar{q} [y]) \pm [y] ) = \mathbb{Z} ([x] + \bar{q} [y] )$, which implies that $\pm [l_D] = \pm \varphi ( [x] + \bar{q} [y]) = \pm \bar{q} \in \mathbb{Z} / p \mathbb{Z}$. Since $\bar{q} \not\equiv \pm 1 \pmod p$, we have $[l_D] \neq [l_E]$ in $H_1(L(p , q) ; \mathbb{Z})$. This completes the proof. \end{proof}
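For example, in the case of $L(12, 5)$ treated in Section \ref{sec:first_example} we have $\bar{q} = 5$, so the above computation gives $\pm [l_E] = \pm 1$ and $\pm [l_D] = \pm 5$ in $\mathbb{Z} / 12 \mathbb{Z}$, and indeed $5 \not\equiv \pm 1 \pmod{12}$.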
We recall that the primitive disk complex $\mathcal P(V)$ consists of infinitely many tree components, and given any vertex $E$ of $\mathcal P(V)$, there are infinitely many shells $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$ centered at $E$ by the choice of a semiprimitive disk $E_0$ and the choice of a primitive disk $E_1$. Thus, by Lemma \ref{lem:construction of a bridge}, for each vertex $E$ of $\mathcal P(V)$ there are infinitely many bridges having $E$ as an end vertex. Further, we have the following description of the bridges.
\begin{lemma} \label{lem:uniqueness of a bridge} Any two bridges are isomorphic to each other. Any two bridges are either disjoint from each other or intersect only in an end vertex. \end{lemma}
\begin{proof} By Lemma \ref{lem:construction of a bridge}, given any $(p, q)$-shell $\mathcal S_E = \{E_0, E_1, \ldots, E_p\}$, there exists a bridge $\mathcal B_{\{D, E\}} = \{\Delta_1, \Delta_2, \ldots, \Delta_n\}$ such that $\Delta_1 = \{E, E_m, E_{m+1}\}$. By Lemma \ref{lem:words for disks in a bridge}, such a bridge is unique. That is, if any bridge has $\Delta_1$ as its first $2$-simplex, then it is exactly $\mathcal B_{\{D, E\}}$. Given any other bridge $\mathcal B_{\{ \bar{D}, \bar{E}\}}$, by Lemma \ref{lem:primitive disks in a bridge}, we may assume that one of $\bar{D}$ and $\bar{E}$, say $\bar{E}$, is of $(p, q)$-type. By Lemma \ref{lem:principal pair of a bridge}, there exists a $(p, q)$-shell $\mathcal S_{\bar{E}}$ containing the first $2$-simplex of $\mathcal B_{\{ \bar{D}, \bar{E}\}}$. Thus $\mathcal B_{\{ \bar{D}, \bar{E}\}}$ is isomorphic to $\mathcal B_{\{D, E\}}$. The second statement is also a direct consequence of Lemmas \ref{lem:principal pair of a bridge} and \ref{lem:words for disks in a bridge}. \end{proof}
\subsection{The tree of trees} We again assume that $(V, W; \Sigma)$ is the genus-$2$ Heegaard splitting of a lens space $L = L(p, q)$ with $1 \leq q \leq p/2$ and $p \not\equiv \pm 1 \pmod q$. So far, we have seen that for any vertex $E$ of $\mathcal P(V)$, there are infinitely many bridges of which $E$ is an end vertex, and further: \begin{itemize} \item any two bridges are isomorphic to each other, \item any two bridges are disjoint from each other or intersect only in an end vertex, \item any bridge connects exactly two tree components of $\mathcal P(V)$, and \item any two tree components of $\mathcal P(V)$ are connected by at most a single bridge. \end{itemize}
\begin{center} \begin{overpic}[width=12cm,clip]{bridge.eps} \linethickness{3pt} \put(85, 48){$D$} \put(248, 48){$E$} \put(74, 0){$\mathcal T_1$} \put(262, 0){$\mathcal T_2$} \end{overpic} \captionof{figure}{The unique bridge connecting the tree components $\mathcal{T}_1$ and $\mathcal{T}_2$ of $\mathcal P(V)$.} \label{fig:bridge} \end{center}
Thus, by shrinking each of the tree components of $\mathcal P(V)$ to a vertex, and each of the bridges to an edge connecting the two end vertices, we have a tree $\mathcal T^{\mathcal T} (V)$, which we call the {\it ``tree of trees''} for the splitting $(V, W; \Sigma)$. We note that each vertex of $\TT (V)$ has infinite valency. Since the action of the Goeritz group $\mathcal{G}$ of the splitting $(V, W; \Sigma)$ preserves the set of vertices of $\mathcal P(V)$ and the set of bridges, the action of $\mathcal{G}$ on $\mathcal{D}(V)$ naturally induces a simplicial action of $\mathcal{G}$ on $\TT(V)$.
\section{Example: the lens space $L(12, 5)$} \label{sec:first_example}
Let $L(p, q)$ be a lens space with $1 \leq q \leq p/2$. Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of $L(p, q)$. As we have seen in Theorem \ref{thm:contractibility}, the primitive disk complex $\mathcal P(V)$ is contractible if and only if $p \equiv \pm 1 \pmod{q}$. This implies that the primitive disk complex $\mathcal P(V)$ is contractible for every lens space $L(p,q)$ with $p < 12$. In this section, we focus on the lens space $L(12,5)$: the ``smallest'' lens space with disconnected primitive disk complex. Recall that in this case, the primitive disk complex $\mathcal{P}(V)$ consists of infinitely many tree components. We describe the combinatorial structure of the primitive disk complex and the bridges, and outline how to obtain a presentation of the Goeritz group. The argument in this section will be generalized in the next section to every $L(p,q)$, $1 \leq q \leq p/2$, with $p \not\equiv \pm 1 \pmod{q}$.
Let $(V, W; \Sigma)$ be the genus-$2$ Heegaard splitting of $L(12, 5)$. Let $E$ be a primitive disk in $V$. Since $5^2 \equiv 1 \pmod{12}$, we have $q = q'$ in this case (recall that $q'$ is the unique integer satisfying $1 \leq q' \leq p/2$ and $qq' \equiv \pm 1 \pmod p$), and hence $E$ is always of $(12,5)$-type. Let $\mathcal S_E = \{E_0, E_1, \ldots, E_{12}\}$ be a shell centered at $E$. Let $E'$ be the unique dual disk of $E$ disjoint from $E_0$, and let $E_0'$ be the unique semiprimitive disk in $W$ disjoint from $E$. Lemma \ref{lem:construction of a bridge} says that there exists a bridge $\mathcal{B}_{\{ D , E \}} = \{ \Delta_1 , \Delta_2 , \ldots , \Delta_n \}$ connecting $E$ and a certain primitive disk $D$ in $V$ with $\Delta_1 = \{ E, E_2 , E_3 \}$. We can easily construct that bridge as follows. The left-hand side in Figure \ref{fig:construction_of_a_bridge} depicts the 4-holed sphere $\partial V$ cut off by $E \cup E_2$. We denote by $e^\pm$ and $e_2^\pm$ its boundary circles coming from $\partial E$ and $\partial E_2$ respectively.
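We remark that the subscripts $2$ and $3$ here come from the division $12 = 5 \cdot 2 + 2$: in the notation of Lemma \ref{lem:construction of a bridge} we have $m = 2$ and $r = 2$ (so that $2 \leq r \leq q - 2$ holds), and hence the first $2$-simplex of the bridge is $\{ E, E_m, E_{m+1} \} = \{ E, E_2, E_3 \}$.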
\begin{center} \begin{overpic}[width=14cm,clip]{construction_of_a_bridge.eps}
\linethickness{3pt}
\put(100, 127){\small $e^+$}
\put(159, 110){\small $e^-$}
\put(91, 90){$e_2^+$}
\put(20, 30){$e_2^-$}
\put(96, 117){\small $a$}
\put(112, 135){\small $b$}
\put(154, 120){\small $a$}
\put(169, 100){\small $b$}
\put(100, 18){\small {\color{blue}$\partial D$}}
\put(100, 33){\small $\partial E_3$}
\put(100, 46){\small {\color{blue}$\partial D$}}
\put(240, 145){$e_2^+$}
\put(240, 30){$e_2^-$}
\put(370, 145){$e_3^+$}
\put(370, 30){$e_3^-$}
\put(312, 160){$\partial E$}
\put(222, 90){{\color{blue}$\partial D$}}
\put(90, -5){\Large (a)}
\put(305, -5){\Large (b)} \end{overpic} \captionof{figure}{(a) The $4$-holed sphere $\partial V$ cut off by $E_2 \cup E$. (b) The $4$-holed sphere $\partial V$ cut off by $E_2 \cup E_3$.} \label{fig:construction_of_a_bridge} \end{center}
In Figure \ref{fig:construction_of_a_bridge} (a), $\partial E_0'$ separates the 4-holed sphere into 12 rectangles, and $\partial E_3$ appears as 3 segments. Assigning symbols $x$ and $y$ to $\partial E'$ and $\partial E'_0$ with appropriate orientations respectively, the simple closed curves $\partial E_2$ and $\partial E_3$ (with appropriate orientations) represent the elements $xy^5 xy^{7}$ and $(xy^5)^2 xy^2$ respectively, of the free group $\pi_1 (W) = \langle x, y \rangle$. Let $D$ be a disk whose boundary circle is described in the figure. The disk $D$ intersects $E$ transversely in an arc, and the simple closed curve $\partial D$ represents the element $(xy^5)^4 xy^4$ in $\pi_1 (W)$. This is a primitive element of $\pi_1 (W)$; see e.g. Osborne-Zieschang \cite{OZ}. Hence, $D$ is a primitive disk in $V$ by Lemma \ref{lem:primitive_element}. We can see the disk $D$ using Figure \ref{fig:construction_of_a_bridge} (b) as well. The figure illustrates the 4-holed sphere $\partial V$ cut off by $E_2 \cup E_3$ instead of $E \cup E_2$. As a consequence, setting $\Delta_1 = \{ E, E_2, E_3 \}$ and $\Delta_2 = \{ E_2, E_3, D \}$, the pair $\{ \Delta_1, \Delta_2 \}$ forms the bridge $\mathcal{B}_{ \{ D, E \} }$ connecting $D$ and $E$, see Figure \ref{fig:bridge_L12_5}.
\begin{center} \begin{overpic}[width=3.5 cm,clip]{bridge_L12_5.eps}
\linethickness{3pt}
\put(-1, 36){$D$}
\put(90, 36){$E$}
\put(45, 72){$E_3$}
\put(45, 0){$E_2$} \end{overpic} \captionof{figure}{The bridge $\mathcal{B}_{ \{ D, E \} }$.} \label{fig:bridge_L12_5} \end{center}
We note that by Lemmas \ref{lem:construction of a bridge}, \ref{lem:principal pair of a bridge} and \ref{lem:uniqueness of a bridge}, every bridge in $\mathcal{D}(V)$ for $L(12,5)$ is constructed in this way. In particular, every bridge is isomorphic to $\mathcal{B}_{ \{ D, E \} }$ above. The subcomplex $\mathcal{P}(V) \cup (\bigcup_{\mathcal{B}} \mathcal{B})$ of $\mathcal D(V)$, where $\mathcal{B}$ runs over all bridges in $\mathcal{D}(V)$, is connected. Moreover, since the dual of the disk complex $\mathcal{D}(V)$ is a tree, $\mathcal{P}(V) \cup (\bigcup_{\mathcal{B}} \mathcal{B})$ is contractible. This allows us to define a tree $\TT(V)$ as follows. The vertices of $\TT(V)$ are the components of $\mathcal{P} (V)$, and two vertices $\mathcal{T}_1$ and $\mathcal{T}_2$ of $\TT(V)$ span an edge if and only if there exists a bridge $\mathcal{B}_{\{ D, E \}}$ with $E \in \mathcal{T}_1$ and $D \in \mathcal{T}_2$. See Figure \ref{fig:TT_L12_5}. Note that $\TT(V)$ is not locally finite.
The action of the Goeritz group $\mathcal{G}$ on the disk complex $\mathcal{D}(V)$ induces, in a natural way, a simplicial action of $\mathcal{G}$ on $\TT(V)$ as well. Let $\mathcal{B}_{\{D, E\}}$ be a bridge (so an edge of $\TT(V)$) connecting two tree components $\mathcal{T}_1$ and $\mathcal{T}_2$ with $E \in \mathcal{T}_1$ and $D \in \mathcal{T}_2$. In the next section, we will show the following in a general setting: \begin{enumerate} \item The action of $\mathcal{G}$ on the set of vertices $($edges, respectively$)$ of $\TT(V)$ is transitive (cf. Lemma \ref{lem:transitivity on the vertices of TT}). \item There exists an element $\tau$ of $\mathcal{G}$ that preserves the bridge $\mathcal{B}_{\{ D , E \} }$ but exchanges $D$ and $E$ (cf. Lemma \ref{lem:stabilizers of edges of TT} (\ref{item:stabilizers of edges of TT (1)})). \end{enumerate}
Let $(\TT)'$ be the first barycentric subdivision of $\TT$. By Lemmas \ref{lem:transitivity on the vertices of TT} and \ref{lem:stabilizers of edges of TT}, the Goeritz group $\mathcal{G}$ acts on the set of edges of $(\TT)'$ transitively without inverting edges, and the two endpoints of each edge belong to different orbits of vertices under the action of $\mathcal{G}$. Hence, the quotient of $(\TT)'$ by the action of $\mathcal{G}$ is a single edge with two vertices. Now we can use the Bass-Serre theory (cf. Theorem \ref{thm:theorem by Brown}) for this group action, and hence, we can express $\mathcal{G}$ as the following amalgamated free product: \[ \mathcal{G}_{\{ \mathcal{T}_1 \}} *_{\mathcal{G}_{ \{ \mathcal{T}_1 , \mathcal{T}_2 \} }} \mathcal{G}_{\{ \mathcal{T}_1 \cup \mathcal{T}_2 \}} . \] Here by $\mathcal{G}_{\{ \mathcal{T}_1 \}}$, $\mathcal{G}_{\{ \mathcal{T}_1 \cup \mathcal{T}_2 \}}$ and $\mathcal{G}_{\{ \mathcal{T}_1 , \mathcal{T}_2 \}} $ we mean the isotropy subgroups of $\mathcal{G}$ with respect to the vertex $\mathcal{T}_1$, the unordered pair of $\mathcal{T}_1$ and $\mathcal{T}_2 $, and the ordered pair of $\mathcal{T}_1$ and $\mathcal{T}_2 $ respectively. In the next section, we give finite presentations of these 3 subgroups (cf. Lemmas \ref{lem:stabilizers of vertices of TT} and \ref{lem:stabilizers of edges of TT}). In this way, we obtain a presentation of the Goeritz group $\mathcal{G}$.
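We note that, since $5^2 \equiv 1 \pmod{12}$, the lens space $L(12,5)$ falls into the first case of Theorem \ref{thm:presentations of the Goeritz groups for non-connected case} below, so the presentation obtained in this way will be \[ \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma, \sigma_1, \sigma_2 , \tau \mid {\gamma}^2, {\sigma_1}^2, {\sigma_2}^2, \tau^2 \rangle. \]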
\begin{center} \begin{overpic}[width=14cm,clip]{TT_L12_5.eps}
\linethickness{3pt}
\put(87, 185){$\mathcal T_2$}
\put(87, 139){$\mathcal T_1$}
\put(95, 173){\small $D$}
\put(95, 151){\small $E$}
\put(50, 20){$\mathcal{P}(V) \cup (\bigcup_{\mathcal{B}} \mathcal{B})$}
\put(300, 20){$\TT(V)$} \end{overpic} \captionof{figure}{The subcomplex $\mathcal{P}(V) \cup (\bigcup_{\mathcal{B}} \mathcal{B})$ of $\mathcal{D}(V)$, and the tree $\TT(V)$.} \label{fig:TT_L12_5} \end{center}
\section{The mapping class groups of the genus-$2$ Heegaard splittings for lens spaces} \label{sec:main_section}
Let $\mathcal G$ be the genus-$2$ Goeritz group of a lens space $L(p,q)$ with $1 \leq q \leq p/2$, and let $(V, W; \Sigma)$ be a genus-$2$ Heegaard splitting of $L(p,q)$. Throughout the section, we will assume that $p \not\equiv \pm 1 \pmod q$, and we will fix the following: \begin{itemize} \item A primitive disk $E$ of $(p,q)$-type in $V$; \item A $(p, q)$-shell $\mathcal{S}_E = \{E_0, E_1, \ldots, E_p \}$ centered at $E$; \item The unique $(p, q')$-shell $\mathcal{S}_C = \{ C_0, C_1, \ldots, C_p \}$ centered at $C = E_{q'}$ such that $E=C_q$ (cf. Lemma \ref{lem:sequence} (3)); \item The component $\mathcal{T}_1$ of $\mathcal{P}(V)$, which is a tree, that contains $E$ (and so $C$); \item The unique bridge $\mathcal{B}_{\{ D, E \}} = \{ \Delta_1 , \Delta_2, \ldots, \Delta_n \}$, with $\Delta_1 = \{E, E_m, E_{m+1}\}$ where $m$ is the integer satisfying $p = qm + r$ for $2 \leq r \leq q-2$. Note that $D$ is of $(p,q')$-type; \item The component $\mathcal{T}_2$ of $\mathcal{P}(V)$ that contains $D$. \end{itemize} See Figure \ref{fig:setting}.
\begin{center} \begin{overpic}[width=13cm,clip]{setting.eps}
\linethickness{3pt}
\put(132, 218){$D$}
\put(163, 150){\large $\mathcal B_{\{D, E\}}$}
\put(22, 120){$E_0$}
\put(68, 120){$E_1$}
\put(97, 120){$E_m$}
\put(162, 120){$E_{m+1}$}
\put(192, 120){$E_{q'} = C$}
\put(340, 120){$E_p$}
\put(180, 10){$E = C_{q}$} \end{overpic} \captionof{figure}{The primitive disks $E$, $C$, and $D$.} \label{fig:setting} \end{center}
We use the above four primitive disks $E$, $C$, $E_1$, $C_1$ to describe the orbits of the action of the Goeritz group $\mathcal G$ on the sets of primitive disks and primitive pairs.
\begin{lemma}[Lemmas 5.2 and 5.3 \cite{CK15b}] \label{lem:number of orbits of primitive disks and pairs}
\begin{enumerate} \item If $q^2 \equiv 1 \pmod p$, the action of the Goeritz group $\mathcal{G}$ on the set of vertices of the primitive disk complex $\mathcal P(V)$ is transitive. Further, the action of $\mathcal{G}$ on the set of edges of $\mathcal P(V)$ has exactly $2$ orbits $\mathcal{G} \cdot \{ E, C \}$ and $\mathcal{G} \cdot \{ E, E_1 \}$. The two end points of each of the edges $\{ E, C \}$ and $\{ E, E_1 \}$ can be exchanged by the action of $\mathcal{G}$. \item If $q^2 \not\equiv 1 \pmod p$, the action of $\mathcal{G}$ on the set of vertices of $\mathcal P(V)$ has exactly two orbits $\mathcal{G} \cdot \{E\}$ and $\mathcal{G} \cdot \{C\}$. Further, the action of $\mathcal{G}$ on the set of edges of $\mathcal P(V)$ has exactly $3$ orbits $\mathcal{G} \cdot \{ E, C \}$, $\mathcal{G} \cdot \{ E, E_1 \}$ and $\mathcal{G} \cdot \{ C, C_1 \}$. The two end points of each of the edges $\{ E, E_1 \}$ and $\{ C, C_1 \}$ can be exchanged by the action of $\mathcal{G}$ whereas those of $\{ E, C \}$ cannot. \end{enumerate} \end{lemma}
\begin{lemma} \label{lem:transitivity on shells} Let $A$ and $B$ be primitive disks in $V$. Let $\mathcal{S}_A = \{ A_0, A_1, \ldots , A_p \}$ and $\mathcal{S}_B = \{ B_0, B_1, \ldots , B_p \}$ be shells centered at $A$ and $B$ respectively. Suppose that there exists an element of $\mathcal{G}$ that maps $B$ to $A$. Then there exists an element $\varphi$ of $\mathcal{G}$ satisfying $\varphi (B) = A$ and $\varphi (B_i) = A_i$ $(i \in \{ 0, 1, \ldots , p \})$. \end{lemma}
\begin{lemma} \label{lem:transitivity on the vertices of TT} The action of $\mathcal{G}$ on the set of vertices $($edges, respectively$)$ of $\TT(V)$ is transitive. \end{lemma} \begin{proof} If $q^2 \equiv 1 \pmod p$, the action of $\mathcal{G}$ is transitive on the set of primitive disks by Lemma \ref{lem:number of orbits of primitive disks and pairs} (1), which implies that $\mathcal{G}$ acts transitively on the set of vertices of $\TT(V)$. If $q^2 \not\equiv 1 \pmod p$, each connected component of $\mathcal{P}(V)$ contains vertices of both $(p,q)$ and $(p,q')$-types. Thus, it follows from Lemma \ref{lem:number of orbits of primitive disks and pairs} (2) that $\mathcal{G}$ acts transitively on the set of vertices of $\TT(V)$.
Let $\mathcal{B}_{\{ \bar{D} , \bar{E} \}} = \{ \bar{\Delta}_1 , \bar{\Delta}_2 , \ldots , \bar{\Delta}_n \}$ be an arbitrary bridge. By Lemma \ref{lem:primitive disks in a bridge}, we can assume without loss of generality that $\bar{D} \in \mathcal{G} \cdot D$ and $\bar{E} \in \mathcal{G} \cdot E$. Let $\{ E_* , E_{**} \}$ ($\{ \bar{E}_* , \bar{E}_{**} \}$, respectively) be the principal pair of $E$ ($\bar{E}$, respectively) with respect to $D$ ($\bar{D}$, respectively). Then by Lemma \ref{lem:principal pair of a bridge}, we have $\{ E_* , E_{**} \} = \{ E_m , E_{m+1} \}$, and also there exists a $(p, q)$-shell $\mathcal S_{\bar{E}} = \{\bar{E}_0, \bar{E}_1, \ldots, \bar{E}_p\}$ centered at $\bar{E}$ such that $\{ \bar{E}_* , \bar{E}_{**} \} = \{ \bar{E}_m , \bar{E}_{m+1} \}$. By Lemma \ref{lem:transitivity on shells}, there exists an element $\varphi$ of the Goeritz group $\mathcal{G}$ satisfying $\varphi(\bar{E}) = E$ and $\varphi(\bar{E}_i) = E_i$ for $i \in \{ 0 , 1 , \ldots , p \}$. Then $\varphi$ maps the bridge $\mathcal{B}_{\{\bar{D}, \bar{E}\}}$ to another bridge $\varphi (\mathcal{B}_{\{\bar{D}, \bar{E}\}}) = \mathcal{B}_{\{\varphi (\bar{D}), E \}} = \{ \varphi (\bar{\Delta}_1) , \varphi (\bar{\Delta}_2) , \ldots , \varphi (\bar{\Delta}_n) \}$. Since $\Delta_1 = \{ E, E_m , E_{m+1} \} = \{ \varphi (\bar{E}) , \varphi (\bar{E}_m), \varphi (\bar{E}_{m+1}) \} = \varphi (\bar{\Delta}_1)$, we have $\mathcal{B}_{\{ D , E \}} = \varphi (\mathcal{B}_{\{ \bar{D} , \bar{E} \}} )$ by Lemma \ref{lem:uniqueness of a bridge}. This completes the proof. \end{proof}
To obtain a finite presentation of the Goeritz group $\mathcal{G}$, we use the following well-known theorem. \begin{theorem}[Serre \cite{S}] \label{thm:theorem by Brown} Suppose that a group $G$ acts on a tree $\mathcal{T}$ without inversion on the edges. If there exists a subtree $\mathcal{L}$ of $\mathcal{T}$ such that every vertex $($every edge, respectively$)$ of $\mathcal{T}$ is equivalent modulo $G$ to a unique vertex $($a unique edge, respectively$)$ of $\mathcal{L}$, then $G$ is the free product of the isotropy groups $G_v$ of the vertices $v$ of $\mathcal{L}$, amalgamated along the isotropy groups $G_e$ of the edges $e$ of $\mathcal{L}$. \end{theorem}
In the following we will denote by $\mathcal{G}_{\{X_1 , X_2 , \ldots , X_k \}}$ the subgroup of the genus-2 Goeritz group $\mathcal{G}$ consisting of elements that preserve each of $X_1$, $X_2 , \ldots , X_k$ setwise, where each $X_i$ will be a subcomplex of $\mathcal{D}(V)$.
\begin{lemma} \label{lem:stabilizer of a primitive disks and pairs} \begin{enumerate} \item Let $A$ be an arbitrary primitive disk in $V$. Then we have $\mathcal{G}_{\{ A \}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma \mid {\gamma}^2 \rangle$, where $\alpha$ is the hyperelliptic involution of both $V$ and $W$, $\beta$ is the half-twist along a reducing sphere, and $\gamma$ exchanges two disjoint dual disks of $A$ as described in Figure ${\rm \ref{fig:isotropy_of_a_primitive_disk}}$. \item Let $\{A, B \}$ be an edge of the primitive disk complex $\mathcal P(V)$. Then we have $\mathcal{G}_{ \{ A, B \} } = \langle \alpha \mid \alpha^2 \rangle$. If the two end points of the edge $\{ A, B \}$ can be exchanged by the action of $\mathcal{G}$, then we have $\mathcal{G}_{ \{ A \cup B \} } = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \sigma \mid \sigma^2 \rangle$, where $\sigma$ is an element of $\mathcal G$ exchanging $A$ and $B$. Otherwise, we have $\mathcal{G}_{ \{ A \cup B \} } = \langle \alpha \mid \alpha^2 \rangle$. \end{enumerate} \end{lemma}
\begin{center} \begin{overpic}[width=14cm, clip]{isotropy_of_a_primitive_disk.eps}
\linethickness{3pt}
\put(52,0){(a)}
\put(30,25){\footnotesize $W$}
\put(52,32){\footnotesize {\color{blue}$B'$}}
\put(81,23){\footnotesize {\color{red}$\partial A$}}
\put(109,37){\footnotesize {\color{blue}$A'$}}
\put(135,43){\footnotesize $\pi$}
\put(115,28){\footnotesize $\alpha = \alpha'$}
\put(198,0){(b)}
\put(178,25){\footnotesize $W$}
\put(227,23){\footnotesize {\color{red}$\partial A$}}
\put(255,37){\footnotesize {\color{blue}$A'$}}
\put(281,43){\footnotesize $\pi$}
\put(268,28){\footnotesize $\beta$}
\put(330,0){(c)}
\put(330,15){\footnotesize $W$}
\put(334,43){\footnotesize {\color{blue}$B'$}}
\put(381,43){\footnotesize {\color{blue}$A'$}}
\put(387,60){\footnotesize {\color{red}$\partial A$}}
\put(360,95){\footnotesize $\pi$}
\put(372,84){\footnotesize $\gamma$} \end{overpic} \captionof{figure}{Generators of $\mathcal{G}_{\{ A \}}$.} \label{fig:isotropy_of_a_primitive_disk} \end{center}
\begin{lemma} \label{lem:stabilizers of vertices of TT} \begin{enumerate} \item \label{item:stabilizers of vertices of TT (1)} If $q^2 \equiv 1 \pmod p$, we have $\mathcal{G}_{\{\mathcal{T}_1\}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma, \sigma_1, \sigma_2 \mid {\gamma}^2, {\sigma_1}^2, {\sigma_2}^2 \rangle$. \item \label{item:stabilizers of vertices of TT (2)} If $q^2 \not\equiv 1 \pmod p$, we have $\mathcal{G}_{\{\mathcal{T}_1\}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta_1, \beta_2, \gamma_1, \gamma_2, \sigma_1, \sigma_2 \mid {\gamma_1}^2, {\gamma_2}^2, {\sigma_1}^2, {\sigma_2}^2 \rangle$. \end{enumerate} \end{lemma} \begin{proof} \noindent (\ref{item:stabilizers of vertices of TT (1)}) Suppose $q^2 \equiv 1 \pmod p$. Since the argument is almost the same as that of Theorem 5.7 (2)-(c) in \cite{CK15b}, we only give an outline. A local part of $\mathcal{T}_1$ containing vertices $E$, $C$, $E_1$ and $C_1$ is illustrated in Figure \ref{fig:primitive_disk_complex_3} (a).
\begin{center} \begin{overpic}[width=14cm, clip]{primitive_disk_complex_3}
\linethickness{3pt}
\put(56,0){(a)}
\put(40,42){\small $E_1$}
\put(-3,72){\small $C_1$}
\put(69,64){\small $E = C_q$}
\put(40,85){\small $C = E_q$}
\put(65,45){\small $E_{p-1}$}
\put(0,106){\small $C_{p-1}$}
\put(42,111){\small $C_{p-q}$}
\put(97,83){\small $E_{p-q}$}
\put(218,0){(b)}
\put(232,63){$E$}
\put(203,85){$C$}
\put(354,0){(c)}
\put(315,52){$\{E, C\} $}
\put(355,52){$E$}
\put(373,52){$\{ E , E_1 \}$} \end{overpic} \captionof{figure}{(a) The tree component $\mathcal{T}_1$. (b) The tree $\mathcal{T}_1'$. (c) The quotient $\mathcal{T}_1' / \mathcal{G}_{\{\mathcal{T}_1\}}$.} \label{fig:primitive_disk_complex_3} \end{center}
Let $\mathcal{T}_1'$ be the first barycentric subdivision of $\mathcal{T}_1$, which is described in Figure \ref{fig:primitive_disk_complex_3} (b).
By Lemma \ref{lem:number of orbits of primitive disks and pairs}, the quotient of $\mathcal{T}_1'$ by the action of $\mathcal{G}_{\{\mathcal{T}_1\}}$ is the path graph on three vertices as illustrated in Figure \ref{fig:primitive_disk_complex_3} (c). By Theorem \ref{thm:theorem by Brown}, we can express $\mathcal{G}_{\{ \mathcal{T}_1 \}}$ as the following amalgamated free product: \[ \mathcal{G}_{\{ \mathcal{T}_1\} } = \mathcal{G}_{\{ E \cup C \}} *_{\mathcal{G}_{ \{ E, C \} }} \mathcal{G}_{\{ E \}} *_{\mathcal{G}_{ \{ E , E_1 \} }} \mathcal{G}_{\{ E \cup E_1 \}} . \] By Lemma \ref{lem:stabilizer of a primitive disks and pairs}, we obtain the required presentation.
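To see how the stated presentation arises from this amalgamated free product, note that by Lemma \ref{lem:stabilizer of a primitive disks and pairs} both edge groups equal $\langle \alpha \mid \alpha^2 \rangle$, while the vertex groups are $\mathcal{G}_{\{ E \}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma \mid \gamma^2 \rangle$, $\mathcal{G}_{\{ E \cup C \}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \sigma_1 \mid \sigma_1^2 \rangle$ and $\mathcal{G}_{\{ E \cup E_1 \}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \sigma_2 \mid \sigma_2^2 \rangle$, where (with a choice of labeling) $\sigma_1$ and $\sigma_2$ denote involutions exchanging $E$ with $C$ and $E$ with $E_1$ respectively. Amalgamating along the common direct summand $\langle \alpha \mid \alpha^2 \rangle$ then yields $\langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma, \sigma_1, \sigma_2 \mid \gamma^2, \sigma_1^2, \sigma_2^2 \rangle$. The same observation applies to the computation in (\ref{item:stabilizers of vertices of TT (2)}) below.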
\noindent (\ref{item:stabilizers of vertices of TT (2)}) Suppose $q^2 \not\equiv 1 \pmod p$. In this case, the argument is almost the same as Theorem 5.7 (2)-(d) in \cite{CK15b}. A local part of $\mathcal{T}_1$ containing vertices $E$, $C$, $E_1$, and $C_1$ is illustrated in Figure \ref{fig:primitive_disk_complex_4} (a).
\begin{center} \begin{overpic}[width=14cm, clip]{primitive_disk_complex_4}
\linethickness{3pt}
\put(53,0){(a)}
\put(36,41){\small $E_1$}
\put(-7,72){\small $C_1$}
\put(64,63){\small $E = C_q$}
\put(36,85){\small $C = E_{q'}$}
\put(60,45){\small $E_{p-1}$}
\put(-4,106){\small $C_{p-1}$}
\put(38,111){\small $C_{p-q}$}
\put(94,83){\small $E_{p-q'}$}
\put(200,0){(b)}
\put(213,62){$E$}
\put(185,84){$C$}
\put(339,0){(c)}
\put(285,51){$\{ C , C_1 \}$}
\put(326,51){$C$}
\put(354,51){$E$}
\put(374,51){$\{ E , E_1 \}$} \end{overpic} \captionof{figure}{(a) The tree component $\mathcal{T}_1$. (b) The tree $\mathcal{T}_1'$. (c) The quotient $\mathcal{T}_1' / \mathcal{G}_{\{ \mathcal{T}_1\} }$.} \label{fig:primitive_disk_complex_4} \end{center}
Let $\mathcal{T}_1'$ be the tree obtained from $\mathcal{T}_1$ by adding the barycenter of each edge in $\mathcal{G} \cdot \{ E, E_1 \} \cup \mathcal{G} \cdot \{ C, C_1 \}$. See Figure \ref{fig:primitive_disk_complex_4} (b).
By Lemma \ref{lem:number of orbits of primitive disks and pairs}, the quotient of $\mathcal{T}_1'$ by the action of $\mathcal{G}_{\{ \mathcal{T}_1 \}}$ is the path graph on four vertices as illustrated in Figure \ref{fig:primitive_disk_complex_4} (c). By Theorem \ref{thm:theorem by Brown}, we can express $\mathcal{G}_{ \{ \mathcal{T}_1 \} }$ as the following amalgamated free product: \[ \mathcal{G}_{\{\mathcal{T}_1\}} = \mathcal{G}_{\{ C \cup C_1 \}} *_{\mathcal{G}_{ \{ C, C_1 \} }} \mathcal{G}_{\{ C \}} *_{\mathcal{G}_{ \{ E , C \} }} \mathcal{G}_{\{ E \}} *_{\mathcal{G}_{ \{ E , E_1 \} }} \mathcal{G}_{\{ E \cup E_1 \}} . \] By Lemma \ref{lem:stabilizer of a primitive disks and pairs}, we obtain the required presentation. \end{proof}
\begin{lemma} \label{lem:stabilizers of edges of TT} \begin{enumerate} \item \label{item:stabilizers of edges of TT (1)} If $q^2 \equiv 1 \pmod p$, we have $\mathcal{G}_{\{ \mathcal T_1 \cup \mathcal T_2 \}} = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \tau \mid {\tau}^2 \rangle$ and $\mathcal{G}_{\{\mathcal T_1 , \mathcal T_2 \}} = \langle \alpha \mid \alpha^2 \rangle $, where $\tau$ is an element of $\mathcal{G}$ that exchanges $D$ and $E$. \item \label{item:stabilizers of edges of TT (2)} If $q^2 \not\equiv 1 \pmod p$, we have $\mathcal{G}_{\{ \mathcal T_1 \cup \mathcal T_2 \} } = \mathcal{G}_{\{\mathcal T_1 , \mathcal T_2 \}} = \langle \alpha \mid \alpha^2 \rangle $. \end{enumerate} \end{lemma} \begin{proof} We first show that $\mathcal{G}_{\{\mathcal T_1 , \mathcal T_2 \}} = \langle \alpha \mid \alpha^2 \rangle $ in both cases. Let $\varphi$ be an element of $\mathcal{G}_{\{\mathcal T_1 , \mathcal T_2 \}}$. Since $\mathcal{B}_{ \{ D,E \} }$ is the unique bridge connecting $\mathcal{T}_1$ and $\mathcal{T}_2$, $\varphi$ preserves $\mathcal{B}_{ \{ D,E \} }$, so $D$ and $E$. By Lemma \ref{lem:principal pair of a bridge}, $\varphi$ preserves the shell $\mathcal{S}_{E}$. Further, since $m + 1 < p/2$, we have $\varphi (E) = E$ and $\varphi(E_i) = E_i$ ($i \in \{ 0 , 1 , \ldots, p \}$) by Lemma \ref{lem:principal pair of a bridge}. Let $E'$ be the unique dual disk of $E$ disjoint from $E_0$, and let $E_0'$ be the unique semiprimitive disk disjoint from $E$. It follows from the uniqueness of the shell that $\varphi(E') = E'$ and $\varphi(E_0') = E_0'$. Since $\{E , E_{m}, E_{m+1}\}$ is a triple of pairwise disjoint disks cutting $V$ into two 3-balls, if $\varphi$ is orientation-preserving on $E$, then so is $\varphi$ on each of $E_j$, $E'$ and $E_0'$. Then by Alexander's trick, $\varphi$ is the trivial element of $\mathcal{G}$. If $\varphi$ is orientation-reversing on $E$, then so is $\varphi$ on each of $E_j$, $E'$ and $E_0'$. In this case, again by Alexander's trick, $\varphi$ is the hyperelliptic involution $\alpha$.
In the remainder of the proof, we consider the group $\mathcal{G}_{\{ \mathcal T_1 \cup \mathcal T_2 \} }$. If $q^2 \not\equiv 1 \pmod p$, there exists no element of $\mathcal{G}$ that maps $E$ to $D$ by Lemma \ref{lem:primitive disks in a bridge}. Thus, we have $\mathcal{G}_{\{ \mathcal T_1 \cup \mathcal T_2 \} } = \mathcal{G}_{\{ \mathcal T_1 , \mathcal T_2 \} }$, which is our assertion. If $q^2 \equiv 1 \pmod p$, then by Lemmas \ref{lem:primitive disks in a bridge} and \ref{lem:transitivity on shells} there exists an element $\tau$ of $\mathcal{G}$ such that $\tau(D) = E$ and $\tau(D_i) = E_i$ ($i \in \{ 0 , 1 , \ldots, p \}$). By Lemmas \ref{lem:construction of a bridge}, \ref{lem:principal pair of a bridge} and \ref{lem:uniqueness of a bridge}, we have $\tau(E) = D$ and $\tau(E_i) = D_i$ ($i \in \{ 0 , 1 , \ldots, p \}$). Thus, $\tau$ is in $\mathcal{G}_{\{ \mathcal T_1 \cup \mathcal T_2 \} }$. Now we give orientations on $D$, $D_{m}$ and $D_{m+1}$ so that they come from an orientation of a component $V'$ of $V$ cut off by $D \cup D_{m} \cup D_{m+1}$. We can then give orientations on $E$, $E_{m}$ and $E_{m+1}$ so that they come from an orientation of a component $\tau(V')$ of $V$ cut off by $E \cup E_{m} \cup E_{m+1}$. Under these orientations, both $\tau \mid_D : D \to E$ and $\tau \mid_E : E \to D$ are orientation-preserving. This implies that $\tau^2 = 1 \in \mathcal{G}$. Therefore, we have $\mathcal{G}_{ \{ \mathcal{T}_1 \cup \mathcal{T}_2 \} } = \langle \alpha \mid \alpha^2 \rangle \oplus \langle \tau \mid \tau^2 \rangle$. \end{proof}
\begin{theorem} \label{thm:presentations of the Goeritz groups for non-connected case} Let $\mathcal G$ be the genus-$2$ Goeritz group of $L(p,q)$ with $p \not\equiv \pm 1 \pmod q$. \begin{enumerate} \item \label{item:presentations of the Goeritz groups for non-connected case (1)} If $q^2 \equiv 1 \pmod p$, the group $\mathcal{G}$ can be expressed as the amalgamated free product $\mathcal{G}_{\{\mathcal{T}_1\}} *_{\mathcal{G}_{\{\mathcal T_1 , \mathcal T_2 \}}} \mathcal{G}_{\{\mathcal T_1 \cup \mathcal T_2 \}}$ and it has the following presentation: \[\langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma, \sigma_1, \sigma_2 , \tau \mid {\gamma}^2, {\sigma_1}^2, {\sigma_2}^2, \tau^2 \rangle.\] \item \label{item:presentations of the Goeritz groups for non-connected case (2)} If $q^2 \not\equiv 1 \pmod p$, the group $\mathcal{G}$ can be expressed as the HNN-extension $\mathcal{G}_{\{ \mathcal{T}_1 \} } *_{\mathcal{G}_{\{ \mathcal T_1 , \mathcal T_2 \} }}$ and it has the following presentation: \[\langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta_1, \beta_2, \gamma_1, \gamma_2, \sigma_1, \sigma_2 , \upsilon \mid {\gamma_1}^2, {\gamma_2}^2, {\sigma_1}^2, {\sigma_2}^2 \rangle .\] \end{enumerate} \end{theorem} \begin{proof} \noindent (\ref{item:presentations of the Goeritz groups for non-connected case (1)}) Let $(\TT)'$ be the first barycentric subdivision of $\TT$. By Lemmas \ref{lem:transitivity on the vertices of TT} and \ref{lem:stabilizers of edges of TT}, the Goeritz group $\mathcal{G}$ acts transitively on the set of edges of $(\TT)'$ without inverting edges, and the two endpoints of each edge belong to different orbits of vertices. Thus, the quotient of $(\TT)'$ by the action of $\mathcal{G}$ is a single edge with two vertices. By Theorem \ref{thm:theorem by Brown}, we can express $\mathcal{G}$ as the following amalgamated free product: \[ \mathcal{G} = \mathcal{G}_{\{ \mathcal{T}_1 \}} *_{\mathcal{G}_{ \{ \mathcal{T}_1 , \mathcal{T}_2 \} }} \mathcal{G}_{\{ \mathcal{T}_1 \cup \mathcal{T}_2 \}} . \] By Lemmas \ref{lem:stabilizers of vertices of TT} and \ref{lem:stabilizers of edges of TT} we obtain the required presentation.
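Concretely, by Lemmas \ref{lem:stabilizers of vertices of TT} (\ref{item:stabilizers of vertices of TT (1)}) and \ref{lem:stabilizers of edges of TT} (\ref{item:stabilizers of edges of TT (1)}), this amalgamated free product is \[ \big( \langle \alpha \mid \alpha^2 \rangle \oplus \langle \beta, \gamma, \sigma_1, \sigma_2 \mid {\gamma}^2, {\sigma_1}^2, {\sigma_2}^2 \rangle \big) *_{\langle \alpha \mid \alpha^2 \rangle} \big( \langle \alpha \mid \alpha^2 \rangle \oplus \langle \tau \mid \tau^2 \rangle \big), \] and amalgamating along the common direct summand $\langle \alpha \mid \alpha^2 \rangle$ gives the presentation stated in (\ref{item:presentations of the Goeritz groups for non-connected case (1)}).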
\noindent (\ref{item:presentations of the Goeritz groups for non-connected case (2)}) By Lemmas \ref{lem:transitivity on the vertices of TT} and \ref{lem:stabilizers of edges of TT}, the Goeritz group $\mathcal{G}$ acts on the sets of vertices and edges of $\TT$ transitively without inverting edges. Thus, the quotient of $\TT$ by the action of $\mathcal{G}$ is a single edge with one vertex. By Theorem \ref{thm:theorem by Brown}, we can express $\mathcal{G}$ as the following HNN-extension: \[ \mathcal{G} = \mathcal{G}_{\{ \mathcal{T}_1 \} } *_{\mathcal{G}_{\{ \mathcal T_1 , \mathcal T_2 \}}}. \] By Lemmas \ref{lem:stabilizers of vertices of TT} and \ref{lem:stabilizers of edges of TT} we obtain the required presentation. \end{proof}
\section{The space of Heegaard splittings} \label{sec:The space of Heegaard splittings}
Let $M$ be a closed orientable $3$-manifold and $\Sigma$ be a Heegaard surface in $M$. The space $\mathcal{H}(M, \Sigma)$ of Heegaard splittings equivalent to $(M, \Sigma)$ is defined to be the space of left cosets $\mathrm{Diff}(M)/\mathrm{Diff}(M,\Sigma)$, where $\mathrm{Diff}(M)$ denotes the diffeomorphism group of $M$, and $\mathrm{Diff}(M,\Sigma)$ denotes the subgroup of $\mathrm{Diff}(M)$ consisting of diffeomorphisms that preserve $\Sigma$ setwise. The space $\mathcal{H}(M, \Sigma)$ can be regarded as the space of images of $\Sigma$ under diffeomorphisms of $M$. Note that $\pi_0 (\mathcal{H}(M, \Sigma))$ is exactly the set of Heegaard splittings equivalent to $(M, \Sigma)$. By $\mathrm{MCG}(M)$ and $\mathrm{MCG}(M, \Sigma)$, we denote the groups of path components of $\mathrm{Diff}(M)$ and $\mathrm{Diff}(M, \Sigma)$ respectively.
In \cite{JM13} , Johnson and McCullough showed the following: \begin{theorem}[\cite{JM13}] \label{thm:Theorem of Johnson and McCullough} Let $\Sigma$ be a genus-$2$ Heegaard surface of $M$. \begin{enumerate} \item For each integer $q$ greater than $1$, the natural map $\pi_q(\mathrm{Diff} (M)) \to \pi_q (\mathrm{Diff}(M, \Sigma))$ is an isomorphism. \item There exists the following short exact sequence: \[ 1 \to \pi_1 (\mathrm{Diff} (M)) \to \pi_1 (\mathcal{H}(M, \Sigma)) \to G(M, \Sigma) \to 1, \] where $G(M, \Sigma)$ denotes the kernel of the natural map $\mathrm{MCG} (M, \Sigma) \to \mathrm{MCG}(M)$. \end{enumerate} \end{theorem} By Theorem \ref{thm:Theorem of Johnson and McCullough} (1), for any $q \geq 2$, the study of $\pi_q (\mathrm{Diff}(M, \Sigma))$ is nothing else but that of $\pi_q (\mathrm{Diff} (M))$. The following is a direct application of our result on the finite presentability of the Goeritz groups of genus-$2$ Heegaard splittings of lens spaces.
\begin{theorem} \label{cor:the space of Heegaard splittings of a lens space} Let $L$ be $\mathbb{S}^3$ or a lens space $L(p,q)$, where $1 \leq q \leq p/2$. For a genus-$2$ Heegaard surface $\Sigma$ in $L$, the group $\pi_1 (\mathcal{H} (L, \Sigma))$ is finitely presented $($modulo the Smale Conjecture for $L(2,1)$ in the case $L = L(2,1))$. \end{theorem} \begin{proof} Since the group $\mathrm{MCG}(L)$ is finite for any $L$ by Bonahon \cite{Bon83}, the group $G(L, \Sigma)$ is isomorphic to the genus-$2$ Goeritz group of $L$ up to finite extensions. By the Smale Conjecture, proved to be correct for all $L$ except $L(2,1)$ in \cite{Hat83} and \cite{HKMR12}, $\pi_1 (\mathrm{Diff} (L)) = \mathbb{Z} / 2 \mathbb{Z}$ if $L = \mathbb{S}^3$, $\pi_1 (\mathrm{Diff} (L)) = (\mathbb{Z} / 2 \mathbb{Z}) \oplus (\mathbb{Z} / 2 \mathbb{Z})$ if $L = L(2,1)$, $\pi_1 (\mathrm{Diff} (L)) = \mathbb{Z}$ if $L = L(p,1)$ (for odd $p \geq 3$), $\pi_1 (\mathrm{Diff} (L)) = \mathbb{Z} \oplus (\mathbb{Z} / 2 \mathbb{Z})$ if $L = L(p,1)$ (for even $p \geq 3$), and $\pi_1 (\mathrm{Diff} (L)) = \mathbb{Z} \oplus \mathbb{Z}$ otherwise. Thus, in particular, $\pi_1 (\mathrm{Diff} (L))$ is finitely presented for any $L$. Now, the assertion follows from the short exact sequence in Theorem \ref{thm:Theorem of Johnson and McCullough} (2). \end{proof}
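For example, for $L = L(12,5)$ with a genus-$2$ Heegaard surface $\Sigma$, the proof above gives $\pi_1 (\mathrm{Diff} (L)) = \mathbb{Z} \oplus \mathbb{Z}$, while $G(L, \Sigma)$ is finitely presented by Theorem \ref{thm:presentations of the Goeritz groups for non-connected case} together with the finiteness of $\mathrm{MCG}(L)$; the short exact sequence in Theorem \ref{thm:Theorem of Johnson and McCullough} (2) then exhibits $\pi_1 (\mathcal{H}(L, \Sigma))$ as an extension of a finitely presented group by a finitely presented group, hence it is finitely presented.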
\noindent {\bf Acknowledgments.} Part of this work was carried out while the authors were visiting National Institute for Mathematical Sciences (NIMS) in Daejeon for the Research in CAMP Program (C-21601) and Korea Institute for Advanced Study (KIAS) in Seoul. They are grateful to the institutes and their staff for the warm hospitality.
\end{document} | arXiv |
V. Isakov, On uniqueness in the inverse conductivity problem with local data, Inverse Probl. Imaging, 1 (2007), 95-105. doi: 10.3934/ipi.2007.1.95. Google Scholar
C. E. Kenig and M. Salo, The Calderón problem with partial data on manifolds and applications, Anal. PDE, 6 (2013), 2003-2048. doi: 10.2140/apde.2013.6.2003. Google Scholar
C. E. Kenig, J. Sjöstrand and G. Uhlmann, The Calderón problem with partial data, Ann. of Math. (2), 165 (2007), 567-591. doi: 10.4007/annals.2007.165.567. Google Scholar
K. Knudsen and M. Salo, Determining nonsmooth first order terms from partial boundary measurements, Inverse Probl. Imaging, 1 (2007), 349-369. doi: 10.3934/ipi.2007.1.349. Google Scholar
K. Knudsen, The Calderón problem with partial data for less smooth conductivities, Comm. Partial Differential Equations, 31 (2006), 57-71. doi: 10.1080/03605300500361610. Google Scholar
K. Krupchyk and G. Uhlmann, Inverse problems for magnetic schrödinger operators in transversally anisotropic geometries, ArXiv https://arXiv.org/abs/1702.07974 Google Scholar
A. I. Nachman, Global uniqueness for a two-dimensional inverse boundary value problem, Ann. of Math. (2), 143 (1996), 71-96. doi: 10.2307/2118653. Google Scholar
G. Nakamura, Z. Sun and G. Uhlmann, Global identifiability for an inverse problem for the Schrödinger equation in a magnetic field, Math. Ann., 303 (1995), 377-388. doi: 10.1007/BF01460996. Google Scholar
M. Salo, Semiclassical pseudodifferential calculus and the reconstruction of a magnetic field, Comm. Partial Differential Equations, 31 (2006), 1639-1666. doi: 10.1080/03605300500530420. Google Scholar
M. Salo and L. Tzou, Carleman estimates and inverse problems for dirac operators, Math. Ann., 344 (2009), 161-184. doi: 10.1007/s00208-008-0301-9. Google Scholar
V. A. Sharafutdinov, Integral Geometry of Tensor Fields, Inverse and Ill-posed Problems Series. VSP, Utrecht, Utrecht, the Netherlands, 1994. Google Scholar
Z. Sun, An inverse boundary value problem for Schrödinger operators with vector potentials, Trans. Amer. Math. Soc., 338 (1993), 953-969. Google Scholar
J. Sylvester and G. Uhlmann, A global uniqueness theorem for an inverse boundary value problem, Ann. of Math. (2), 125 (1987), 153-169. doi: 10.2307/1971291. Google Scholar
C. F. Tolmasky, Exponentially growing solutions for nonsmooth first-order perturbations of the Laplacian, SIAM J. Math. Anal., 29 (1998), 116-133 (electronic). doi: 10.1137/S0036141096301038. Google Scholar
Yernat M. Assylbekov. Reconstruction in the partial data Calderón problem on admissible manifolds. Inverse Problems & Imaging, 2017, 11 (3) : 455-476. doi: 10.3934/ipi.2017021
Albert Clop, Daniel Faraco, Alberto Ruiz. Stability of Calderón's inverse conductivity problem in the plane for discontinuous conductivities. Inverse Problems & Imaging, 2010, 4 (1) : 49-91. doi: 10.3934/ipi.2010.4.49
Matteo Santacesaria. Note on Calderón's inverse problem for measurable conductivities. Inverse Problems & Imaging, 2019, 13 (1) : 149-157. doi: 10.3934/ipi.2019008
Oleg Yu. Imanuvilov, Masahiro Yamamoto. Calderón problem for Maxwell's equations in cylindrical domain. Inverse Problems & Imaging, 2014, 8 (4) : 1117-1137. doi: 10.3934/ipi.2014.8.1117
Ulrike Kant, Werner M. Seiler. Singularities in the geometric theory of differential equations. Conference Publications, 2011, 2011 (Special) : 784-793. doi: 10.3934/proc.2011.2011.784
Petteri Piiroinen, Martin Simon. Probabilistic interpretation of the Calderón problem. Inverse Problems & Imaging, 2017, 11 (3) : 553-575. doi: 10.3934/ipi.2017026
Angelo Favini. A general approach to identification problems and applications to partial differential equations. Conference Publications, 2015, 2015 (special) : 428-435. doi: 10.3934/proc.2015.0428
Linghai Zhang. Long-time asymptotic behaviors of solutions of $N$-dimensional dissipative partial differential equations. Discrete & Continuous Dynamical Systems - A, 2002, 8 (4) : 1025-1042. doi: 10.3934/dcds.2002.8.1025
Pedro Caro, Mikko Salo. Stability of the Calderón problem in admissible geometries. Inverse Problems & Imaging, 2014, 8 (4) : 939-957. doi: 10.3934/ipi.2014.8.939
Abdelhai Elazzouzi, Aziz Ouhinou. Optimal regularity and stability analysis in the $\alpha-$Norm for a class of partial functional differential equations with infinite delay. Discrete & Continuous Dynamical Systems - A, 2011, 30 (1) : 115-135. doi: 10.3934/dcds.2011.30.115
Herbert Koch. Partial differential equations with non-Euclidean geometries. Discrete & Continuous Dynamical Systems - S, 2008, 1 (3) : 481-504. doi: 10.3934/dcdss.2008.1.481
Wilhelm Schlag. Spectral theory and nonlinear partial differential equations: A survey. Discrete & Continuous Dynamical Systems - A, 2006, 15 (3) : 703-723. doi: 10.3934/dcds.2006.15.703
Eugenia N. Petropoulou, Panayiotis D. Siafarikas. Polynomial solutions of linear partial differential equations. Communications on Pure & Applied Analysis, 2009, 8 (3) : 1053-1065. doi: 10.3934/cpaa.2009.8.1053
Arnulf Jentzen. Taylor expansions of solutions of stochastic partial differential equations. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 515-557. doi: 10.3934/dcdsb.2010.14.515
Nguyen Thieu Huy, Ngo Quy Dang. Dichotomy and periodic solutions to partial functional differential equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (8) : 3127-3144. doi: 10.3934/dcdsb.2017167
Barbara Abraham-Shrauner. Exact solutions of nonlinear partial differential equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 577-582. doi: 10.3934/dcdss.2018032
Paul Bracken. Exterior differential systems and prolongations for three important nonlinear partial differential equations. Communications on Pure & Applied Analysis, 2011, 10 (5) : 1345-1360. doi: 10.3934/cpaa.2011.10.1345
Frank Pörner, Daniel Wachsmuth. Tikhonov regularization of optimal control problems governed by semi-linear partial differential equations. Mathematical Control & Related Fields, 2018, 8 (1) : 315-335. doi: 10.3934/mcrf.2018013
Sergiy Zhuk. Inverse problems for linear ill-posed differential-algebraic equations with uncertain parameters. Conference Publications, 2011, 2011 (Special) : 1467-1476. doi: 10.3934/proc.2011.2011.1467
Jaan Janno, Kairi Kasemets. A positivity principle for parabolic integro-differential equations and inverse problems with final overdetermination. Inverse Problems & Imaging, 2009, 3 (1) : 17-41. doi: 10.3934/ipi.2009.3.17 | CommonCrawl |
\begin{document}
\begin{center} \vspace*{0.3in} {\bf VARIATIONAL PRINCIPLES IN MODELS OF BEHAVIORAL SCIENCES}\\[3ex] T. Q. BAO\footnote{Department of Mathematics $\&$ Computer Science, Northern Michigan University, Marquette, Michigan 49855, USA ([email protected]).}, B. S. MORDUKHOVICH\footnote{Department of Mathematics, Wayne State University, Detroit, Michigan, USA ([email protected]). Research of this author was partially supported by the USA National Science Foundation under grant DMS-1007132.} and A. SOUBEYRAN\footnote{Aix-Marseille University (Aix-Marseille School of Economics), CNRS \& EHESS, Marseille 13002, France ([email protected]).} \end{center} \noindent{\small{\bf Abstract.} This paper develops some mathematical models arising in behavioral sciences, particularly in psychology, which are formalized via general preferences with variable ordering structures. Our considerations are based on the recent ``variational rationality approach" that unifies numerous theories in different branches of behavioral sciences by using, in particular, worthwhile change and stay dynamics and variational traps. In the mathematical framework of this approach, we derive a new variational principle, which can be viewed as an extension of the Ekeland variational principle to the case of set-valued mappings on quasimetric spaces with cone-valued ordering variable structures. Such a general setting is proved to be appropriate for broad applications to the functioning of goal systems in psychology, which are developed in the paper. In this way we give a certain answer to the following striking question: in the world, where all things change (preferences, motivations, resistances, etc.), where goal systems drive a lot of entwined course pursuits between means and ends---what can stay fixed for a while? The obtained mathematical results and new insights open the door to developing powerful models of adaptive behavior, which strongly depart from pure static general equilibrium models of the Walrasian type that are typical in economics.\vspace*{.1in}
\noindent {\bf Mathematics Subject Classification:} 49J53, 90C29, 93J99\vspace*{.1in}
\noindent {\bf Key words:} variable ordering structures, multiobjective optimization, variable preferences, variational principles, variational rationality, stability and change dynamics, variational traps} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \newenvironment{Proof}{\noindent {\it Proof.\, }\hspace{1pt}} {\hspace{1pt}
$\triangle$
} \normalsize
\section{Introduction and Motivations}
In this introductory section, we first describe the major features of some stability/stay and change dynamical models in behavioral sciences and the essence of the ``variational rationality" approach to them. Then we show the need for new mathematical developments concerning variational principles and tools of variational analysis for valuable applications to such models. Finally, we discuss the main goals and contributions of this paper, from both viewpoints of mathematics and applications.
{\bf Stability/Stay and Change Dynamics in Behavioral Sciences.} Recent developments on the modeling in various branches of behavioral sciences (including artificial intelligence, economics, management sciences, decision processes, philosophy, political sciences, psychology, sociology) mainly focus on the functioning/behavioral dynamics of agents, groups, and organizations. Analyzing these models, two very simple observations come to mind. First, all these disciplines, except static models in microeconomics via the classical Walrasian general equilibrium approach \cite{w74}, advocate that human behaviors are driven by {\em adaptive processes}. Second, the vast majority of models in these areas (called sometimes ``theories of stability/stay and change"), advocate that we live in a world where at the same time many things ``stay" (e.g., habits and routines, equilibria, traps, etc.) while many other things ``change" (e.g., creations, destructions, learning, innovation, attitudes as well as beliefs formation and revision, self regulation, goal setting, goal striving and revision, breaking and forming habits, etc.). As stated by Bridges \cite{b09}, ``we are always stuck in the middle between a current status quo position and future ends."
For the reader convenience, we present in Appendix~3 below a brief survey on stability and changes theories. This may help convincing the reader that dynamical models inherent in behavior sciences are essentially different from more traditional static equilibrium models of the Walrasian type, and thus they require developing appropriate tools of analysis.
{\bf Variational Rationality in Behavioral Sciences.} Since in behavioral sciences all things change (as is often said: ``the only thing that does not change is change itself"), the main question in models of stability and change dynamics is: why, where, how, and when behavioral processes stop or start to change and how transitions work. To describe these issues, Soubeyran \cite{s09,s10} introduced two main variational concepts: {\em worthwhile changes} and {\em variational traps} as the end points of a succession of worthwhile single changes. The notion of variational traps includes both aspiration points and equilibria and, roughly speaking, reflects the following. Starting from somewhere and not being precisely in a trap, agents want and try approaching such traps in some feasible and acceptable ways (in the case of aspiration points), while being there, prefer to stay than to move away (in the case of equilibria). The notion is crucial in the {\em variational rationality approach} to modelize human behavior suggested in \cite{s09,s10}. This approach helps us to answer the aforementioned main question as well as to unify and modelize various theories of stability/stay and change. It shows how to model the course of human activities as a succession of worthwhile changes and stays, i.e., a succession of actions balancing at each step between the following:\vspace*{0.05in}
{\bf (i)} {\em Motivation to change} involving the utility/pleasure of advantages to change, where these advantages represent the difference between the future payoff generated by a new action and the future payoff generated by the repetition of the past action.
{\bf (ii)} {\em Resistance to change} involving the desutility/pain of inconveniences to change, where these inconveniences are the difference between costs to be able to change and costs to be able to stay.\vspace*{0.05in}
All these concepts, including those of actions, states, transitions, means (resources and capabilities), ends (performances, payoffs, intentions, goals, desires, preferences and values), judgments, attitudes and beliefs, require lengthy and quite intricate discussions to be fully justified in each different discipline, which have its particular points of view.
At each step we say that changes are {\em worthwhile} if the motivation to change is larger than a chosen fraction of resistance to change. This fraction represents an adaptive satisficing-sacrificing ratio, which helps us to choose at each step the current level of satisfaction or accepted sacrifice. As argued by Simon \cite{s55}, being ``bounded rational," the agent is not supposed to optimize during the transition even if he/she can or cannot reach the optimum at the end. The primary aim of the variational rationality approach is to examine the more or less worthwhile to change transition, which can lead to the desired end/goal points via a succession of worthwhile changes and stays. The major questions are as follows:
{\bf(a)} When do such processes make small steps, have finite length, converge?
{\bf(b)} What is the speed of convergence?
{\bf (c)} Do such processes converge in finite time?
{\bf(d)} For which initial points do they converge?
{\bf (e)} Are such processes efficient, i.e, what are the characteristics of end points, which may be critical points, optima, equilibria, Pareto solutions, fixed points, traps, and others?\vspace*{0.05in}
These questions become {\em mathematical} provided that adequate mathematical models within variational rationality approach are created and suitable tools of mathematical analysis are selected. As advocated in the aforementioned papers by Soubeyran, {\em variational analysis}, a relatively new mathematical discipline based on {\em variational principles}, potentially contains an appropriate and powerful machinery to strongly progress in these directions.
{\bf Variational Analysis.} Modern variational analysis has been well recognized as a rapidly developed area of applied mathematics, which is mainly based on {\em variational principles}. It is much related to optimization in a broad sense (being an outgrowth of the classical calculus of variations, optimal control, and mathematical programming), also applying variational principles and optimization techniques to a wide spectrum of problems that may not be of any variational/optimization nature. The reader can find more details on mathematical theories of variational analysis and its many applications in the now classical monograph by Rockafellar and Wets \cite{rw98} as well as in more recent texts by Attouch, Buttazzo and Michaille \cite{abm05}, Borwein and Zhu \cite{bz05}, and the two-volume book by Mordukhovich \cite{m06} with the numerous references therein.
While there are powerful applications of variational analysis to important models in engineering, physics, mechanics, economics\footnote{We particularly refer the reader to the book \cite{m06} and the more recent paper \cite{bm10a} with the vast bibliographies therein for applications of modern techniques of variational analysis and set-valued optimization to models of welfare economics, which are typical in microeconomics modeling.}, etc., not much has been done on applications of variational analysis to psychology and related areas of behavioral science involving human behavior. Within the variational rationality approach, some mathematical results and applications have been recently obtained in the papers \cite{as11,coss13,fls12,ls12,mos11}. However, much more is needed to be done in this direction to capture the {\em dynamical nature} of human behavior reflected in the variational rationality approach. Among the most important dynamical issues, which should be adequately modelized and resolved via appropriate tools of variational analysis, we mention the following settings:\vspace*{0.05in}
{\bf (i)} {\em Periods of the required change} including:
$\bullet$ {\em course of motivation} (e.g., variable preferences, aspirations, hopes, moving goals, goal setting);
$\bullet$ {\em dynamics of resistance to change} (e.g., successive obstacles to overcome, goal striving), which require new concepts of distances, dissimilarity, and spaces of paths because actions can be defined as succession of operations;
$\bullet$ {\em dynamics of adaptation} concerning mainly self-regulation problems such as feedbacks, goal revision, goal pursuit, etc.\vspace*{0.05in}
{\bf (ii)} {\em Periods, where nothing is required to change}, namely: temporary or permanent ends as optima, stationary, equilibrium points, fixed points, traps, habits, routines, social norms, etc.\vspace*{0.05in}
Having these ``dynamical" issues in mind, we need to revisit available principles and techniques of variational analysis and to develop new mathematical methods and results, which could be applied to solve adaptive dynamic problems arising the aforementioned goals systems of behavioral sciences. Then variational rationality and variational analysis can gain to co-evolve. Variational analysis aims to provide the main tools for the study of variational rationality, which in turn offers a variety of valuable applications for variational analysis in behavioral sciences.
{\bf Main Objectives and Contributions of the Paper.} The primary objective of this paper is to study {\em goal systems} in {\em psychology} by using variational rationality approach and developing an adequate {\em dynamic technique} of variational analysis. To achieve this aim, we establish a new and nontrivial extension of the fundamental {\em Ekeland variational principle} (abbr.\ EVP) to a special class of {\em set-valued} mappings on {\em quasimetric} spaces with cone-valued {\em ordering variable structures}, which becomes the key for our applications to psychology.
The EVP, as first formulated by Ekeland \cite{e72} for extended-real-valued lower semicontinuous functions on metric spaces, is one of the most powerful results of variational analysis and its applications. It is worth mentioning that the original proof in the seminal paper by Ekeland \cite{e74} is complicated and not constructive, involving transfinite induction via Zorn's lemma. The much simplified proof of the EVP, presented in \cite{e79} as a personal communication from Michael Crandall, is remarkable for our purposes, since it is given by a {\em dynamical process} that itself (besides the result) contains significant information for applications to behavioral sciences. However, neither the setting and proof of the latter paper nor their subsequent numerous extensions given in the literature fully fit the main objectives of this paper required by applications to goal systems in psychology. To proceed successfully in this direction, we develop the (dynamical) approach to set-valued extensions of the EVP implemented by Bao and Mordukhovich \cite{bm07,bm10} for mappings in metric spaces with constant Pareto-type ordering preferences to the significantly more involved case of variable ordering structures in quasimetric spaces.
Then we establish valuable applications of the obtained mathematical results to the goal systems in psychology using and enriching the framework of variational rationality approach by Soubeyran \cite{s09,s10}. This allows us, in particular, to shed new light on the explanation, via successions of worthwhile actions and variational traps leading to the underlying dynamical relationships between means and ends in psychological goal systems.
{\bf Organization of the Paper.} The rest of the paper is organized as follows. Section~2 is devoted to the qualitative description and mathematical modeling of the major goal system in psychology from the viewpoint of variational rationality. Besides these issues, we justify here the importance of an appropriate extension of the EVP and the purposes we intend to meet in this way.
Section~3 is pure mathematical containing the formulation and detailed proof of the main mathematical result of this paper, which is the variational principle discussed above. We also present here an important consequence of this result used in what follows.
Section~4 is devoted to the major applications of the developed mathematical theory to the psychological goal system under consideration. Here we present psychological interpretations of the obtained mathematical results and show that they lead us to rather striking psychological conclusions largely discussed in this section with adding more mathematical details.
After concluding remarks in Section~5 and the reference list for the main part of the paper, we presents four appendixes for the reader convenience; each of them has its own references list. Appendix~1 concerns practical means-ends rationality in behavioral sciences. Appendix~2 presents a brief survey of the literature on goal systems in psychology. In Appendix~3 we discuss major references on stability and change dynamics. Finally, Appendix~4 contains additional discussions on the preference change dynamics in behavioral sciences, mainly in psychology.
\section{Goal Systems in Psychology} \subsection{Formalization of Goal Systems via Means-End Chain} In what follows, we define a {\em goal system} consisting of {\em four ingredients}; see Appendix~1 and Appendix~3 for more details and discussions.
{\bf (i)} {\em Means} formalized via elements $x\in X$ belonging to the space of means $X$.
{\bf (ii)} {\em Ways} formalized via elements $\omega\in\Omega(x)\subset\overline{\Omega}$ that depend on the given means $x\in X$, where $\Omega(x)$ is a subset of {\em feasible ways} belonging to some space $\overline{\Omega}$.
{\bf (iii)} {\em ``Means-ways of using these means"} pairs formalized as $\phi=(x,\omega)\in X\times\overline{\Omega}=\overline{\Phi}$. Their collection is denoted by $\Phi:=\left\{\phi=(x,\omega)\in\overline{\Phi}|\;\omega\in\Omega(x)\right\}$.
{\bf (iv)} {\em Ends as vectorial payoffs}. Let $P$ be a space of payoffs. These payoffs can be gains $g\in P$ to be increased (e.g., proximal goals like performances, revenues, profits, utilities, and pleasures as well as distal goals like wishes, desires, and aspirations). These payoffs can also be costs, unsatisfied needs, desutility, or pains $f\in P$ to be decreased. For instance, $g\in P$ can be a vector of different gains $g=(g^{1},\ldots,g^{m})\in P=I\!\!R^{m}$, or can be a vector $f=(f^{1},f^{2},\ldots,f^{m})\in P=I\!\!R^{m}$ of unsatisfied needs. We denote by $g:(x,\omega)\in X\times \Omega(x)\longmapsto g(x,\omega )\in P$ a vectorial {\em payoff function} and by $f:(x,\omega)\in X\times\Omega(x)\longmapsto f(x,\omega)\in P$ a vectorial {\em cost or unsatisfied need function}.
Taking the above into account, {\em goal systems} can be modelized as {\em set-valued mappings} of the following type. For {\em gains} we have the mapping $G(\cdot):x\in X\longmapsto G(x)=\left\{g(x,\omega)|\;\omega\in\Omega(x)\right\}\subset P$ whose values are subsets of payoffs the agent can get given a vector of means $x\in X$. Similarly, for {\em unsatisfied needs} we have the mapping $F(\cdot):x\in X\longmapsto F(x)=\left\{f(x,\omega)|\;\omega\in\Omega(x)\right\}\subset P$ whose values are subsets of unsatisfied needs.\vspace*{0.05in}
The simplest example we can imagine for a goal system is the least interconnected one, where the unique interconnection between goals comes from the {\em resources constraint} {\bf(3)} described below. To proceed, consider the following data involving $j=1,\ldots,m$ activities:
{\bf(1)} $x\in X=I\!\!R^{d}$ is a vector of means to be chosen first.
{\bf(2)} $\omega=(\omega^{1},\ldots,\omega^{j},\ldots,\omega^{m})$ is an allocation of the given means $x$, where $\omega^{j}\inI\!\!R^{d}$, $i=1,\ldots,m$, will be chosen later.
{\bf(3)} $\omega^{1}+\ldots+\omega^{j}+\ldots+\omega^{m}=x$ is a resource constraint. It defines the way in which the agent allocates the given means $x$ to each activity, namely: the different allocations of means, which can be identified to ways of using means, $\omega^{j}\in X$, aim to reach the goal $g^{j}$ in the activity $j$. It tells us that this allocation is feasible (without slack). This resource constraint can be written in the form $$ \omega^{1}+\ldots+\omega^{j}+\ldots+\omega^{m}=x\;\Longleftrightarrow\;\omega\in\Omega(x). $$
{\bf(4)} $g=(g^{1},\ldots,g^{j},\ldots,g^{m})\in P=I\!\!R^{m}$ is a vector of goals.
{\bf(5)} $g^{j}=g^{j}(x^{j},\omega^{j})\inI\!\!R$ as $j=1,\ldots,m$ represents, relative to the activity $j$, the goal level function $g^{j}(\cdot,\cdot):(x^{j},\omega^{j})\in X\times\overline{\Omega}\longmapsto g^{j}=g^{j}(x^{j},\omega^{j})\inI\!\!R$. It tells us that the means $\omega^{j}\in X$ help to reach the goal level $g^{j}=g^{j}(x^{j},\omega ^{j})$. Then $G(x)=\left\{g(x,\omega ),\;\omega\in\Omega(x)\right\}$ defines a goal system as the set-valued ``gain function" $G(\cdot):x\in X\longmapsto G(x)\subset P$. Similarly, $F(x)=\left\{f(x,\omega),\;\omega\in\Omega(x)\right\}$ defines a goal system as the set-valued ``costs or unsatisfied needs function" $F(\cdot):x\in X\longmapsto F(x)\subset P$, where $f=(f^{1},\ldots,f^{j},\ldots,f^{m})\in P=I\!\!R^{m}$ and $f^{j}=f^{j}(x^{j},\omega^{j})\inI\!\!R$, $j=1,\ldots,m$.
\subsection{Variational Rationality Model of Human Behavior} {\bf Simplest Adaptive Variational Rationality Model.} The core of the variational rationality approach \cite{s09,s10} can be summarized by the following basic adaptive prototype, which allows a lot of variants and extensions.\\[1ex] {\bf (A) Adaptive processes of worthwhile changes and stays.} Agent's behavior is defined as a succession $\left\{x_{0},\ldots,x_{n},\ldots\right\}\subset X$ of actions entwining possible stays $x_{n}\in X\curvearrowright x_{n+1}\in X$, $x_{n+1}=x_{n}$ and possible changes $x_{n}\in X\curvearrowright x_{n+1}\in X$, $x_{n+1}\ne x_{n}$. This behavior is said to be {\em variational rational} if, at each step $n+1$, the agent chooses to change or to stay, depending on what he accepts to consider as the worthwhile change. Then the agent follows a succession of worthwhile stays and changes $x_{n+1}\in W_{\xi_{n+1}}(x_{n})$, $\xi_{n+1}\in\Upsilon$ as $n\inI\!\!N$. Let us be more precise.
At step $n$, the agent performs the action $x_{n}$, given the degree of acceptability $\xi_{n}\in\Upsilon$ (to be defined later) he/she has chosen before. At step $n+1$, given the past action $x_{n}$ done right before and the previously given degree of acceptability $\xi _{n}\in\Upsilon $, the agent adapts his/her behavior in the following way. He/she chooses a new degree of acceptability $\xi _{n+1}\in\Upsilon$ (which can be the same as before) of a next worthwhile change $x_{n+1}\in W_{\xi_{n+1}}(x_{n})$. This degree of acceptability (satisficing with some tolerable sacrifices) represents how much worthwhile the agent considers that a change must be to accept to change this step, rather than to stay. There are two cases:
{\bf (i)} A {\em temporary worthwhile stay} $x_{n}\curvearrowright x_{n+1}=x_{n}$. It is the case when $W_{\xi_{n+1}}(x_{n})=\left\{ x_{n}\right\}$. Then the agent will choose, in a rational variational way, to stay at $x_{n}=x_{n+1}$ this time. If at the next steps $n+2,n+3,\ldots$, the agent does not change the degree of acceptability, he/she will choose to stay there forever. This defines a ``worthwhile to stay" trap, which is a {\em permanent worthwhile stay}.
{\bf (ii)} {\em A temporary worthwhile change} $x_{n}\curvearrowright x_{n+1}\ne x_{n}$. It is the case if $W_{\xi _{n+1}}(x_{n})\ne\left\{x_{n}\right\}$ and if the agent can find $x_{n+1} \in W_{\xi_{n+1}}(x_{n})$ with $x_{n+1}\ne x_{n}$. Then the agent will choose to move from $x_{n}$ to $x_{n+1}\in W_{\xi_{n+1}}(x_{n})$, and so on.\\[1ex] {\bf(B) Transition phase: the definition of a worthwhile to change step}. Consider step $n+1$, and let $x=x_{n}$ be the preceding action. At step $n+1$, the agent will choose the acceptability ratio $\xi^{\prime}=\xi_{n+1}\inI\!\!R_{+}$ and a new action $x^{\prime}=x_{n+1}$. Let $M(x,x^{\prime})\inI\!\!R$ be the motivation to change from $x$ to $x^{\prime}$, and let $R(x,x^{\prime})\inI\!\!R_{+}$ be the resistance to change from $x$ to $x^{\prime}$. Then the agent will consider that, from his/her point of view, it is worthwhile to move from $x$ to $x^{\prime}$ if the agent's motivation to change is bigger than his/her resistance to change up to the acceptability ratio $\xi_{n+1}$, i.e., under the validity of the condition $M(x,x^{\prime})\ge\xi^{\prime}R(x,x^{\prime})$.
{\em Motivation to change} $M(x,x^{\prime})=U\left[ A(x,x^{\prime})\right]$ is defined as the {\em pleasure} or {\em utility} $U\left[A\right]$ of the advantage to change $A(x,x^{\prime})\inI\!\!R$ from $x$ to $x^{\prime}$. In the simplest (separable) case, {\em advantages to change} are defined as the difference $A(x,x^{\prime})=g(x^{\prime})-g(x)$ between a payoff to be improved (e.g., performance, revenue, profit) $g(x^{\prime})\inI\!\!R$ when the agent performs a new action $x^{\prime}$ and the payoff $g(x)\inI\!\!R$ when he/she repeats a past action $x$ supposing that repetition gives the same payoff as before. On the other hand, advantages to change $A(x,x^{\prime})=f(x)-f(x^{\prime})$ can also be the difference between a payoff $f(x)$ to be decreased (e.g., cost, unsatisfied need) when the agent repeats the same old action $x$ and the payoff $f(x^{\prime})$ the agent gets when he/she performs a new action $x^{\prime}$. The {\em pleasure function} $U\left[\cdot\right]:A\inI\!\!R\longmapsto U\left[A\right]\inI\!\!R$ is strictly increasing with the initial condition $U\left[0\right]=0$.
{\em Resistance to change} $R(x,x^{\prime})=D\left[I(x,x^{\prime})\right]$ is defined as the {\em pain} or {\em disutility} $D\left[I\right]$ of the inconveniences to change $I(x,x^{\prime})=C(x,x^{\prime})-C(x,x)\inI\!\!R_{+}$, which is the difference between the costs to be able to change $C(x,x^{\prime})\inI\!\!R_{+}$ from $x$ to $x^{\prime}$ and the costs $C(x,x)\inI\!\!R_{+}$ to be able to stay at $x$. In the simplest case, costs to be able to stay are supposed to be zero, $C(x,x)=0$ for all $x\in X$, and costs to be able to change are defined as the {\em quasidistances} $C(x,x^{\prime})=q(x,x^{\prime})\inI\!\!R_{+}$ satisfying: {\bf(a)} $q(x,x^{\prime})\ge 0$, {\bf (b)} $q(x,x^{\prime})=0\Longleftrightarrow x^{\prime }=x$, and {\bf(c)} $q(x,x'')\le q(x,x^{\prime})+q(x^{\prime},x'')$ for all $x,x^{\prime},x''\in X$. The {\em pain function} $D\left[\cdot\right]:I\inI\!\!R_{+}\longmapsto D\left[I\right]\inI\!\!R_{+}$ is strictly increasing with the initial condition $D\left[0\right]=0$.
Thus, in this simplest case, the worthwhile to change and stay process satisfies the conditions $g(x_{n+1})-g(x_{n})\ge\xi_{n+1}q(x_{n},x_{n+1})$ or $f(x_{n})-f(x_{n+1})\ge\xi_{n+1}q(x_{n},x_{n+1})$ for each $n$. This yields $$
W_{\xi^{\prime}}(x)=\left\{x^{\prime }\in X\big|\;g(x^{\prime})-g(x)\ge\xi^{\prime}q(x,x^{\prime})\right\}\;\mbox{ or }\;W_{\xi^{\prime}}(x)=\left\{x^{\prime}\in X\big|\;f(x)-f(x^{\prime})\ge\xi^{\prime }q(x,x^{\prime})\right\}. $$ {\bf(C) End points as traps.} Given the final worthwhile to change rate $\xi_{\ast}>0$, we say that the end point $x_{\ast}\in X$ of the process under consideration is a {\em stationary trap} if $W_{\xi_{\ast}}(x_{\ast})=\left\{x_{\ast}\right\}$. In the simplest case above, $x_{\ast}\in X$ is a stationary trap if $g(x^{\prime})-g(x_{\ast})<\xi_{\ast}q(x_{\ast},x^{\prime})$ or $f(x_{\ast})-f(x^{\prime})<\xi _{\ast}q(x_{\ast},x^{\prime})$ for all $x^{\prime}\ne x_{\ast}$. The definition of a {\em variational trap} requires more. It involves the given initial state $x_0$ and requires that the final stationary state (trap) can be reachable from this initial state via a worthwhile and feasible transition of single worthwhile changes and temporary stays.\\[1ex] {\bf(D) Variational rationality problems} include the following major components.
Starting from any given $x_{0}\in X$ and depending on the motivation and resistance to change functions, we want to find a {\em path of worthwhile changes} so that:
{\bf(i)} the steps go to {\em zero} and have {\em finite length};
{\bf (ii)} the corresponding iterations {\em converge to a variational trap};
{\bf(iii)} the {\em convergence rate} and {\em stoping criteria} are investigated;
{\bf (iv)} the {\em efficiency} or {\em inefficiency} of such worthwhile to change processes are studied to clarify whether the worthwhile to change process ends at a {\em critical point}, a {\em local or global optimum}, a {\em local or global equilibrium}, an {\em epsilon-equilibrium}, a {\em Pareto solution}, etc.
\subsection{Ekeland's Variational Principle (EVP) in the Simplest Nonadaptive Model of Variational Rationality} Let us discuss here how the variational rationality approach of \cite{s09,s10} interprets the classical EVP \cite{e74} in the case of the simplest nonadaptive model. First recall the seminal Ekeland's result.\\
\textbf{The classical EVP}. {\em Let $(X,d)$ be a complete metric space, and let $f(\cdot):x\in X\longmapsto f(x)\inI\!\!R\cup\left\{\infty\right\}$ be a lower semicontinuous (l.s.c.) function not identically to $\infty$ and bounded from below. Denote $\underline{f}:=\inf\left\{f(x)|\;x\in X\right\}>-\infty$. Then for every $\varepsilon>0$, $\lambda>0$, and $x_{0}\in X$ with $f(x_{0})<\underline{f}+\varepsilon$ there exists $x_{\ast}\in X$ satisfying the conditions: \begin{itemize} \item[\bf{(a)}] $f(x_{0})-f(x_{\ast})\ge(\varepsilon/\lambda)d(x_{0},x_{\ast})$,
\item[\bf{(b)}] $f(x_{\ast})-f(x^{\prime})<(\varepsilon/\lambda)d(x_{\ast},x^{\prime})$ for all $x^{\prime}\ne x_{\ast}$,
\item[\bf{(c)}] $d(x_{0},x_{\ast})\le\lambda$. \end{itemize}}
By taking $f(x):=-g(x)$, we can immediately reformulate the EVP for the case of {\em maximization} of $g(.):x\in X\longmapsto g(x)\inI\!\!R\cup\left\{-\infty\right\}$. Let us next present a {\em variational rationality} interpretation of the latter result by using terminology and notation of Sections~2.1 and 2.2.\\[1ex] {\bf Variational rationality interpretation of the EVP}. Consider the maximization formulation for a payoff to be improved. Then the EVP variational rationality framework tells us the following. Impose the {\bf assumptions}:
$\bullet$ the worthwhile to change process $x_{n+1}\in W_{\xi _{n+1}}(x_{n})$ is {\em nonadaptive}, which means that the ``satisficing-sacrificing" ratio $\xi_{n+1}$ is constant along the process $\xi_{n+1}\equiv\xi=\varepsilon/\lambda>0$ for all $n\in N$;
$\bullet$ advantages to change are {\em separable}, i.e., $A=A(x,x^{\prime})=g(x^{\prime})-g(x)$, where $g(\cdot):x\in X\longmapsto g(x)\inI\!\!R$ is a {\em payoff function} to be improved (in the sense of maximization);
$\bullet$ costs to be able to change $C=C(x,x^{\prime})\inI\!\!R_{+}$ represent a {\em distance} $C(x,x^{\prime})=d(x,x^{\prime })$, which implies that costs to be able to change are {\em symmetric} $C(y,x)=C(x,y)$, costs to be able to stay are {\em zero} $C(x,x)=0$ for all $x\in X$, and costs to be able to change satisfy the {\em triangular inequality};
$\bullet$ pleasure and pain are identified with {\em advantages to change} and {\em inconveniences to change}, respectively, i.e., $U\left[A\right]=A$ for all $A\inI\!\!R,$ and $D\left[I\right]=I$ for all $I\ge 0$.\\[1ex] Then we have the {\bf conclusions:} \begin{itemize} \item[\bf(a)] There exists an acceptable {\em one step transition} from any initial position $x_0$ to the end $x_{\ast}\in W_{\xi}(x_{0})$. This means that it is {\em worthwhile} to move directly from $x_{0}$ to $x_{\ast}$.
\item[\bf(b)] The end is a {\em stable position}, which means that $W_{\xi}(x_{\ast})=\left\{x_{\ast}\right\}$. In other words, it is {\em not worthwhile} to move from $x_{\ast}$ to any different action $x^{\prime}\ne x_{\ast}$.
\item[\bf(c)] The end can be reached in a {\em feasible way} $C(x_{0},x_{\ast})\le\lambda$. This means that if the agent cannot spend more than the $\lambda>0$ amount in terms of costs, then the move from $x_{0}$ to $x_{\ast}$ is feasible in the model. \end{itemize}
\subsection{Variational Traps and Behavioral Essence of the Ekeland Principle}
As state by Alber and Heward \cite{ah96}, the essence of a trap, given in behavioral terms, is that only ``a relatively simple response is necessary to enter the trap, yet once entered, the trap cannot be resisted in creating general behavior changes." Let us give (among many others) a short list of traps we can find in different disciplines.\\[1ex] {\bf (A)} {\bf Psychology}. Baer and Wolf \cite{bw70} seem to be the first to use the term of behavioral trap in describing ``how natural contingencies of reinforcement operate to promote and maintain generalized behavior changes." Plous \cite{p93} lists five behavioral traps defined as more or less easy to fall into and more or less difficult to get out: investment, deterioration, ignorance, and collective traps. Behavioral traps have been shown to end reinforcement processes \cite{s04}. Ego-depletion can generate behavioral traps due to fatigue costs, in the context of self regulation failures \cite{b02,bh96}. Among several cognitive and emotional traps we can list all-or-nothing thinking, labeling, overgeneralization, mental filtering, discounting the positive, jumping to conclusions, magnification, emotional reasoning, should and shouldn't statements, personalizing the blame, etc.\\[1ex] {\bf (B)} {\bf Economics and decision sciences}. Making traps in decision represents hidden biases, heuristics, and routines; e.g., anchoring, status quo, sunk costs, confirming evidence, framing, estimation, and forecasting traps; see \cite{hkr98} and the references therein.\\[1ex] {\bf(C)} {\bf Management sciences}. The importance of success and failure traps within organizations due to the so-called ``myopia of learning" is emphasized in \cite{lm93,m91}.\\[1ex] {\bf (D)} {\bf Development theory}. To explain the formation of poverty traps, Appadurai \cite{a04} defines {\em aspiration traps}, which describe the inability to aspire of the poor; see also \cite{hm06,r06}.\vspace*{0.05in}
{\em Variational approach} of \cite{s09,s10} shows that, from the viewpoint of behavioral sciences dealing with essentially {\em dynamic models} of human behaviors (contrary to pure static developments in general equilibrium theory of economics), the very {\em essence of the EVP} concerns {\em variational traps}. More precisely, conditions {\bf(a)} and {\bf(c)} of the Ekeland theorem presented above define a variational trap, which is rather easy to reach in an acceptable and feasible way, while is difficult to leave due to condition {\bf(b)}. This corresponds to the intuitive sense of variational traps in behavioral sciences given in \cite{p93}. From this viewpoint, the EVP not only ensures the {\em existence} of variational traps, but also indicates (particularly in its proof) the {\em dynamics} of how to reach a variational trap.
It is worth mentioning that the usage and understanding of the EVP in the variational rationality approach to behavioral sciences is different from those in mathematics. Indeed, in {\em behavioral sciences} (where inertia, frictions, and learning play a major role), natural solutions are variational traps that are reachable in a worthwhile way as {\em maximal elements} of certain {\em dynamic} relationships for {\em worthwhile changes}. In this way, the exact solutions become variational traps, since they include costs to be able to change in their definition. The approximate solution becomes optimum, since they ignore costs to be able to change in their definition.
In {\em mathematics}, the treatment of the EVP is actually {\em opposite}. Variational traps resulting from the EVP are seen as {\em approximate solutions} to the original problem while providing the {\em exact optimum} to another optimization problem, with a small {\em perturbation} term.
\subsection{Variationally Rational Model of Goal Systems} {\bf Variational Rationality Concepts: Worthwhile to Change Payoffs.} In the context of goal systems, we define the following variational concepts following \cite{s09,s10}.
{\bf 1. Changes.} We say that $\phi=(x,\omega)\in\Phi\mathbf{\curvearrowright}\phi^{\prime}=(x^{\prime},\omega^{\prime})\in\Phi$ signifies a {\em change} from the old feasible ``means-way of using these means" pair $\phi\in\Phi$ to the new feasible pair $\phi^{\prime}\in\Phi$, where $$ \Phi:=\left\{\phi=(x,\omega)\in\overline{\Phi}\;\mbox{ such that }\;\omega\in\Omega(x)\right\} $$ stands for the set of all the feasible pairs.
{\bf 2. Advantages to change.} Consider now a change from the present feasible means-ends pair $(x,g)$ with $g\in G(x)$ to the next one $(x^{\prime},g^{\prime})$ with $g^{\prime}\in G(x^{\prime})$. Then $A:=A(\phi,\phi^{\prime})=g(\phi^{\prime})-g(\phi)\in P$ is the {\em advantage} to change from the old feasible pair $\phi\in\Phi$ to the new feasible pair $\phi^{\prime}\in\Phi$.
{\bf 3. Costs to be able to change and costs to be able to stay.} Denote by $C(\cdot,\cdot):(\phi,\phi^{\prime})\in\Phi\longmapsto C\left[\phi,\phi^{\prime}\right]\in P$ the {\em costs to be able to change} from the old feasible pair $\phi\in\Phi$ to the new feasible pair $\phi^{\prime }\in\Phi$. It is worth mentioning here that, in the context of our new version of the EVP for {\em variable ordering structures} developed in Section~3, the costs to be able to change exhibit the following two specific properties:
{\bf(i)} They do {\em not depend} on the ways of using means $C\left[\phi,\phi^{\prime}\right]=C(x,x^{\prime})\in P$. This means that they actually behave as if the ways of using means were free.
{\bf(ii)} They have a {\em directional shape} $C(x,x^{\prime})=q(x,x^{\prime})\xi$, where $\xi\in P$ and $q(x,x^{\prime})\inI\!\!R_{+}$ is a scalar {\em quasidistance}, which represents the total cost to be able to change from the old means $x$ to the new means $x^{\prime}$. In the case where $P=I\!\!R^{J}$, the vector $\xi=(\xi^{1},\ldots,\xi^J)\in P$ with $\xi^j\inI\!\!R_{+}$, $j=1,\ldots,J$, and $\left\Vert\xi\right\Vert =1$ represents the {\em internal shares} of this scalar total cost $q(x,x^{\prime})$ among different activities. In general these shares $\xi^{j}=\xi^{j}(x,x^{\prime})>0$ can change along the process.
The detailed justification that the total costs to be able to change can be modelized as a quasidistance $q(x,x^{\prime})\inI\!\!R_{+}$ is given in \cite{s09}. To save space, let us just mention that this comes from the definition of the costs to be able to change as the infimum of the costs to be able to perform a succession of operations of deletions, conservations, and acquisitions. The fact that the costs to be able to stay satisfy $C(x,x)=0$ for all $x\in X$ must also be carefully justified. In the general case, the costs to be able to change modelize inertia, i.e., the resistance to change. There are two extreme cases. {\em Strong resistance to change} is modelized by the costs to be able to change as scalars or cone quasidistances. This is the case of {\em variational principles} of Ekeland's type. On the other hands, {\em weak resistance} to change is modelized by the costs to be able to change via {\em convex increasing functions} of scalar or cone quasidistances. This is the case of {\em proximal algorithms}; see \cite{bs13,mos11} for more details and discussions.
{\bf 4. Inconveniences to change.} They represent the difference $I(\phi,\phi^{\prime})=C(x,x^{\prime})-C(x,x)$ between the costs to be able to change $C(x,x^{\prime })$ and the costs to be able to stay $C(x,x)$.
{\bf 5. Worthwhile to change payoffs.} Consider the difference between the advantages to change and the costs to be able to change given by $$ \Delta:=\Delta\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]=\Delta\left[\phi,\phi^{\prime}\right]=A(\phi,\phi^{\prime})-\xi I(\phi,\phi^{\prime})=\big(g(\phi^{\prime})-g(\phi)\big)-\xi q(x,x^{\prime})\in P. $$ This defines the worthwhile to change payoff for the change $\phi=(x,\omega)\curvearrowright\phi^{\prime }=(x^{\prime},\omega^{\prime})$, where $\phi,\phi^{\prime}\in\Phi$. Then the {\em change $\phi:=(x,\omega)\curvearrowright\phi^{\prime}=(x^{\prime},\omega ^{\prime })$ is worthwhile} if $\Delta\left[\phi,\phi^{\prime}\right]\ge_{K\left[f(\phi)\right]}\mathbf{0}$.
{\bf 6. Pleasure and pain.} To simplify our model of goal systems in this paper, we will not consider the pleasure and pain functions in full generality, i.e., defined as the utilities $U\left[A(\phi,\phi^{\prime})\right]\in{\bf U}$ of the advantages to change and the pains $D\left[I(\phi,\phi^{\prime})\right]\in{\bf D}$ as the disutilities of inconveniences to be able to change. We simply identify the pleasures with the advantages to change $U\left[A\right]=A$ and the pains with the inconveniences to be able to change $D\left[I\right]=I$. However, the variable cones $K\left[f(\phi)\right]$ or $K\left[g(\phi)\right]$ represent these variable pleasures and pains feelings. They define {\em variable preferences} in the payoff space $P$. Then the change $\phi=(x,\omega)\curvearrowright\phi^{\prime }=(x^{\prime},\omega^{\prime})$ is {\em worthwhile} if $\Delta\left[\phi,\phi^{\prime}\right]\ge_{K\left[f(\phi)\right]}\mathbf{0}$. This defines the corresponding {\em variable preference on feasible ``means-way of using these means" pairs} in the following way, respectively: $$ \phi''\ge_{\phi}\phi^{\prime}\Longleftrightarrow\Delta\left[\phi,\phi''\right]\ge_{K\left[f(\phi)\right]}\Delta\left[\phi,\phi^{\prime}\right], $$ $$\phi''\ge_{\phi}\phi^{\prime}\Longleftrightarrow\Delta\left[\phi,\phi''\right]\geq_{K\left[g(\phi)\right]}\Delta\left[\phi,\phi^{\prime}\right], $$ where the reference point is the current feasible pair $\phi=(x,\omega)\in\Phi$.
\section{Variational Principle for Variable Ordering Structures}
The preceding section describes in detail the primary adaptive psychological model of this paper and also discusses the importance of {\em variational analysis} (particularly an appropriate variational principle of the Ekeland type) as the main mathematical tool of our study and applications. In comparison with the original version of the EVP presented above, the following {\em three requirements} are absolutely mandatory for an appropriate extension of the EVP for its possible application to the psychological model under consideration:
{\bf (a)} {\em vectorial} (actually {\em set-valued}) nature of the cost function;
{\bf (b)} {\em quasimetric} (instead of metric) structure of the topological space of arguments;
{\bf (c)} {\em variable preference} structure of ordering on the space of values.\vspace*{0.05in}
By now, a great many of numerous versions and extensions of the EVP are known in the literature; see, e.g., \cite{bm07,bm10,bz05,grtz03,ln11,m06,q12} and the references therein for more recent publications. Some of them address the above issues {\bf (a)} and {\bf (b)} while {\em none} of them, to the best of our knowledge, deals with {\em variable structures} in {\bf (c)}, which is the main issue required for applications to {\em adaptive} psychological models as well as to adaptive models in other branches of behavioral sciences.
Note that problems with variable preferences have drawn some attention in recent publications (e.g., \cite{bm13,bb13,cy02,e11,eh13,ls12}), but not from the viewpoint of variational principles as in this paper.\vspace*{0.05in}
In this section, we derive a general variational principle that addresses all the three issues {\bf(a)}--{\bf(c)} listed above. Furthermore, we consider a general {\em parametric} setting when the mapping in the variational principle depends on a {\em control} parameter, which allows us to take into account the ``ways of using means" providing in this way a kind of {\em feedback} in adaptive psychological models; see Section~4 for more discussions. Our approach and main result extend those (even in nonparametric settings of finite-dimensional spaces) from the papers by Bao and Mordukhovich \cite{bm07,bm10}, which dealt with nonparametric mappings between Banach spaces in the standard (not variable) preference framework. Addressing the new challenges in this paper requires a significant improvement of the previous techniques, which is done below.\vspace*{0.05in}
To describe the class of variable preferences invoking in our main result, take vectors $p_1,p_2\in P$ from some linear topological {\em decision space} $P$, denote $d:=p_1-p_2$, and say that $p_2$ is {\em preferred} by the decision maker to $p_1$ with the {\em domination factor} $d$ for $p_1$. The set of all the domination factors for $p_1$ together with the zero vector ${\bf 0}\in P$ is denoted by $K[p_1]$. Then the set-valued mapping $K:P\tto P$ is called a {\em variable ordering structure}. We define the {\em ordering relation} induced by the variable ordering structure $K$ by \begin{eqnarray*} p_2\;\le_{K[p_1]}p_1\;\mbox{ if and only if }\;p_2\in p_1-K[p_1] \end{eqnarray*} and say that $p_\ast\in\Xi$ is {\em Pareto efficient/minimal} to the set $\Xi$ in $P$ with respect to the ordering structure $K$ if there is no other vector $p\in \Xi\setminus\{p_\ast\}$ such that $p\le_{K[p_\ast]}p_\ast$, i.e., $$ \big(p_\ast-K[p_{\ast}]\big)\cap\Xi=\{p_\ast\}. $$ It follows from the definition that $p_\ast\in\mbox{\rm Min}\,(\Xi;K[p_{\ast}])$ in the sense of set optimization with the ordering cone $K[p_\ast]$; see, e.g., \cite{grtz03}. This order reduces to the one in set optimization when $K[p]\equiv\Theta$ for some convex ordering cone $\Theta\subset P$.
Fixing a {\em direction} $\xi\in P$ and a {\em threshold/accuracy} $\varepsilon>0$, we say that $p_\ast$ is an {\em approximate $\varepsilon\xi$-minimal point} of $\Xi$ with respect to $K$ if \begin{eqnarray*} \big(p_{\ast}-K[p_{\ast}]-\varepsilon\xi\big)\cap\Xi=\emptyset. \end{eqnarray*}
Next we recall the definition of quasimetric spaces and the corresponding notions of closedness, compactness, and completeness in such topological spaces.
\begin{Definition} {\bf (quasimetric spaces).} A pair $(X,q)$ with the collection of elements $X$ and the function $q:X\times X\longmapstoI\!\!R$ on $X\times X$ is said to be a {\sc quasimetric space} if the following hold: \begin{itemize} \item[\bf(i)] $q(x,x^{\prime})\ge 0$ for all $x,x^\prime\in X$; \item[\bf(ii)] $q(x,x^{\prime})=0$ if and only if $x^{\prime}=x$ for all $x,x^\prime\in X$; \item[\bf(iii)] $q(x,x'')\le q(x,x^{\prime})+q(x^{\prime },x'')$ for all $x,x^{\prime},x''\in X$. \end{itemize} \end{Definition}
In what follows, we consider only those complete quasimetric spaces, where each Cauchy sequence converge to the {\em unique} limit, which is of course automatic in metric spaces. This is a typical setting for applications of quasimetric spaces in behavioral sciences and other disciplines. The reader can find more discussions on such quasimetric spaces and sufficient conditions for the aforementioned uniqueness property in the recent papers \cite{bs13,ft13} and the references therein.
\begin{Definition}{\bf (left-sequential closedness).} A subset $S\subset X$ is said to be {\sc left-sequentially closed} if for any sequence $\left\{x_{n}\right\}\subset X$ converging to $x_{\ast}\in X$ in the sense that the numerical sequence $\left\{q(x_{n},x_{\ast})\right\}$ converges to zero, the limit $x_{\ast}$ belongs to $S$. \end{Definition}
\begin{Definition}{\bf (left-sequential completeness).} A sequence $\left\{x_{n}\right\}\subset X$ is said to be {\sc left-sequential Cauchy} if for each $k\inI\!\!N$ there exists $N_{k}$ such that $$ q(x_{n},x_{m})<1/k\;\mbox{ for all }\;m\ge n\ge N_{k}. $$ A quasimetric space $(X,q)$ is said to be {\sc left-sequentially complete} if each left-sequential Cauchy sequence is convergent. \end{Definition}
Let $f:T\rightarrow P$ be a mapping from a quasimetric space $(T,q)$ to an ordered vector space $P$ equipped with a variable ordering structure $K: P\rightrightarrows P$, and let $S\subset T$. Then:
$\bullet$ $f$ is (left-sequentially) {\em level-closed with respect to} $K$ if for any $p\in P$ the $p$-level set of $f$ with respect to $K$ defined by \begin{eqnarray*}
\mbox{\rm lev}_{p}(f,K):=\left\{t\in T\big|\;f(t)\le_{K\left[p\right]}p\right\}=\left\{t\in T\big|\;f(t)\in p-K\left[ p\right]\right\} \end{eqnarray*} is left-sequentially closed in $(T,q)$.
$\bullet$ $f$ is {\em quasibounded from below} on $S\subset\mbox{\rm dom}\, f:=\{t\in T|\;f(t)\in P\}$ {\em with respect to a cone} $\Theta$, or it is {\em $\Theta$-quasibounded from below} for short, if there is a bounded subset $M\subset P$ such that $f(S)\subset M+\Theta$ for the image set $f(S):=\cup\{f(t)\in P|\;t\in S\}$.
$\bullet$ $t_\ast\in S$ is a {\em Pareto minimizer} (resp.\ {\em approximate $\varepsilon\xi$-minimizer}) of $f$ over $S$ with respect to $K$ if $f(t_\ast)$ is the corresponding Pareto minimal point (resp.\ approximate $\varepsilon\xi$-minimal point) of the image set $f(S)\subset P$.\vspace*{0.05in}
Note that in our applications in Section~4, $x\in X$ represents actions, states, or some means; $g\in P$ represents vectors of ends to be increased (e.g., performances, payoffs, revenues, profits, utilities, pleasures), and $f\in P$ represents vectors of ends to be decreased (a vector of costs, unsatisfied needs, disutilities, pains, etc.). Until arriving at applications, in the mathematical framework here we consider for definiteness the ``minimization" setting (instead of the ``maximization" one), which is more appropriate for certain applications. Correspondingly, $f\in P$ is treated as a vector of ends to be {\em decreased}, and $K\left[f\right]\subset P$ is the cone of vectorial costs {\em lower} than the given cost vector $f$. We say that the vectorial payoff $f_{2}$ (a vector of payoffs to be decreased) is {\em smaller} than $f_{1}$ with respect to $K$ and write $f_{2}\le_{K\left[f_{1}\right]}f_{1}$ if $f_{2}\in f_{1}-K\left[ f_{1}\right]$.\vspace*{0.05in}
Consider now a set-valued mapping $\Omega:X\tto\overline\Omega$ from a {\em quasimetric} space $(X,q)$ to a {\em compact} subset $\overline\Omega\subset Y$ of a {\em Banach} space $Y$. Let $P$ be a {\em linear topological} space (of {\em payoffs}) equipped with some variable {\em cone-ordering} structure $K:P\tto P$ (called {\em variable preference on payoffs}), and let $\emptyset\ne\Theta\subset P$ be a cone.
Now let us formulate our {\em standing assumptions} on the initial data of the problem under consideration needed for the proof of the new variational principle in Theorem~\ref{EVP-VOS}. Recall that a cone $K\subset P$ is {\em proper} if we have $K\ne\left\{\mathbf{0}\right\}$ and $K\ne P$.
{\bf(H1)} The quasimetric space $(X,q)$ is {\em left-sequentially complete}. Furthermore, the quasimetric $q(x,\cdot)$ is (left-sequentially) {\em l.s.c.} with respect to the second variable for all $x\in X$.
{\bf(H2)} All the values of $K\colon P\tto P$ are {\em closed, convex}, and {\em pointed} subcones of $P$. Furthermore, the {\em common ordering cone} of $K$, denoted by $\Theta_{K}:=\cap_{f\in P}K\left[f\right]$, also has these properties.
{\bf(H3)} The mapping $K\colon P\tto P$ enjoys the {\em transitivity property} in the sense that $$ \Big(f_{1}\in f_{0}-K\left[f_{0}\right],\;f_{2}\in f_{1}-K\left[f_{1}\right]\Big)\Longrightarrow\Big(f_{2}\in f_{0}- K\left[f_{0}\right]\Big). $$
{\bf(H4)} The mapping $\Omega:X\rightrightarrows\overline{\Omega}$ is (left-sequentially) {\em closed-graph}.
{\bf(H5)} The cone $\Theta\subset P$ is {\em closed} and {\em convex}.\vspace*{0.05in}
It is easy to check that the relation $f_{1}\le_{K\left[f_{0}\right]}f_{0}$ implies that $K\left[f_{1}\right]\subset K\left[f_{0}\right]$ with the equality $K\left[f_{1}\right]+K\left[f_{0}\right]=K\left[f_{0}\right]$ under assumption {\bf (H2)}.
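One simple instance of a variable ordering structure satisfying {\bf(H2)} and {\bf(H3)} (given here only for illustration) is obtained on $P=I\!\!R^{2}$ by taking the two nested cones $K_{1}:=I\!\!R^{2}_{+}$ and $K_{2}:=\{(t_{1},t_{2})\in I\!\!R^{2}|\;t_{2}\ge 0,\;t_{1}+t_{2}\ge 0\}$ and setting $K[f]:=K_{2}$ if $f_{2}>0$ and $K[f]:=K_{1}$ if $f_{2}\le 0$. Both values are closed, convex, and pointed cones with the common ordering cone $\Theta_{K}=K_{1}$, so {\bf(H2)} holds. Since the second components of all vectors in $K_{1}\cup K_{2}$ are nonnegative, the relation $f_{1}\in f_{0}-K\left[f_{0}\right]$ yields $(f_{1})_{2}\le(f_{0})_{2}$; hence either $K\left[f_{0}\right]=K\left[f_{1}\right]=K_{1}$ (when $(f_{0})_{2}\le 0$) or $K\left[f_{0}\right]=K_{2}\supset K\left[f_{1}\right]$, and in both cases the convexity of the cones gives the transitivity property {\bf(H3)} together with the inclusion $K\left[f_{1}\right]\subset K\left[f_{0}\right]$ discussed above. In behavioral terms, such a structure may describe an agent who accepts a larger set of trade-offs between the two ends as long as the second component of the cost vector remains positive, and only componentwise improvements afterwards.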
\begin{Theorem}{\bf (parametric variational principle for mappings with variable ordering).}\label{EVP-VOS} Let $f:X\times\overline\Omega\longmapsto P$ be a mapping with $\mbox{\rm dom}\, f=\mbox{\rm gph}\,\Omega$ in the setting described above. In addition to the standing assumptions {\bf(H1)}--{\bf(H5)}, suppose that:
{\bf(A1)} $f$ is quasibounded from below on $\mbox{\rm gph}\,\Omega$ with respect to the cone $\Theta$.
{\bf(A2)} $f$ is (left-sequentially) level-closed with respect to $K$ on $\mbox{\rm gph}\,\Omega$.
{\bf(A3)} $f(x,\cdot)$ is continuous for each $x\in\mbox{\rm dom}\,\Omega$.\\[1ex] Then for any $\varepsilon>0$, $\lambda>0$, $(x_{0},\omega_{0})\in\mbox{\rm gph}\,\Omega$, and $\xi\in\Theta_{K}\setminus(-\Theta-K\left[f_0\right])$ with $\left\Vert\xi\right\Vert=1$ and $f_0:=f(x_{0},\omega_{0})$ there is a pair $(x_{\ast},\omega_{\ast})\in\mbox{\rm gph}\,\Omega$ with
$f_{\ast}:=f(x_{\ast},\omega_{\ast})\in\mbox{\rm Min}\,\big(F(x_{\ast});K\left[f_{\ast}\right]\big)$ and $F(x_{\ast}):=\cup\{f(x_{\ast},\omega)|\;\omega\in\Omega(x_{\ast})\}$ satisfying the relationships \begin{eqnarray}\label{0.1} f_{\ast}+(\varepsilon/\lambda)q(x_{0},x_{\ast})\xi\le_{K\left[f_{0}\right]}f_{0}, \end{eqnarray} \begin{eqnarray}\label{0.2} f+(\varepsilon/\lambda)q(x_{\ast},x)\xi\nleq_{K\left[f_{\ast}\right]}f_{\ast}\;\mbox{ for all }\;(x,\omega)\in\mbox{\rm gph}\,\Omega\;\mbox{ with }\;f:= f(x,\omega)\ne f_{\ast}. \end{eqnarray} If furthermore $(x_{0},\omega_{0})$ is an approximate $\varepsilon\xi$--minimizer of $f$ over $\mbox{\rm gph}\,\Omega$ with respect to $K\left[f_{0}\right]$, then $x_{\ast}$ can be chosen so that in addition to \eqref{0.1} and \eqref{0.2} we have \begin{eqnarray}\label{0.3} q(x_{0},x_{\ast})\le\lambda. \end{eqnarray} \end{Theorem} {\bf Proof.} Without loss of generality we assume that $\varepsilon=\lambda=1$. Indeed, the general case can be easily reduced to this special one by applying the latter to the mapping $\widetilde f(x,\omega):=\varepsilon^{-1}f(x,\omega)$ and the left-sequentially complete quasimetric space $(X,\widetilde q)$ with $\widetilde q(x,x^\prime):=\lambda^{-1}q(x,x^\prime)$.
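To spell this reduction out (a routine verification, with the ordering structure rescaled as $\widetilde K[p]:=K[\varepsilon p]$, which inherits {\bf(H2)} and {\bf(H3)} since all the values of $K$ are cones): if $(x_{\ast},\omega_{\ast})$ satisfies the conclusions of the special case for $\widetilde f$ and $\widetilde q$, then multiplying the relation $\widetilde f(x_{\ast},\omega_{\ast})+\widetilde q(x_{0},x_{\ast})\xi\le_{K\left[f_{0}\right]}\widetilde f(x_{0},\omega_{0})$ by $\varepsilon>0$ and using the invariance of the cone $K\left[f_{0}\right]$ with respect to multiplication by positive scalars gives exactly \eqref{0.1}; the same scaling argument yields \eqref{0.2}, while the estimate $\widetilde q(x_{0},x_{\ast})\le 1$ reduces to \eqref{0.3}.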
Define now a set-valued mapping $W:X\times\overline\Omega\rightrightarrows X$ by \begin{eqnarray} \label{0.4}
W(x,\omega):=\left\{x^{\prime}\in X\big|\;\exists\;\omega^{\prime }\in\Omega(x^{\prime})\;\mbox{ with }\;f(x^{\prime},\omega^{\prime})+q(x,x^{\prime})\xi \;\le_{K\,\left[f\right]}f(x,\omega)\right\}, \end{eqnarray} where $f:=f(x,\omega)$. It is easy to check that for such a pair $(x^{\prime },\omega^{\prime})$ satisfying the inequality in (\ref{0.4}) we get, by denoting $f^{\prime}:=f(x^{\prime},\omega^{\prime})$, that \begin{eqnarray}\label{0.5} f(x^{\prime},\omega^{\prime})\;\le_{K\left[f\right]}f(x,\omega)\;\mbox{ and }\;K\left[f^{\prime}\right]\subset K\left[f \right] \end{eqnarray} under the imposed assumptions for $K$. Indeed, the inclusion in \eqref{0.5} follows directly from the inequality therein while the latter is valid by \begin{eqnarray*} &&f(x^{\prime},\omega ^{\prime})+q(x,x^{\prime})\xi\;\le_{K\left[f\right]}f(x,\omega)\\ &\Longleftrightarrow&f(x^{\prime},\omega^{\prime})+q(x,x^\prime )\xi\in f(x,\omega)-K\left[f\right]\\ &\Longleftrightarrow&f(x^{\prime},\omega^{\prime})\in f(x,\omega)-q(x,x^{\prime})\xi-K\left[f\right]\\ &\Longrightarrow&f(x^{\prime},\omega^{\prime})\in f(x,\omega)-K\left[f\right], \end{eqnarray*} where the implication holds due to the choice of $\xi\in\Theta_K\subset K\left[f\right]$ and the convexity of the cone $K\left[f\right]$. In fact $K\left[f\right]+ q(x,x^{\prime})\xi\subset K\left[f\right]+K\left[f\right]=K\left[f\right]$.\vspace*{.05in}
Next we list some important properties of the set-valued mapping $W$ used in what follows.\vspace*{0.05in}
$\bullet$ The sets $W(x,\omega )$ are {\em nonempty} for all $(x,\omega)\in\mbox{\rm gph}\,\Omega $ due to $x\in W(x,\omega )$.\vspace*{0.05in}
$\bullet$ The sets $W(x,\omega )$ are {\em left-sequentially closed} in $(X,q)$ for all $(x,\omega )\in\mbox{\rm gph}\,\Omega $. To verify this property, it is sufficient to show that the limit of any convergent sequence $\{x_{k}\}\subset W(x,\omega)$ with $x_{k}\rightarrow x_{\ast}$ as $k\rightarrow\infty $ belongs to the set $W(x,\omega )$. By the definition of $W$, find a sequence $\{\omega_{k}\}\subset\overline{\Omega}$ with $\omega_{k}\in\Omega(x_{k})$ for all $k\in I\!\!N$ satisfying \begin{eqnarray*} f(x_{k},\omega_{k})+q(x,x_{k})\xi\in f(x,\omega)-K\left[f\right]. \end{eqnarray*} Since $\overline{\Omega}$ is a compact set, extract (without relabeling) a convergent subsequence from $\{\omega_{k}\}$ that converges to some $\omega_{\ast}\in\overline{\Omega}$. This gives us $(x_{\ast},\omega_{\ast})\in\mbox{\rm gph}\,\Omega$ by the (left-sequential) closedness assumption on $\mbox{\rm gph}\,\Omega$. Then passing to the limit and taking into account the level-closedness and lower semicontinuity assumptions imposed on $f$ and $q$, we arrive at \begin{eqnarray*} f(x_{\ast},\omega_{\ast})+q(x,x_{\ast})\xi\in f(x,\omega)-K\left[f\right],\;\mbox{ i.e., }\;x_{\ast}\in W(x,\omega). \end{eqnarray*}
$\bullet$ The sets $W(x,\omega )$ are {\em bounded from below} with respect to $\Theta+K\left[f\right]$ for all $(x,\omega)\in\mbox{\rm gph}\,\Omega$, where $f=f(x,\omega)$. Indeed, it follows from \begin{eqnarray*} W(x,\omega)\subset\left\{x^{\prime}\in X\;\mbox{ such that }\;q(x,x^{\prime})\xi\in f(x,\omega )-M-\Theta-K\left[f\right]\right\}, \end{eqnarray*} where the bounded set $M$ is taken from the definition of the assumed quasiboundedness from below of the mapping $f$ with respect to the cone $\Theta $.\vspace*{0.05in}
$\bullet$ We have the inclusion $W(x^{\prime},\omega^{\prime})\subset W(x,\omega)$ for all $x^{\prime}\in W(x,\omega)$ and $\omega^{\prime}\in\overline{\Omega}$ with \begin{eqnarray*} f(x^{\prime},\omega^{\prime})+q(x,x^{\prime})\xi\le_{K\left[f\right]}f(x,\omega). \end{eqnarray*} To verify it, pick $x''\in W(x^{\prime},\omega^{\prime})$ and by construction of $W(x,\omega )$ in (\ref{0.4}) find $\omega''\in\Omega(x'')$ satisfying the inequality $f(x'',\omega'')+q(x^{\prime},x'')\xi\le_{K\left[f^{\prime}\right]}f(x^{\prime},\omega^{\prime})$. Summing up the last two inequalities and taking into account the triangle inequality for the quasimetric $q(x,x'')\le q(x,x^{\prime})+q(x^{\prime},x'')$, the choice of $\xi\in\Theta_{K}$ as well as the transitivity and convexity properties of $K$ ensuring that $K\left[f\right]+K\left[f^\prime\right]+\Theta_{K}=K\left[f\right]$, we get the relationships \[ \begin{array}{lll} &&f(x'',\omega'')+q(x,x'')\xi\\&=&\big(f(x^{\prime},\omega^{\prime})+q(x,x^{\prime})\xi\big)+\big(f(x'',\omega'')+q(x^{\prime},x'')\xi\big)\\ &&\hspace*{.01in}+\big(q(x,x'')-q(x,x^{\prime})-q(x^{\prime},x'')\big)\xi-f(x^{\prime},\omega^{\prime})\\ &\in&f(x,\omega)-K\left[f\right]+f(x^{\prime},\omega^{\prime})-K\left[f^{\prime}\right]-\Theta_{K}-f(x^{\prime},\omega^{\prime})\\ &\subset& f(x,\omega)-K\left[f\right]. \end{array} \] This clearly implies that $f(x'',\omega'')+q(x,x'')\xi\le_{K\left[f\right]}f(x,\omega)$, i.e., $x''\in W(x,\omega)$. Since $x''$ was chosen arbitrarily in $W(x^{\prime},\omega^{\prime})$, we conclude that $W(x^{\prime},\omega^{\prime})\subset W(x,\omega)$.\vspace*{0.05in}
To proceed further, let us inductively construct a sequence of pairs $\{(x_{n},\omega_{n})\}\subset\mbox{\rm gph}\,\Omega$ and denote $f_{n}:=f(x_{n},\omega_{n})$ for all $n\inI\!\!N\cup\{0\}$ by the following {\em iterative procedure}: starting with $(x_{0},\omega_{0})$ given in the theorem and having the $n$-iteration $(x_{n},\omega_{n})$, we select the next one $(x_{n+1},\omega_{n+1})$ by \begin{eqnarray}\label{0.6} \left\{\begin{array}{ll} x_{n+1}\in W(x_{n},\omega_{n}),&\\ q(x_{n},x_{n+1})\displaystyle\ge\sup_{x\in W(x_{n},\omega_{n})}q(x_{n},x)-\displaystyle\frac{1}{n+1},&\\ \omega_{n+1}\in\Omega(x_{n+1}),\quad f(x_{n+1},\omega_{n+1})+q(x_{n},x_{n+1})\xi\;\le_{K\left[f_{n}\right]}f(x_{n},\omega_{n})\;& \end{array} \right. \end{eqnarray} It follows from construction (\ref{0.4}) of the sets $W(x,\omega)$ and their properties listed above that this iterative procedure is {\em well defined}. By (\ref{0.5}) the sequence $\{f_{n}\}$ with $f_{n}:=f(x_{n},\omega_{n})$ is {\em nonincreasing} with respect to the ordering structure $K$ in the sense that $f_{n+1}\le_{K\left[f_{n}\right]}f_{n}$ for all $n\inI\!\!N\cup\left\{0\right\}$. Furthermore, the cone sequence $\{K(f_{n})\}$ is {\em nonexpansive}, i.e., \[ K\left[f_{n+1}\right]\subset K\left[f_{n}\right]\;\mbox{ for all }\;n\inI\!\!N\cup\left\{0\right\}, \] which implies together with the convex-valuedness of $K$ that \[ \sum_{n=0}^{m}K\left[f_{n}\right]=K\left[f_{0}\right]\;\mbox{ for all }\;m\inI\!\!N\cup\{0\}. \] Summing up the last inequality in (\ref{0.6}) from $n=0$ to $m$, we get with $t_{m}:=\sum_{n=0}^{m}q(x_{n},x_{n+1})$ that \begin{eqnarray}\label{0.7} t_{m}\xi\in f_{0}-f_{m+1}-K\left[f_{0}\right]\subset f_{0}-M-\Theta-K\left[f_{0}\right]\;\mbox{ for all }\;m\inI\!\!N\cup\{0\}. \end{eqnarray}
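In more detail, adding the inclusions $f_{n+1}+q(x_{n},x_{n+1})\xi\in f_{n}-K\left[f_{n}\right]$ for $n=0,\ldots,m$ telescopes to $f_{m+1}+t_{m}\xi\in f_{0}-\sum_{n=0}^{m}K\left[f_{n}\right]=f_{0}-K\left[f_{0}\right]$, which is the first inclusion in \eqref{0.7}, while the second one follows from the quasiboundedness assumption {\bf(A1)} ensuring that $f_{m+1}\in M+\Theta$.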
Let us next prove by passing to the limit in \eqref{0.7} as $m\rightarrow\infty$ that \begin{eqnarray}\label{0.8} \sum_{n=0}^{\infty}q(x_{n},x_{n+1})<\infty. \end{eqnarray} Arguing by contradiction, suppose that (\ref{0.8}) does not hold, i.e., the increasing sequence $\left\{t_{m}\right\}$ tends to $\infty $ as $m\longrightarrow\infty$. By the first inclusion in (\ref{0.7}) and the boundedness of the set $M$ taken from the quasiboundedness of $f$ from below, find a bounded sequence $\left\{w_{m}\right\}\subset f_{0}-M$ satisfying \[ t_{m}\xi-w_{m}\in-\Theta-K\left[f_{0}\right],\;\mbox{ i.e., }\;\xi -w_{m}/t_{m}\in-\Theta-K\left[f_{0}\right],\quad m\in I\!\!N. \] Passing now to the limit as $m\longrightarrow\infty$ and taking into account the closedness of $\Theta$, the boundedness of $\left\{w_{m}\right\}$, and that $t_{m}\rightarrow\infty $, we arrive at $\xi\in-\Theta-K\left[ f_{0}\right]$ in contradiction to the choice of $\xi\in\Theta_{K}\setminus(-\Theta-K\left[ f_{0}\right])$. Thus (\ref{0.8}) holds and allows us, for any $\varepsilon>0$, to find a natural number $N_{\varepsilon}\in I\!\!N$ so that $t_{m}-t_{n}=\sum_{k=n}^{m-1}q(x_{k},x_{k+1})\le\varepsilon$ whenever $m\ge n\ge N_{\varepsilon}$. Hence \[ q(x_{n},x_{m})\le\sum_{k=n}^{m-1}q(x_{k},x_{k+1})\le\varepsilon\;\mbox{ for all }\;m\ge n\ge N_{\varepsilon}, \] which means that $\left\{x_{k}\right\}$ is a (left-sequential) {\em Cauchy sequence} in the quasimetric space $(X,q)$. Since $X$ is left-sequentially complete, there is $x_{\ast}\in X$ such that $x_{k}\longrightarrow x_{\ast}$ as $k\longrightarrow\infty$. Taking into account that $W(x_{k+1},\omega_{k+1})\subset W(x_{k},\omega_{k})$ and the choice of $x_{k+1}$, we get the estimate \begin{eqnarray*} \mbox{\rm radius\,}W(x_{k},\omega_{k}):=\sup_{x\in W(x_{k},\omega_{k})}q(x_{k},x)\le q(x_{k},x_{k+1})+\frac{1}{k+1} \end{eqnarray*} ensuring that $\mbox{\rm radius\,}W(x_{k},\omega _{k})\downarrow 0$ as $k\rightarrow\infty$. It follows from the left-sequential completeness of $X$ and the left-sequential closedness of $W(x_{k},\omega_{k})$ that \begin{eqnarray}\label{0.9} \bigcap_{k=0}^{\infty }W(x_{k},\omega_{k})=\left\{x_{\ast}\right\}\;\mbox{ for some}\;x_{\ast}\in X. \end{eqnarray}
Now we justify the existence of $\omega_{\ast}\in\Omega(x_{\ast})$ such that $f_{\ast}:=f(x_{\ast},\omega _{\ast})\in\mbox{\rm Min}\,\big(F(x_{\ast}),K\left[f_{\ast}\right]\big)$ satisfies the relationships in (\ref{0.1}) and (\ref{0.2}). For each pair $(x_{k},\omega_{k})\in\mbox{\rm gph}\,\Omega$, define a subset of $\overline{\Omega}$ by \begin{eqnarray}\label{0.10}
R(x_{k},\omega_{k}):=\left\{\omega\in\Omega(x_{\ast})\big|\;f(x_{\ast},\omega)+q(x_{k},x_{\ast})\xi\le_{K\left[f_k \right]}f(x_{k},\omega_{k})\right\},\quad k\inI\!\!N. \end{eqnarray} Then we have the following properties:\vspace*{0.05in}
$\bullet$ The set $R(x_{k},\omega_{k})$ is {\em nonempty} and {\em closed} for any $k\inI\!\!N\cup\{0\}$ under the assumptions made. Indeed, the nonemptiness follows directly from $x_{\ast}\in W(x_{k},\omega_{k})$ and the definition of $W$ in (\ref{0.4}). The closedness property holds since $R(x_{k},\omega_{k})$ is the $(f(x_{k},\omega_{k})-q(x_{k},x_{\ast})\xi )$-level set of the mapping $f(x_{\ast},\cdot)$ with respect to the closed and convex cone $K\left[f_k\right]$ and since $f(x_\ast,\cdot)$ is assumed to be continuous. Furthermore, it follows from the inclusion $R(x_{k},\omega_{k})\subset\Omega(x_{\ast})\subset\overline{\Omega}$ and the compactness of $\overline\Omega$ that $R(x_{k},\omega_{k})$ is a compact subset as well.\vspace*{0.05in}
$\bullet$ The sequence $\left\{R(x_{k},\omega_{k})\right\}$ is {\em nonexpansive}. To verify it, pick any $w\in R(x_{k+1},\omega_{k+1})$ and get \[ f(x_{\ast},w)+q(x_{k+1},x_{\ast})\xi\in f(x_{k+1},\omega_{k+1})-K\left[f_{k+1}\right]. \] Combining this with (\ref{0.6}) and then using the quasimetric triangle inequality together with the equality $K\left[f_{k+1}\right]+K\left[f_{k}\right]=K\left[f_{k}\right]$ tells us that \begin{eqnarray*} f(x_{\ast},w)+q(x_{k},x_{\ast})\xi\in f(x_{k},\omega_{k})-K\left[f_{k}\right],\;\mbox{ i.e., }\;w\in R(x_{k},\omega _{k}), \end{eqnarray*} which therefore justifies that $w\in R(x_{k+1},\omega_{k+1})\subset R(x_{k},\omega_{k})$.\vspace*{0.1in}
It follows from the properties of $R(\cdot,\cdot)$ established above and the compactness of $\overline{\Omega}$ that there exists $\overline{\omega}\in\overline{\Omega}$ satisfying the inclusion \begin{eqnarray}\label{0.11} \bar\omega\in\bigcap_{k=0}^{\infty}R(x_{k},\omega_{k}). \end{eqnarray} Denoting $f_{\bar\omega}:=f(x_{\ast},\bar\omega)$ and forming the $f_{\bar\omega}$--level set of $f(x_{\ast},\cdot)$ over $\Omega(x_{\ast})$ by \begin{eqnarray}\label{0.12}
\Xi:=\left\{\omega\in\Omega(x_{\ast})\big|\;f_\omega:=f(x_{\ast},\omega)\le_{K\left[f_{\bar\omega}\right]}f(x_{\ast}, \bar\omega)=:f_{\bar\omega}\right\}, \end{eqnarray} we obviously have that $\Xi$ is compact with $\bar\omega\in\Xi $. Employ now \cite[Corollary~5.10]{l89}, which ensures in our setting the existence of $\omega_{\ast}\in\Xi$ such that \begin{eqnarray*} f_{\ast }=f(x_{\ast},\omega_{\ast})\in\mbox{\rm Min}\,\big(f(x_{\ast },\Xi ),K\left[f_{\bar\omega}\right])\;\mbox{ with }\;f(x_{\ast},\Xi):=\bigcup_{\omega\in\Xi} \left\{f_{\omega}:=f(x_{\ast},\omega )\in P\right\}. \end{eqnarray*} This reads by the definition of Pareto efficiency that \[ \big(f_{\ast}-K\left[f_{\bar\omega}\right]\big)\cap f(x_{\ast},\Xi)=\left\{f_{\ast}\right\}. \] Since $f_\ast\le_{K\left[f_{\bar\omega}\right]}f_{\bar\omega}$, we have $K\left[f_{\ast}\right]\subset K\left[f_{\bar\omega}\right]$; cf.\ the justifications for (\ref{0.5}). Thus we get \[ (f_{\ast}-K\left[f_{\ast}\right])\cap f(x_{\ast},\Xi)=\left\{f_{\ast}\right\},\;\mbox{ i.e., }\;f_{\ast}\in\mbox{\rm Min}\,(f(x_{\ast },\Xi ),K\left[f_{\ast}\right]). \] Actually the following stronger conclusion holds: \begin{eqnarray}\label{str} f_{\ast}\in\mbox{\rm Min}\,(F(x_{\ast}),K\left[f_{\ast}\right])\;\mbox{ with }\;F(x_{\ast})=f\big(x_{\ast},\Omega(x_{\ast})\big)\supset f(x_{\ast},\Xi). \end{eqnarray} Arguing by contradiction, suppose that \eqref{str} does not hold and find $\omega\in\Omega(x_{\ast})\setminus\Xi$ such that $f_{\omega}\le_{K\left[f_{\ast}\right]}f_{\ast}$. Since $\omega_{\ast}\in\Xi$, we have $f_{\ast}\le_{K\left[f_{\bar\omega}\right]}f_{\bar\omega}$. Then the transitivity assumption {\bf(H3)} ensures that $f_{\omega}\le_{K\left[f_{\bar\omega}\right]}f_{\bar\omega}$, and so $\omega\in\Xi$ contradicting the choice of $\omega\in\Omega(x_{\ast})\setminus\Xi$. This justifies \eqref{str}.\vspace*{.05in}
Now we are ready to show that the pair $(x_{\ast},\omega_{\ast})$ satisfies the conclusions (\ref{0.1}) and (\ref{0.2}) of our variational principle. The inequality in (\ref{0.1}) immediately follows from $\omega_{\ast}\in R(x_{0},\omega_{0})$. To verify (\ref{0.2}), suppose the contrary and find a pair $(x,\omega)\in\mbox{\rm gph}\,\Omega$ with $f(x,\omega ) \neq f(x_{\ast},\omega_{\ast })$ satisfying \begin{eqnarray}\label{0.13} f(x,\omega)+q(x_{\ast},x)\xi \in f(x_{\ast},\omega_{\ast})-K\left[f_{\ast}\right]. \end{eqnarray} Fix $k\in I\!\!N\cup\left\{0\right\}$ and sum up the three inequalities: (\ref{0.13}), (\ref{0.12}) with $\omega=\omega_{\ast}$, and (\ref{0.10}) with $\omega=\bar\omega$. This gives us, by taking into account the triangle inequality as well as the relationships $f_{\ast}\le_{K\left[f_{\bar\omega}\right]}f_{\bar\omega}\le_{K\left[f_{k}\right]}f_{k}$, $K\left[f_{\ast}\right]\subset K\left[f_{\bar\omega}\right]\subset K\left[f_{k}\right]$, and $K\left[f_{\ast}\right]+K\left[f_{\bar\omega}\right]+K \left[f_{k}\right]=K\left[f_{k}\right]$, that \[ f(x,\omega)+q(x_{k},x)\xi\in f(x_{k},\omega_{k})-K\left[ f_{k}\right],\;\mbox{ i.e., }\;x\in W(x_{k},\omega_{k}),\quad k\in I\!\!N. \] This means that $x$ belongs to the set intersection in (\ref{0.9}), and thus $x=x_{\ast}$. Substituting it into (\ref{0.13}), we obviously get $f(x_{\ast},\omega)+q(x_{\ast},x_{\ast})\xi\in f(x_{\ast},\omega_{\ast})-K\left[f_{\ast}\right]$ and reduce it to $$ f(x_{\ast},\omega)\in f(x_{\ast},\omega_{\ast})-K\left[f_{\ast}\right],\;\mbox{ i.e., }\;f(x_{\ast},\omega)\le_{K\left[f_{\ast}\right]}f(x_{\ast},\omega_{\ast}). $$ The latter shows that $f(x_{\ast},\omega)=f(x_{\ast},\omega_{\ast})\in\mbox{\rm Min}\,(F(x_{\ast}),K\left[f_{\ast}\right])$, which contradicts the assumption that $f(x,\omega)\ne f(x_{\ast},\omega_{\ast})$ and hence justifies \eqref{0.2}.\vspace*{0.05in}
To complete the proof of the theorem, it remains to estimate the distance $q(x_{0},x_{\ast})$ in \eqref{0.3} when $(x_{0},\omega_{0})$ is an approximate $\varepsilon\xi$--minimizer of $f$ over $\mbox{\rm gph}\,\Omega$. Arguing by contradiction, suppose that (\ref{0.3}) does not hold, i.e., $q(x_{0},x_{\ast})>\lambda$. Since $x_{\ast}\in W(x_{0},\omega_{0})$, we have \begin{eqnarray*} f(x_{\ast},\omega_{\ast})+\frac{\varepsilon}{\lambda}q(x_{0},x_{\ast})\xi\in f(x_{0},\omega_{0})-K\left[f_{0}\right], \end{eqnarray*} which together with $q(x_{0},x_{\ast})>\lambda$, the choice of $\xi\in\Theta_{K}\subset K\left[f_{0}\right]$, and the convexity of $K\left[f_{0}\right]$ yields $f(x_{\ast},\omega_{\ast})+\varepsilon\xi\in f(x_{0},\omega_{0})-K\left[f_{0}\right].$ This contradicts the approximate minimality of $(x_{0},\omega_{0})$ and thus ends the proof. $
\triangle$\vspace*{0.05in}
Finally in this section, we present a direct consequence of Theorem~\ref{EVP-VOS} for the case when the mapping $f$ does not depend on the control variable $\omega$, which also provides a new variational principle for systems with variable ordering structures and is used in what follows.
\begin{Corollary}{\bf (variational principle for parameter-independent mappings with respect to variable ordering).}\label{EVP-Cor1} Let $f=f(x)$ be a mapping from $X$ to $P$ with $\mbox{\rm dom}\, f\ne\emptyset$ in the setting of Theorem~{\rm\ref{EVP-VOS}} under the assumptions made therein. Then for any $\varepsilon>0$, $\lambda>0$, $x_{0}\in\mbox{\rm dom}\, f$, and $\xi\in\Theta_{K}\setminus(-\Theta-K\left[f(x_0)\right])$ with $\left\Vert\xi\right\Vert=1$ there is a point $x_{\ast}\in\mbox{\rm dom}\, f$ satisfying the relationships \begin{eqnarray}\label{0.14} f(x_{\ast})+\frac{\varepsilon}{\lambda}q(x_{0},x_{\ast})\xi\le_{K\left[f(x_{0})\right]}f(x_{0}), \end{eqnarray} \begin{eqnarray}\label{0.15} f(x)+\frac{\varepsilon}{\lambda}q(x_{\ast},x)\xi\nleq_{K\left[f(x_{\ast})\right]}f(x_{\ast})\;\mbox{ for all }\;x\in\mbox{\rm dom}\, f\;\mbox{ with }\;f(x)\ne f(x_{\ast}). \end{eqnarray} If furthermore $x_{0}$ is an approximate $\varepsilon\xi$--minimizer of $f$ with respect to $K\left[f(x_0)\right]$, then $x_{\ast}$ can be chosen so that in addition to \eqref{0.14} and \eqref{0.15} we have the estimate \eqref{0.3}. \end{Corollary} {\bf Proof.} It follows from Theorem~\ref{EVP-VOS} applied to the mapping $\widetilde{f}:X\times\overline{\Omega}\rightarrow P$ with $\widetilde{f}(x,\omega)=f(x)$ and a (compact) set $\overline{\Omega}$ consisting of just one point, say $\left\{\omega_{\ast}\right\}$. $
\triangle$
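Observe for orientation that in the classical scalar setting where $P=I\!\!R$, $K[p]\equiv\Theta=I\!\!R_{+}$, $\xi=1$, and $q$ is a metric, the relationships \eqref{0.14} and \eqref{0.15} reduce to $$ f(x_{\ast})+\frac{\varepsilon}{\lambda}q(x_{0},x_{\ast})\le f(x_{0})\;\mbox{ and }\;f(x)+\frac{\varepsilon}{\lambda}q(x_{\ast},x)>f(x_{\ast})\;\mbox{ whenever }\;f(x)\ne f(x_{\ast}), $$ while the approximate $\varepsilon\xi$-minimality of $x_{0}$ means that $f(x_{0})\le\inf\{f(x)|\;x\in\mbox{\rm dom}\, f\}+\varepsilon$ and the corresponding estimate \eqref{0.3} gives the localization $q(x_{0},x_{\ast})\le\lambda$. In this case Corollary~\ref{EVP-Cor1} thus yields the familiar conclusions of the Ekeland variational principle \cite{e74} (stated for all $x$ whose values differ from $f(x_{\ast})$).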
\section{Applications to Goal Systems in Psychology} \subsection{What Our Variational Principle Adds to Goal System Theory} {\bf (A) Variational rationality via variational analysis.} In the context of the variational rationality framework of \cite{s09,s10}, the new variational principle of Theorem~\ref{EVP-VOS} shows that, considering an {\em adaptive goal system} endowed with {\em variable cone-valued preferences} in the payoff space and a {\em quasimetric} on the space of means, under the (fairly natural) hypotheses of the theorem and starting from any feasible ``means-way of using these means" pair $\phi_{0}=(x_{0},\omega_{0})\in\Phi$, there exists a {\em succession of worthwhile changes} $\phi_{n+1}\in W(\phi_{n})$ with $n\in I\!\!N$, which ends at some {\em variational trap} $\phi_{\ast}=(x_{\ast},\omega_{\ast})\in\Phi$, where the agent {\em prefers to stay rather than to move}. The meaning of this is as follows; see the notation and psychological description in Section~2.
{\bf (i) Reachability and acceptability aspects along the transition}: we have $\phi_{\ast}\in W(\phi_{0})$, i.e., it {\em is worthwhile} to move directly from the starting means-ends pair $\phi_{0}$ to the ending one $\phi_{\ast}$.
{\bf (ii) Stability aspect at the end}: we have $W(\phi_{\ast})=\left\{\phi_{\ast}\right\}\Longleftrightarrow\phi\notin W(\phi_{\ast})$ for any $\phi\in\Phi$, $\phi\ne\phi_{\ast}$, meaning that it is {\em not worthwhile} to move from the means-ends pair $\phi_{\ast }$ to a different one.
{\bf (iii) Feasibility aspect along the transition}: if $\phi_{0}=(x_{0},\omega_{0})\in\Phi$ is any $\varepsilon\xi$--{\em approximate minimizer} to $G(\cdot)$, then $x_{\ast}$ can be chosen such that in addition to {\bf(i)} and {\bf (ii)} we have $C(x_{0},x_{\ast})\le\lambda$.
{\bf (iv) The end is efficient as a Pareto optimal solution}. This is shown in the proof of Theorem~\ref{EVP-VOS} and discussed above.\\[1ex] {\bf (B) When proof says more than statement.} Analyzing the statement and the proof of our variational principle in Theorem~\ref{EVP-VOS}, we can observe that--besides the variational trap interpretations, which are discussed above in {\bf(A)} and follow from the {\em statement} of the theorem--the {\em proof} itself offers much more from the psychological point of view. Indeed, the statement of Theorem~\ref{EVP-VOS} is an {\em existence result} while the proof provides a constructive {\em dynamical process}, which leads us to a solution. From the mathematical viewpoint, the situation is similar to the classical Ekeland principle with the proof given in \cite{e79}. From the psychological viewpoint, this is in accordance with the message popularized by Simon \cite{s55}: {\em the decision making process matters and can determine the end}. It is also a major point of the variational rationality approach \cite{s09,s10}: {\em to explain human desirable ends requires exhibiting human behavioral processes that can lead to them}. This means that desirable ends must be reachable in an acceptable way by using feasible means. In other words, if the agent starts from any ``mean-ways of using these means" pair, pursues his/her goals by exploring enough each step and performing a succession of worthwhile changes or stays, then he/she will end in a strong behavioral trap, i.e., a Pareto solution where it is preferable to stay rather than to move even without any resistance to change. The given proof of Theorem~\ref{EVP-VOS} reveals at least {\em four} very important points discussed below in the rest of this subsection.\\[1ex] {\bf(C) Worthwhile to change processes}. Parallel to \cite{s09}, which uses \cite{e79} in the case of nonadaptive models, the proof of Theorem~\ref{EVP-VOS} for the {\em adaptive} psychological models under consideration shows how the agent explicitly forms at each step a ``consideration set" to evaluate and balance his/her current motivation and resistance to change ``exploring enough" within the current worthwhile to change set trying to ``improve it enough" by inductively constructing a sequence of feasible pairs $\{(x_{n},\omega_{n})\}\subset\mbox{\rm gph}\,\Omega$. This nicely fits the famous concept of ``consideration sets" in marketing sciences defined first as ``evoked sets" by Howard and Sheth \cite{hs69}. The idea is that, at any given consumption occasion, consumers do not consider all the brands available while the current consideration/relevant set represents ``those brands that the consumer considers seriously when making a purchase and/or consumption decision" as discussed, e.g., in \cite{b80,bl81}. The size of the consideration set is usually {\em small} relative to the total number of brands that the consumer could evaluate. Then, using various heuristics, the consumer tries to simplify his/her decision environment.\\[1ex] {\bf(D) Variational traps as desirable ends}. The worthwhile to change dynamical process given in the proof of Theorem~\ref{EVP-VOS} allows the agent to reach a {\em variational trap} by a succession of worthwhile changes. In this model, it is a {\em Pareto ``means-way of using these means" pair}. Such a variational trap is related to two important concepts, {\em aspiration points} and {\em efficient points}, at the individual and collective levels discussed as follows.
{\bf (a) Aspiration points and Pareto points}. The Pareto point achieved in Theorem~\ref{EVP-VOS} is an aspiration point as defined in \cite{s09,s10} and then further studied and applied in \cite{fls12,ls12}. An {\em aspiration point} is such that, starting from any point of the worthwhile to change process, it is worthwhile to move directly (in an acceptable way) to the given point of aspiration. It represents the ``{\em rather easy to reach}" aspect of a variational trap while the other one (``difficult to leave") is more traditional as an equilibrium or stability condition.
{\bf (b) Optimal solutions}. The proof developed in Theorem~\ref{EVP-VOS} allows us to study other types of {\em optimal solutions/minimal points}; compare, e.g., \cite{bm07,bm10,grtz03,q12} for various notions of this kind employing iterative procedures to derive variational principles of the Ekeland type for them.
{\bf (c) Individual or collective aspects: agents versus organizations}. In this paper we focus on the individual aspects of goal systems. The case of organizations requires considering {\em bilevel optimization problems} with leaders and followers. This will be a subject of our future research.\\[1ex] {\bf (E) Variable preferences and efficiency for course pursuit processes}. Variable preferences can take different forms; we discuss them in more detail in Appendix~4. In this paper we pay our main attention to ordering structures defined by {\em variable cone-valued preferences}. Among other types of variable preferences we mention {\em attention based preferences} discussed in \cite{b13,bg12}. Such variable preferences can be modelized and resolved by the approach developed in the proof of Theorem~\ref{EVP-VOS}.\\[1ex] {\bf (F) Habituation processes as ends of course pursuit problems.} Our results help to modelize the emergence of {\em multiobjective habituation processes} with variable preferences. Such a formulation can represent agents who follow a habituation process with {\em multiple goals} as well as an organization, where each agent can have different goals. Then the procedure developed in the proof of Theorem~\ref{EVP-VOS} ends in a variational trap, which is a goal system habit for agents or a bundle of routines for organizations. This represents a habituation process in various areas of life, which is characterized by several properties such as repetitions, automaticity, control and economizing, etc.; see \cite{b94,s09,s10} for more details and discussions.
\subsection{When Costs to Be Able to Change ``Ways of Using Means" Do Matter}
Consider a more general problem to change from a ``means-way of using these means" feasible pair $\phi=(x,\omega)$ with $\omega\in\Omega(x)\subset\overline{\Omega }$ to a new pair $\phi^{\prime}=(x^{\prime},\omega^{\prime})$ with $\omega^{\prime}\in\Omega(x^{\prime})\subset\overline{\Omega}$. In this general behavioral case, the full costs $C\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]=C\left[\phi,\phi^{\prime}\right]$ to be able to change from a feasible pair $\phi=(x,\omega)$ with $\omega\in\Omega(x)$ to another feasible pair $\phi^{\prime}=(x^{\prime},\omega^{\prime})$ with $\omega^{\prime}\in\Omega(x^{\prime })$ must include the {\em two kinds of costs} in the sum: $C\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]=C_{X}(x,x^{\prime})+C_{\Omega}(\omega,\omega^{\prime})\in P$.
Suppose now in the line of Theorem~\ref{EVP-VOS} that such vectorial costs are {\em proportional} to a vector $\xi\in P$, i.e., $C\left[(x,\omega),(x^{\prime},\omega ^{\prime})\right]=q\left[\phi,\phi^{\prime}\right]\xi$, where $q\left[\phi,\phi^{\prime}\right]\inI\!\!R_{+}$ is a quasidistance on $\overline{\Phi}:=X\times\overline{\Omega}$. This quasidistance modelizes the {\em total costs} to be able to change from one pair to another.
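One natural way to arrive at such a representation of the full costs (used here only as an illustration) is to assume that each cost component is itself proportional to $\xi$, i.e., $C_{X}(x,x^{\prime})=q_{X}(x,x^{\prime})\xi$ and $C_{\Omega}(\omega,\omega^{\prime})=q_{\Omega}(\omega,\omega^{\prime})\xi$ with some quasidistances $q_{X}$ on $X$ and $q_{\Omega}$ on $\overline{\Omega}$, and then to set $q\left[\phi,\phi^{\prime}\right]:=q_{X}(x,x^{\prime})+q_{\Omega}(\omega,\omega^{\prime})$. The quasimetric axioms for $q$ on $\overline{\Phi}$ are inherited coordinatewise: nonnegativity and the triangle inequality follow from those for $q_{X}$ and $q_{\Omega}$, while $q\left[\phi,\phi^{\prime}\right]=0$ forces $q_{X}(x,x^{\prime})=q_{\Omega}(\omega,\omega^{\prime})=0$ and hence $\phi^{\prime}=\phi$.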
Let $\Phi:=\left\{(x,\omega)\;\big|\;\omega\in\Omega(x)\right\}\subset\overline{\Phi }$ be the subset of feasible ``means-way of using these means" pairs. Then the {\em worthwhile to change preference} over all the ``means-way of using these means" pairs $\phi=(x,\omega)\in\Phi$ is \begin{eqnarray*} \phi^{\prime}\ge_{K\left[f(\phi)\right]}\phi&\Longleftrightarrow&(x^{\prime},\omega^{\prime})\ge_{K\left[f(x,\omega)\right]}(x,\omega)\\ &\Longleftrightarrow&f(x^{\prime},\omega^{\prime})+C\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]\le_{K\left[f(x,\omega)\right]}f(x,\omega)\\ &\Longleftrightarrow&f(\phi^{\prime})+C\left[\phi,\phi^{\prime}\right]\le_{K\left[f(\phi)\right]}f(\phi), \end{eqnarray*} where $C\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]=q\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]\xi$ while the pairs $\phi=(x,\omega)\in\Phi \subset\overline{\Phi}$ and $\phi^{\prime }=(x^{\prime},\omega^{\prime})\in\Phi\subset\overline{\Phi}$ are feasible, i.e., $\omega\in\Omega(x)\subset\overline{\Omega}$ and $\omega^{\prime}\in\Omega(x^{\prime})\subset\overline{\Omega}$. In this general case, the previous worthwhile to change sets read as follows: \begin{eqnarray*}
W(x,\omega)&=&\left\{x^{\prime}\in X\big|\;\exists\;\omega^{\prime}\in\Omega(x^{\prime})\;\mbox{ with }\; (x^{\prime},\omega^{\prime})\ge_{K\left[f(x,\omega)\right]}(x,\omega)\right\}\\
&=&\left\{x^{\prime}\in X\big|\;\exists\;\omega^{\prime}\in\Omega(x^{\prime})\;\mbox{ with }\;f(x^{\prime},\omega^{\prime})+C\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]\le_{K\left[f(x,\omega)\right]}f(x,\omega )\right\}. \end{eqnarray*} Instead, we consider now the {\em new worthwhile to change set} defined by \begin{eqnarray*}
W(\phi):&=&\left\{\phi^{\prime}\in\Phi\big|\;\phi^{\prime}\ge_{K\left[f(\phi)\right]}\phi\right\}=\left\{\phi^{\prime}\in\Phi\big|\;f(\phi^{\prime})+q\left[\phi, \phi^{\prime}\right]\xi\le_{K\left[f(\phi)\right]}f(\phi)\right\}\\
&=&\left\{(x^{\prime},\omega^{\prime})\in\Phi\big|\;f(x^{\prime},\omega^{\prime})+q\left[(x,\omega),(x^{\prime},\omega^{\prime})\right]\xi \le_{K\left[f(x,\omega)\right]}f(x,\omega)\right\}. \end{eqnarray*}
In this setting, we can apply Corollary~\ref{EVP-Cor1}, where we replace the means $x\in X$ by the ``means-way of using these means" pairs $\phi= (x,\omega)\in\overline{\Phi}\subset X\times\overline{\Omega}$ to modelize such a situation. This variant has the following {\em two advantages}: {\bf(i)} it helps to modelize goal systems, where changing the ways of using means is costly; {\bf(ii)} it allows us to {\em drop the compactness} assumption on the set $\overline\Omega$ of ways imposed in Theorem~\ref{EVP-VOS}. Now the state space is that of pairs $\phi=(x,\omega)\in\overline{\Phi}$, the vectorial payoff mapping is that of unsatisfied needs $f:\phi\in\overline{\Phi}\longmapsto f(\phi)\in P$, and the real function $q(\phi,\phi^{\prime})\in I\!\!R_{+}$ denotes the quasidistance between two pairs of ``means-way of using these means." Then, in this context of ``means-way of using means" pairs, we reformulate Corollary~\ref{EVP-Cor1} as follows. \begin{Corollary}{\bf (variational principle in ``means-way of using these means" setting).}\label{cor2} Let $(\overline{\Phi},q)$ be a left-sequentially complete quasimetric space, and let $K:P\rightrightarrows P$ be a cone-valued ordering structure satisfying assumptions {\bf(H2)} and {\bf(H3)}. Consider a mapping $f:\overline\Phi\rightarrow P$ with $\mbox{\rm dom}\, f\ne\emptyset$ being a left-sequentially closed subset of $\overline{\Phi}$. Assume also that:
{\bf(A1)} $f$ is quasibounded from below on $\mbox{\rm dom}\, f$ with respect to a convex cone $\Theta$.
{\bf(A2)} $f(\cdot)+\frac{\varepsilon}{\lambda}q(\phi,\cdot)$ is level-closed with respect to $K(\cdot)$ for all $\phi\in\overline{\Phi}$ and $\varepsilon,\lambda>0$.\\[1ex] Then for any $\varepsilon$, $\lambda>0$, $\phi_{0}\in\mbox{\rm dom}\, f$, and $\xi\in\Theta_{K}\setminus(-\Theta-K\left[f_{0}\right])$ with $\left\Vert\xi\right\Vert=1$ and $f_{0}:=f(\phi_{0})$ there is a point $\phi_{\ast}\in\mbox{\rm dom}\, f$ satisfying the relationships \[ f(\phi_{\ast})+\frac{\varepsilon}{\lambda}q(\phi_{0},\phi_{\ast})\xi\leq_{K\left[f_{0}\right]}f(\phi_{0}), \] \[ f(\phi)+\frac{\varepsilon}{\lambda}q(\phi_{\ast},\phi)\xi\nleq_{K\left[f_{\ast}\right]}f(\phi_{\ast})\;\mbox{ for all }\;\phi\in\mbox{\rm dom}\, f\;\mbox{ with }\;f(\phi)\ne f(\phi_{\ast}), \] where $f_{\ast}:=f(\phi_{\ast})$. If furthermore $\phi_{0}$ is an approximate $\varepsilon\xi$--minimizer of $f$ with respect to $K\left[f_{0}\right]$, then $\phi_{\ast}$ can be chosen so that in addition to the two relationships above we have the estimate $q(\phi_{0},\phi_{\ast})\le\lambda$ corresponding to \eqref{0.3}. \end{Corollary} \textbf{Comments.} From the psychological point of view, Corollary~\ref{cor2} can be interpreted as follows. Starting from any ``means-way of using these means" pair $\phi_{0}\in\overline{\Phi}$, the agent who manages several goals by enduring costs to be able to change both the means used and the way of using them and whose next preference over the relative importance of each goal changes with the current pair $\phi$, can reach, in {\em only one worthwhile to change step}, a certain {\em variational trap} $\phi_{\ast}$, where it is not worthwhile to move. Moreover, given the desirability level $\varepsilon>0$ of the initial pair $\phi_{0}\in\overline{\Phi}$ and the size $\lambda>0$ of the limited resource, the agent accomplishes this worthwhile change in a {\em feasible way}, since the costs to be able to change $q(\phi_{0},\phi_{\ast})$ are lower than the resource constraint $\lambda$.
\section{Conclusion} The main mathematical result of this paper, Theorem~\ref{EVP-VOS}, as well as its consequences provide a far-reaching extension of the Ekeland variational principle aimed, first of all, to cover multiobjective problems with variable ordering structures. This major feature allows us to obtain new applications to adaptive psychological models within the variational rationality approach. Proceeding in this direction, we plan to develop in our future research further applications of variational analysis to qualitative and algorithmic aspects of adaptive modeling in behavior sciences. One of our major goals in this respect is to extend the variational rationality approach and the corresponding tools of variational analysis to decision-making problems, where ``all things can be changed", i.e., with changeable decision sets, payoffs, goals, preferences, and contexts/parameters. Note that in a different setting, where decision sets and parameters can change along some Markov chain, another approach to similar issues has been developed in the context of {\em habitual domain theory}; see \cite{ly09,ly11,ly12,yc10,yl09}. A detailed comparison between the variational rationality approach and that of habitual domain theory has been recently given in \cite{bs13}.
\small\rm\begin{thebibliography}{99}
\bibitem{ah96} Alber, S., Heward, W.: Twenty-five behavior traps guaranteed to extend your students' academic and social skills, {\em Interven. School Clinic} {\bf 31}, 285--289 (1996)
\bibitem{a04} Appadurai, A.: The capacity to aspire: culture and the terms of recognition, in {\em Culture and Public Action} (eds. V. Rao and M. Walton), pp.\ 59--84, The World Bank Publishers (2004)
\bibitem{abm05} Attouch, H., Buttazzo, G., Michaille, G.: {\em Variational Analysis in Sobolev and BV Spaces}, SIAM Publications (2005)
\bibitem{as11} Attouch, H., Soubeyran, A.: Local search proximal algorithms as decision dynamics with costs to move, {\em Set-Valued Var. Anal.} {\bf 19}, 157--177 (2011)
\bibitem{bw70} Baer, D., Wolf, M.: The entry into natural communities of reinforcement, in: {\em Control of Human Behavior} (eds. R. Ulrich, T. Stachnick, J. Mabry), pp.\ 319--324, Glenview, IL, Scott Foresman (1970)
\bibitem{b94} Bargh, J.: The four horsemen of automaticity: awareness, intention, efficiency, and control in social cognition, {\em Handbook of Social Cognition} {\bf 1}, 1--40 (1994)
\bibitem{bm07} Bao, T.Q., Mordukhovich, B.S.: Variational principles for set-valued mappings with applications to multiobjective optimization, {\em Control Cyber.} {\bf 36}, 531--562 (2007)
\bibitem{bm10} Bao, T.Q., Mordukhovich, B.S.: Relative Pareto minimizers for multiobjective problems: existence and optimality conditions, {\em Math. Program.} {\bf 122}, 301--347 (2010)
\bibitem{bm10a} Bao, T.Q., Mordukhovich, B.S.: Set-valued optimization in welfare economics, {\em Adv. Math. Econ.} {\bf 13}, 113--153 (2010)
\bibitem{bm13} Bao, T.Q., Mordukhovich, B.S.: Necessary nondomination conditions for set and vector optimization with variable structures, {\em J. Optim. Theory Appl.}, DOI 10.1007/s10957-013-0332-6 (2013)
\bibitem{b02} Baumeister, R.: Ego-depletion and self-control failure: an energy model of the self's executive function, {\em Self Identity} {\bf 1}, 129--136 (2002)
\bibitem{bh96} Baumeister, R., Heatherton, T.: Self regulation failure: an overview, {\em Psychol. Inquiry} {\bf 7}, 1--15 (1996)
\bibitem{bb13} Bello Cruz, J., Bouza Allende, G.: A steepest descent-like method for variable order vector optimization problems, {\em J. Optim. Theory Appl.}, to appear (2013)
\bibitem{bs13} Bento, G., Soubeyran, A.: Generalized inexact proximal algorithms: habit's formation with resistance to change, following worthwhile changes, {\em J. Optim. Theory Appl.}, to appear (2013)
\bibitem{b13} Bhatia, S.: Associations and the accumulation of preference, {\em Psychol. Rev.}, to appear (2013)
\bibitem{bg12} Bhatia, S., Golman, R.: Attention and reference dependence, Department of Social \& Decision Sciences, Carnegie Mellon University (2012)
\bibitem{bz05} Borwein, J.M., Zhu, Q.J.: {\em Techniques of Variational Analysis}, Springer (2005)
\bibitem{b09} Bridges, W.: {\em Managing Transitions: Making the Most of Change}, Nicholas Brealey Publishing, 3rd edition (2009)
\bibitem{b80} Brisoux, J.: {\em Le Ph\'{e}nom\`{e}ne des Ensembles \'{E}voqu\'{e}s: Une \'{E}tude Empirique des Dimensions Contenu et Taille}, Th\`{e}se de doctorat, Universit\'{e} Laval (1980)
\bibitem{bl81} Brisoux, J.E., Laroche, M.: Evoked set formation and composition: an empirical investigation under a routinized response behavior situation, in: {\em Advances in Consumer Research} (ed. K.B. Monroe), pp.\ 357--361, Ann Arbor, MI, Association for Consumer Research (1981)
\bibitem{cy02} Chen, G.Y., Yang, X.Q.: Characterizations of variable domination structures via nonlinear scalarization, {\em J. Optim. Theory Appl.} {\bf 112}, 97--110 (2002)
\bibitem{coss13} Cruz Neto, J.X., Oliveira, P.R., Soares Jr., P.A., Soubeyran, A.: Learning how to play Nash, potential games and alternating minimization method for structured nonconvex problems on Riemannian manifolds, {\em J. Convex Anal.} {\bf 20}, 395--438 (2013)
\bibitem{e11} Eichfelder, G.: Optimal elements in vector optimization with a variable ordering structure, {\em J. Optim. Theory Appl.} {\bf 151}, 217--240 (2011)
\bibitem{eh13} Eichfelder, G., Ha, T.X.D.: Optimality conditions for vector optimization problems with variable ordering structures, {\em Optimization}, to appear (2013)
\bibitem{e72} Ekeland, I.: Sur les probl\`{e}mes variationnels, {\em C. R. Acad. Sci. Paris} {\bf 275}, 1057--1059 (1972)
\bibitem{e74} Ekeland, I.: On the variational principle, {\em J. Math. Anal. Appl.} {\bf 47}, 324--353 (1974)
\bibitem{e79} Ekeland, I.: Nonconvex minimization problems, {\em Bull. Amer. Math. Soc.} {\bf 1}, 432--467 (1979)
\bibitem{fls12} Flores-Bazan, F., Luc, D.T., Soubeyran, A.: Maximal elements under reference-dependent preferences with applications to behavioral traps and games, {\em J. Optim. Theory Appl.} {\bf 155}, 883--901 (2012)
\bibitem{ft13} Farokhinia, A., Taslim, L.: On asymmetric distance, {\em J. Anal. Num. Theo.} {\bf 1}, 11--14 (2013)
\bibitem{grtz03} G\"{o}pfert, A., Riahi, H., Tammer, C., Z\u{a}linescu, C.: {\em Variational Methods in Partially Ordered Spaces}, Springer (2003)
\bibitem{gjn12} Guti\'{e}rrez, C., Jim\'{e}nez, B., Novo, V.: A set-valued Ekeland's variational principle in vector optimization, {\em SIAM J. Control Optim.} {\bf 47}, 883--903 (2008)
\bibitem{hkr98} Hammond, J.S., Keeney, R.L., Raiffa, H.: The hidden traps in decision making, {\em Harvard Business Review} (1998)
\bibitem{hm06} Heifetz, A., Minelli, E.: Aspiration traps, Discussion Paper 0610, Universit\'{a} degli Studi di Brescia (2006)
\bibitem{hs69} Howard, J.A., Sheth, J.N.: {\em The Theory of Buyer Behavior (Marketing)}, John Wiley \& Sons (1969)
\bibitem{kq11} Khanh, P.Q., Quy D.N.: On generalized Ekeland's variational principle and equivalent formulations for set-valued mappings, {\em J. Global Optim.} {\bf 49}, 381--396 (2011)
\bibitem{ly09} Larbani, M., Yu, P.L.: Two-person second-order games, Part II: Restructuring operations to reach a win-win profile, {\em J. Optim. Theory
Appl.} {\bf 141}, 641--659 (2009)
\bibitem{ly11} Larbani, M., Yu, P.L.: $n$-Person second-order games: A paradigm shift in game theory, {\em J. Optim. Theory Appl.} {\bf 149}, 447--473 (2011)
\bibitem{ly12} Larbani M., Yu P.L.: Decision making and optimization in changeable spaces, a new paradigm, {\em J. Optim. Theory Appl.} {\bf 155}, 727--761 (2012)
\bibitem{lm93} Levinthal, D., March, J.: The myopia of learning, {\em Strateg. Manag. J.} {\bf 14}, 95--112 (1993)
\bibitem{ln11} Liu, C.G., Ng, K.F.: Ekeland's variational principle for set-valued functions, {\em SIAM J. Optim.} {\bf 21}, 41--56 (2011)
\bibitem{l89} Luc, D.T.: {\em Theory of Vector Optimization}, Springer (1989)
\bibitem{ls12} Luc, D.T., Soubeyran, A.: Variable preference relations: existence of maximal elements, {\em J. Math. Econ.}, to appear (2013)
\bibitem{m91} March, J.G.: Exploration and exploitation in organizational learning, {\em Organiz. Science} {\bf 2}, 71--87 (1991)
\bibitem{m06} Mordukhovich, B.S.: {\em Variational Analysis and Generalized Differentiation, I: Basic Theory, II: Applications}, Springer (2006)
\bibitem{mos11} Moreno, F.G., Oliveira, P.R., Soubeyran, A.: A proximal algorithm with quasidistance. Application to Habit's Formation, {\em Optimization} {\bf 61}, 1383--1403 (2011)
\bibitem{p93} Plous, S.: {\em The Psychology of Judgment and Decision Making}, McGraw-Hill Book Company (1993)
\bibitem{q12} Qiu, J.H.: On Ha's version of set-valued Ekeland's variational principle, {\em Acta Math. Sinica} (English Series) {\bf 28}, 717--726 (2012)
\bibitem{r06} Ray, D.: Aspirations, poverty and economic change, in: {\em What We Have Learnt about Poverty} (eds. A. Banerjee, R. Benabou, D. Mookherjee), Oxford University Press (2006)
\bibitem{rw98} Rockafellar, R.T., Wets, J.-B.: {\em Variational Analysis}, Springer (1998)
\bibitem{s55} Simon, H.: A behavioral model of rational choice, {\em Quarterly J. Econom.} {\bf 69}, 99--118 (1955)
\bibitem{s09} Soubeyran, A. : Variational rationality, a theory of individual stability and change, worthwhile and ambidextry behaviors, Preprint, GREQAM, Aix-Marseille University (2009)
\bibitem{s10} Soubeyran, A.: Variational rationality and the unsatisfied man: routines and the course pursuit between aspirations, capabilities and beliefs, Preprint, GREQAM, Aix-Marseille University (2010)
\bibitem{s04} Stephen, F.: {\em The Power of Reinforcement}, SUNY Press (2004)
\bibitem{w74} Walras, L.: {\em El\'{e}ments d'\'{E}conomie Politique Pure}, Lausanne, Corbaz Publishers (1874)
\bibitem{yc10} Yu, P.L., Chen,Y.C.: Dynamic MCDM, habitual domains and competence set analysis for effective decision making in changeable spaces, in {\em Trends in Multiple Criteria Decision Analysis}, (eds. M. Ehrgott, J.R.F. Salvatore Greco), Chapter~1, Springer (2010)
\bibitem{yl09} Yu, P.L., Larbani M.: Two-person second-order games, Part 1: Formulation and transition anatomy, {\em J. Optim. Theory Appl.} {\bf 141}, 619--639 (2009) \end{thebibliography}
\section{Appendix~1. Practical Means-Ends Rationality} {\bf (A) Substantive rationality in economics and mathematics.} Simon \cite{s55} defines {\em rationality} as an {\em adequation between preestablished ends and some means to reach them}. A behavior (action or sequence of actions) is substantively rational when it allows the realization of some given desirable ends subject to given conditions and constraints. {\em Substantive rationality} represents perfect or global rationality, optimization evaluating the fit between desirable ends and feasible means in a comprehensive way. This includes the following {\em three steps}: {\bf(i)} the listing of all the possible alternatives/actions; {\bf(ii)} the determination of all the consequences that occur if the agent plans to adopt each of these alternatives in a deterministic or probabilistic way; {\bf(iii)} the evaluation of the consequences when the agent adopts these alternatives according to the preestablished ends, e.g., payoff functions, like costs, revenues, profits, and utilities. Then the agent is able to specify all the ends, to weight all of them, to examine and evaluate all the possible sets of means, to evaluate how well each given set of means achieves each end, to have the ability and resources to perform these evaluations, and finally to choose the set of means with the highest weighted score.\\[1ex] {\bf(B) Incremental rationality: ``muddling through" in political sciences.} In administrative sciences, Lindblom \cite{l59-1} considers incremental rationality via a non-global analysis. In this case we have:
$\bullet$ Ends and means are determined simultaneously since the agent knows his/her ends by considering the means, which the agent has in mind.
$\bullet$ To save the agent's limited resources (e.g., time, energy, money), many consequences are ignored since a full analysis is too costly. Then the evaluation of only major consequences should be provided. In the same vein, only a few means should be considered. Furthermore, evaluation of each of the considered means is incomplete: the consideration is only ``serious enough."
$\bullet$ The agent departs at each step from the status quo while not too much from it; hence the name ``branch" or ``muddling through" method. This means that he/she does not compare two new alternatives, but compares a new alternative to the old one (status quo) making small steps. Thus the agent, instead of using a rational comprehensive method, makes a finite succession of limited comparisons with respect to the status quo. In this way complex problems and decisions are significantly simplified.
$\bullet$ The choice among the means is determined by the agreement among interested parties when it concerns a group of agents. The focus is on {\em incremental objectives} while social objectives may have different value weights in different circumstances. Individuals may be unable to rank their own values when they are conflicting. Participants can also disagree on weights of critical values and even on sub-objectives.\\[1ex] {\bf(C) Bounded and procedural rationality in decision theory.} In decision theory, Simon \cite{s55-1} considers {\em choice problems} instead of production problems, where agents, being bounded rational (they have limited resources and information), choose to be {\em procedural rational}. They search until these goals are satisfied by using heuristics for practical reasons and ignoring some alternatives to focus attention on a smaller subset of potential promising ones. In this way agents try to simplify their choice problems to economize their limited cognitive resources by taking into account their incomplete and inaccurate knowledge about the consequences of their actions. To simplify their choice, they accept to just {\em satisfy}, instead of to {\em optimize}, trying to find a course of action that is ``good enough," instead of being the best one. Since the goal to satisfy is less demanding, this relaxation procedure limits the sequential search for satisficing alternatives. Global/substantive rationality is only concerned with what is the result of the choice. Procedural rationality focuses on how the choice is done via a sequential search process, which stops when a satisfactory alternative is found. In a complex situation (e.g., too many alternatives and a long list of criteria required for a sufficiently precise evaluation), the process the agent chooses--to be able at the second stage selecting a solution among so many alternatives--will change the final decision. Hence the final choice of a suitable alternative depends of the choice process, which uses algorithms, procedures and computations. A major goal in \cite{s55-1} is to discover the {\em symbolic processes} that people use in thinking, using an analogy between the computer and the human mind. Bounded rationality and procedural rationality are seen as complementarities. It is written in \cite{s55-1} that ``...bounded rationality does the critical part while procedural rationality does the assertive one..."\\[1ex] {\bf (D) Problem solving in cognitive psychology.} In this paragraph, we take benefit of an excellent survey in Wikipedia (see ``Cognitive psychology and cognitive neuroscience. Problem solving from an evolutionary perspective"), which considers problem solving, problem finding, and problem shaping as interrelated processes. Let us focus our attention to problem solving, which deals with any given situation that differs from a desired goal. This concerns the following major issues.
$\bullet$ {\bf Problem solving concept.} It is written in Wikipedia about this concept: ``Every problem is composed of an initial state, intermediate states, and a goal state (also: desired or final state) while the initial and goal states characterize the situations before and after solving the problem. The intermediate states describe any possible situation between initial and goal state. The set of operators builds up the transitions between the states. A solution is defined as the sequence of operators which leads from the initial state across intermediate states to the goal state."
$\bullet$ {\bf Difficult problems.} Again from there: ``Difficult problems have some typical characteristics that can be summarized as follows: intransparency (lack of clarity of the situation), commencement opacity, continuation opacity, polytely (multiple goals), inexpressiveness, opposition, transience, complexity (large numbers of items, interrelations and decisions), innumerability, connectivity (hierarchy relation, communication relation, allocation relation), heterogeneity, dynamics difficulties (time considerations), temporal constraints, temporal sensitivity, phase effects, dynamic unpredictability...The resolution of difficult problems requires a direct attack on each of these characteristics that are encountered."
$\bullet$ {\bf Well defined and ill defined problems.} Wikipedia says: ``{\em Well-defined} problems are such that it is possible to find an algorithmic solution. They can be properly formalized as: {\bf(i)} problems having clearly defined given states; {\bf (ii)} there is a finite set of operators, i.e., rules the agent may apply to given states; and {\bf (iii)} problems having clear goal states. For {\em ill-defined} problems (involving creativity) it is not possible to clearly define a given state and a goal state. Nevertheless, they often involve sub-problems that can be totally well-defined. Gestalt psychologists considered problem solving in situations requiring some novel means of attaining goals. In this context, problem solving requires a representation in a person's mind, and a reorganization or restructuring of this representation. Problem representations mean to model the situation as experienced by the agent to analyze it and split it into separate components: objects, predicates, state space, operators, selection criteria. In a goal-oriented situation, either the agent reproduces the response to the given problem from past experience (reproductive thinking), or the agent needs something new and different (insight) to achieve the goal. In this case, prior learning is of little help (productive thinking). Sometimes, previous experience or familiarity can even make problem solving more difficult. This is the case whenever habitual directions get in the way of finding new directions--an effect called fixation. ``...Functional fixedness concerns the solution of object-use problems. The basic idea is that when the usual way of using an object is emphasized, it will be far more difficult for a person to use that object in a novel manner." Also ``...mental fixedness represents a person's tendency to respond to a given task in a manner based on past experience (mental set)..." This list goes on and on.
$\bullet$ {\bf Problem-solving strategies}. The aforementioned Wikipedia paper details a long list of methods to solve a problem. ``The simplest method is to search for a solution by just trying one possibility after another." This method is often called {\em trial and error}. Means-end analysis provides yet another approach. It ``reduces the difference between initial state and goal state by creating subgoals until a subgoal can be reached directly." Problem-solving strategies include analogies, which ``describe similar structures and interconnect them to clarify and explain certain relations..."
Furthermore, ``...problem-solving strategies are the steps that one would use to find the problem(s) that are in the way to getting to one's own goal... In this cycle one will recognize the problem, define the problem, develop a strategy to fix the problem, organize the knowledge of the problem, figure-out the resources at the user's disposal, monitor one's progress, and evaluate the solution for accuracy. Although called a cycle, one does not have to do each step in order to fix the problem, in fact those who don't are usually better at problem solving. The reason it is called a cycle is that once one is completed with a problem another usually will pop up...The following techniques are usually called problem-solving strategies: abstraction, analogy, brainstorming, divide and conquer: breaking down a large complex problem into smaller solvable problems, hypothesis testing, lateral thinking, means-ends analysis, method of focal objects (synthesizing seemingly non-matching characteristics of different objects into something new), morphological analysis (assessing the output and interactions of an entire system), proof (try to prove that the problem cannot be solved; the point where the proof fails will be the starting point for solving it), reduction (transforming the problem into another problem for which solutions exist), research (employing existing ideas or adapting existing solutions to similar problems), root cause analysis (identifying the cause of a problem), trial-and-error (testing possible solutions until the right one is found)..."\\[1ex] {\bf(E) Practical rationality in philosophy and artificial intelligence.} In philosophy, practical rationality is the use of reasons to decide {\em how to act}; namely, whether a prospective course of action is worth pursuing. {\em Practical reasoning} is the reasoning directed towards actions, i.e., deciding what to do; see Bratman \cite{b87-1}. It weighs conflicting considerations for and against competing options relative to the agent's desires, values, and beliefs. {\em Theoretical/speculative reasoning} is the use of reasons to decide what to believe: the truth of contingent events as well as necessary truths. Finally, {\em productive/technical reasoning} attempts to find the best means for a given end. Means-end rationality advocates that agents have certain interests and rationality consists of acting to promote these interests by using some means. {\em Means-ends reasoning} is concerned with finding the means for achieving goals. The problem solver begins by focusing on the end/final goal and then determines a plan, or a strategy, for attaining the goal starting from his/her current situation. {\em Plan construction} is at the core of this backward consideration process. This method is used in artificial intelligence. Working backward, it sets up smaller sub-goals, which complement the goal, and then constantly re-evaluates the performance of those sub-goals. By completing them, the agent approaches the final goal step by step. The overall goal is split into objectives, which in turn are split into individual steps or actions, taking into account that ``every attainable end is in itself a means to a more general end" (see Pollock \cite{p02-1}).
In artificial intelligence, the belief-desire-intention models of agency (Bratman \cite{b87-1}, Rao and Georgeff \cite{rg95-1}, Georgeff et al. \cite{gptw99-1}) model practical reasoning as the succession of two main activities: {\bf (i)} {\em deliberation}, i.e., deciding what to do, and {\bf (ii)} {\em means-ends reasoning}, i.e., deciding how to do it. Deliberation examines what an agent wants to achieve, considering preferences, choosing goals, etc. Deliberation then generates intentions, i.e., plans that are turned into actions and act as the interface between deliberation and means-ends reasoning. Means-ends reasoning is used to determine how the goals are to be achieved, e.g., thinking about suitable actions, resources, and how to organize activity.\\[1ex] {\bf(F) Means-ends control beliefs in psychology.} In the context of the expectancy-valence theory of self regulation, Skinner et al. \cite{scb88-1} define the concept of {\em perceived control} in terms of three independent sets of beliefs, namely: {\em control beliefs} (expectations about the extent to which agents can obtain desired outcomes), {\em means-ends beliefs} (expectations about the extent to which certain potential causes produce outcomes), and {\em agency beliefs} (expectations about the extent to which agents possess potential means).
\small\rm
\end{document} | arXiv |
First Hardy Littlewood Conjecture
The first Hardy-Littlewood conjecture, also known as the k-tuple conjecture, is concisely presented here. However, I cannot find a paper explaining how Hardy and Littlewood came to such a conjecture. How is their statement justified? Where can the intuition behind the statement be understood? What paper presents a clear introduction to the conjecture and how it arose?
number-theory reference-request prime-numbers
Romain S
$\begingroup$ Perhaps useful: mathoverflow.net/questions/54223/whence-the-k-tuple-conjecture $\endgroup$
– Matthew Conroy
$\begingroup$ Also: mathoverflow.net/questions/52700/… $\endgroup$
As pointed out in the comment section, you can read the article "Linear Equations in Primes" by Green and Tao, available on the ArXiv here:
http://arxiv.org/abs/math/0606088
in which they mention the works of Dickson as instrumental in the conception of such conjectures. In particular, you might be interested in reference [12].
In general, the intuition behind such conjectures involving specific sets of prime numbers is to observe their asymptotic behaviour using similar arguments as for all the primes, then use other arguments specific to the set in question to refine that asymptotic. For prime $k$-tuples, we take into account the number of open residue classes relative to the primes in each constellation in order to conjure up a multiplicative constant, similar to the twin-prime constant, that represents these residue classes.
In particular, given a prime constellation with $k$ members denoted by $P_k$ and a positive integer $n$,
$$ \pi_{P_k}(n) \sim C_{P_k} \int _{2}^{n}{dt \over (\log t)^{k}}, $$
where $\pi_{P_k}(n)$ denotes the number of primes $p\leq n$ such that $(p,p+\ldots)\in P_k$ and $C_{P_k}$ is the constant computed using the various open residue classes relative to the constellation.
Klangen
| CommonCrawl |
NABLU
Numerical Analysis Better Left Unsaid (and other musings of interest)
New approach to screw threads in OpenSCAD
This is an intellectual exercise with a practical end-result. There are better approaches to this problem, but the approach I describe here is interesting.
A screw thread from a stack of discs
Imagine a stack of 50 coins, stacked so that each coin is slightly offset from the one below it, with the offset moving around in a circle with each new coin. Here is what it looks like for fifty coins 20 mm in diameter and 0.5 mm thick, offset by 2 mm from a center of rotation, and stacked with a rotation offset of 30 degrees per coin (click to enlarge any image in this article):
Obviously, this is a spiral stack of coins. But it also approximates screw threads. The thread profile looks like a sinewave rather than the trapezoidal profile of machine screw threads.
Indeed, others have observed this, and used the freeware parametric CAD program OpenSCAD to make screw threads by extruding an offset circle while twisting it, as in this model on Thingiverse, for example.
From a distance, it looks OK...
...but the problem with that approach is that you end up with a thread profile having serrated surfaces, because the edges of any given facet get stretched and twisted way out of plane. Here is that Thingiverse model viewed from the side, using 20-sided circles:
For the purpose of 3D printing, as long as those serrations are much smaller than the resolution of the printer, the end result comes out smooth enough, at the expense of having way too much unnecessary detail in the model. Even so, that's the fastest way to generate threads that I know of.
But what if I could generate smooth threads with this stacked-circle technique?
Smooth threads using stacked discs
To make a smoother version of this wave thread, I wrote an OpenSCAD script to generate a polyhedron made of a spiral stack of polygon circles, with all surfaces created by connecting the same vertices of each circle. Then, because each polygon isn't rotated but just translated in a straight line by some offset on each layer (with the offset changing angular direction each layer), all of the polygons are parallel. That means the final polyhedron is made of flat rectangular faces. This is what it looks like with 20-sided circles:
That's much smoother. No serrations at all. It generates pretty fast, although not as fast as OpenSCAD's native "extrude with twist" operation. Notice how the vertical "seams" connecting the circles' vertices trace a spiral path along the screw shaft.
However, this isn't a true sinewave profile, because the sinewave is in the spiral path of the seams, not the cross-section straight up the middle of the screw shaft. Here's a deep-thread-depth stacked-circle screw shaft cut in half lengthwise, so you can see the actual profile:
Notice that the inside curves are rounder than the outside curves? This is actually better than a true sinewave for a 3D-printed screw thread, because the space between the threads is slightly wider than the actual threads, allowing the threads to mate reasonably well with standard machine threads. If I want a true sinewave thread profile, then I need to stack some sort of non-circular shape to get it.
ISO thread profile using stacked discs
The next obvious question is: why restrict myself to circles? By using other shapes, I should be able to create other thread profiles.
I decided to see if I could create ISO metric screw threads using some flat cross-sectional shape that would produce the desired thread profile when stacked with a rotation offset.
This drawing describes the thread profile as an x-y plot with pitch as the x axis and radius as the y axis:
If the screw shaft is printed vertically, the 30° face angle may not work well, depending on your printer. The traditional "45 degree rule" for 3D printing says to avoid overhangs less than 45° from horizontal. In my case, using PrusaSlicer 2.1 with my Prusa i3 MK3S, the slicer doesn't identify any part of a standard ISO screw thread as an "overhang perimeter" so it should be OK as is for most modern printers. At less than 30°, however, the slicer does start identifying overhang perimeters, but angles shallower than the ISO standard aren't needed for fitting threads to machine parts. The primary exception that comes to mind would be a worm drive thread, which should have a nearly-flat thread face angle. Even then, only one face of the worm gear thread is under load, so it can be printed face up with a 45° angle on the unloaded underside face of the thread. However, the approach to thread generation described here doesn't work well for horizontal thread face angles, as I explain near the end of this article.
If the 3D printer doesn't work reliably for printing overhang slopes shallower than 45° (or if you're printing really large threads) it may be a good practice to increase the thread face angle from 30° to 45°. Fortunately, the ISO standard defines the parameter \(H\) (and therefore thread depth) in this figure as a function of angle \(\theta\) and thread pitch \(P\):
$$H = \frac{P}{2\tan\theta}\tag{1}$$
When \(\theta=30^\circ\), the threads are ISO threads. Regardless of the angle, the proportions shown in the figure still hold: The screw diameter is always the outer thread edge surface, which always has thickness \(P/8\), the inner diameter surface always has thickness \(P/4\), and the thread depth is always \(5H/8\).
That means I can define a dimensionless profile that defines a peak-to-peak pitch interval, with pitch ranging from 0 to 1, and thread depth ranging from -1 (inner diameter) to +1 (outer diameter).
function ISO_ext_thread_profile() = [
[0, 1], // middle of outer edge
[1/16, 1], // top of outer edge
[0.5-1/8, -1], // bottom of inner edge
[0.5+1/8, -1], // top of inner edge
[1-1/16, 1], // bottom of next higher outer edge
[1, 1] // middle of next higher outer edge
];
Graphically, it looks like this:
From pitch=0 to pitch=1 is one complete rotation; therefore, the horizontal axis not only represents thread pitch, but also rotation angle from 0° to 360°. The thread generator scales the depth to the desired value, and uses the profile to generate a stack of flat polygons to create a screw thread.
Let's say that we want to make a metric threaded rod 4 mm in diameter. Looking up the coarse thread pitch for 4 mm diameter gives a pitch of 0.7 mm. For a thread with a 30° face angle, the thread depth \(5H/8\) using equation (1) for \(H\) is about 0.38.
And here it is, the cross-section of an ISO 4-mm screw:
It looks almost like a circle. If you look closely, however, you'll see it's made from four curves:
There's a small circle arc on the right, intersecting x=2, with an arc radius equal to the screw radius (2 mm for a 4 mm diameter screw).
There's another circle arc on the left, intersecting x=−1.62 (0.38 from x=−2), with an arc radius equal to the screw's inner radius at the thread depth.
A non-constant radius curve at the top connects the left and right arcs.
Another non-constant radius curve at the bottom connects the left and right arcs.
Those two non-constant radius curves form the sloped faces of the threads. As these polygons stack vertically, they rotate by one angular step for each layer. The angular step size is equal to the angular step size used to generate the polygon shape, so that for each rotational increment, the polygon vertices are always vertically aligned. Because this is an irregular shape, some polygon edges won't be parallel on each rotational step, so we won't end up with nice clean rectangular facets everywhere. We get rectangular facets where the stacked polygon edges are parallel (like the inner and outer radius surfaces), and we'll also get triangles at the transitions between the different surfaces. But that's OK.
Here's how that shape stacks up into a screw thread, using 32 angular steps per full rotation:
Using this algorithm, I can easily create new thread profiles. Here are an ISO thread, a sinewave thread, a sinewave double thread, and a triangular thread:
About that sinewave thread: The stacked discs used to make a true sinewave profile have a cardioid shape. In the picture above, it's roughly like a circle squashed a bit on one side, but at the extreme where the inner radius is zero, it's a cardioid with a cusp. Here is what the cardioids look like for a thread depth of 1/4, 1/2, and 1 times the radius of the rod, with the center of rotation shown as a black dot:
To make the original stacked-circle thread described at the beginning of this article, the profile is based on the generalized polar-coordinate equation of a circle offset from the origin by the thread depth, resulting in the distorted sinewave profile shape seen earlier.
Final touches for 3D printing
I need to add some final touches:
Lead-in taper
Radius adjustment for nonstandard thread face angles
ISO hexagon shapes for nuts and screw heads
Facet reduction
A machine screw thread generally has a taper at the beginning. For about a quarter-turn the thread transitions between minimum and maximum diameter.
For nuts, constructing a lead-in is simple: just bevel a cone out of each side of the nut, so that the cone diameter at the surface of the nut equals the thread outer diameter. Then the thread has a nice lead-in for a full turn on each end.
For screws, the thread depth and outer radius of each stacked disc must shrink slightly in the layers near the tip of the screw shaft until the depth shrinks to zero. Now, with this stacked-disc approach, the resulting tapered thread gets distorted because the thread profile along the screw shaft isn't being shrunk all at once; the profile has a different depth scaling on one end of the pitch interval compared to the other. For an ISO thread, this results in the flat outer edge of the thread transitioning to a rounded edge as the stacked discs get smaller:
Nevertheless, that result is satisfactory for the purpose of a lead-in.
Radius adjustment for nonstandard angles
As I said earlier, it shouldn't be necessary to change the ISO thread face angle for 3D printing. There may be reasons to do this, though; perhaps you have large threads with big overhangs, or you're using a printing material that tends to sag. In that case, a thread face angle of 45° results in a smaller thread depth than one gets from using the ISO standard 30°. For a 3D printed plastic screw to fit into a metal ISO threaded hole, it is important to retain the inner diameter, which means the outer diameter must be adjusted smaller to fit the thread face angle.
The adjustment is simple: Just use equation (1) to calculate the thread depth \(5H/8\) for 30° and 45°, and subtract the difference from the overall outer diameter of the screw. That way the inner diameter is retained and the screw can fit into metal holes.
Internal threads (threaded holes) are probably the most common use-case for 3D printing. A threaded hole is constructed by subtracting a threaded rod from a solid object. In this case, the outer diameter is critical for a metal ISO screw to fit into the hole, so the inner diameter must adjust outward by the difference in depth between 30° and 45° threads, while keeping the original outer diameter fixed.
After completing the challenge of constructing a threaded rod, it's easy to put a screw head on it, or subtract the rod from a solid to make a threaded hole. So why not use ISO standards for screw heads and nuts? The website Engineer's Edge has some useful tables for the dimensions of standard metric hex nuts and ISO 4014 hex screw heads that can be programmed as a lookup table. OpenSCAD's lookup() function returns an interpolated value if the lookup key isn't present, so we can get a good hex nut or screw head size for any arbitrary shaft diameter.
It turns out that the hexagon diameter, as a function of thread diameter, is the same for both screw hex heads and nuts. The heights are different, though: for a given diameter, a nut is a bit taller than a screw head. Also, both nuts and screws have beveled edges, typically 30° or less, but I'll use 45° for easier printing (and I think it looks better too).
Here's how it turned out for an M4 screw and nut, showing a comparison between 30° and 45° thread face angles, on an ISO head and nut:
It may be hard to see, but you may notice in the second picture (using 45° threads) that the outer diameter is smaller on the screw, and the inner diameter is larger on the nut. That screw and that nut are intended to mate with a corresponding metal nut and screw, respectively, for a snug fit. Due to the diametric differences, they would fit loosely to one another.
Ideally, the thread geometry should have these desired properties:
regular mesh facets (triangles and quadrilaterals) with no unusually large aspect ratios and facet size differences
minimal number of polygons necessary to reproduce the thread profile
thread geometry compatible with a model needing, say, a threaded hole; that is, the threads shouldn't account for the bulk of the model's facets
One thread pitch interval is equivalent to one rotation. The models shown so far have the same number of vertices along one pitch interval as around the circumference of the screw shaft. For the ISO thread, this results in many more facets than needed on the load-bearing thread surfaces, as well as along the inner shaft surface.
The problem can be mitigated somewhat by retaining the resolution of each stacked disc, but rotating each new disc by multiple resolution steps, and raising the height of each new disc by the same multiple of the original pitch steps.
For example, say we have a polygon disc made from 64 segments. Instead of rotating each successive disc by 1/64 of a circle and raising it by pitch/64 on each iteration, we can rotate each disc 4/64 (1/16) of a circle and raise it by pitch/16. All 64 vertices of each stacked disc still line up so they can be connected as usual.
And with 1/4 of the discs, we also reduced the number of facets to 1/4 of what was originally needed!
Here's a comparison between a facet-reduced sinewave thread and the original sinewave thread. The discs in both images have 64 segments, but the facet-reduced one on the left uses only 16 steps per pitch interval instead of 64:
The facet-reduced model looks more coarse, which is expected, and the polygons aren't terribly elongated. Given that this is a screw shaft with a diameter of 4 mm, the resolution is more than sufficient for 3D printing.
Let's see what happens with the ISO thread, same parameters as the figure above with only the thread profile changed:
Here, the angular transitions in the thread profile are messier, but at this scale it's still adequate for 3D printing.
Limitations of this approach
Making threads by stacking 2-dimensional polygons is an interesting intellectual exercise, as I stated in the beginning. However, this approach has some drawbacks:
Still too many facets
While I managed to create a thread that eliminates the serrated edges of a twisted extrusion, and I reduced the facet count, there are still more facets than needed to describe the shape. Using fewer steps along the pitch interval than around the circumference did reduce the facet count, but even so, there are still redundant facets on the ISO thread faces. In the previous figure, those surfaces that appear as quadrilaterals on the thread faces are actually made from multiple coplanar facets due to the spacing interval being smaller than the size of the thread face. The spacing interval is limited in size by the smallest feature, in this case the outer edge of the thread. And even at this limit, the slope transitions in the thread profile get messy-looking.
Except for a sinewave thread that continually changes slope, for most threads with flat faces, we don't need so many samples along the pitch of the thread. An ISO thread has a trapezoid profile; it needs only four numbers to define it, but we have more samples than needed because of the equal spacing of the pitch steps, which can be no bigger than the smallest segment of the thread profile. Equal spacing is required because one pitch interval equals one rotation and each pitch step corresponds to a rotation step, and the rotation steps must all be equal size.
Zero degree thread faces aren't possible
Typically a screw thread experiences a load on one face, and none on the other face. A thread profile optimized for 3D printing (not needing ISO features) would have a 0° angle on the load-bearing face and a 45° (or greater) angle on the unloaded face. That would be the strongest thread for 3D printing.
A zero-degree face, however, would require a thread profile with a discontinuity in it. The best one could do is approximate it (say 0.1°), but then the profile would need to be tightly sampled to resolve that rapid change from maximum to minimum thread radius. And tight samples are unnecessary everywhere else except for that narrow transition. Resolving that narrow transition means having far too many unnecessary facets elsewhere.
Best appearance has too many stretched facets
The threads look best when using the same number of steps for circumference and pitch. But then the facets have a high aspect ratio because the thread circumference is much larger than the thread pitch. The facets appear more uniform if you make the pitch size approximate the circumference, but then you don't have functional threads, you just have an interesting undulating shape.
I enjoyed this exercise. In the end I have some OpenSCAD code that I can use to generate a screw or nut or threaded hole in a part, easily and quickly, even though the geometry isn't optimal.
The thread generation algorithm described here, with OpenSCAD source code, is available for download in my Nuts&bolt baby dexterity toy on Thingiverse.
For complex models with multiple threaded elements, however, I lean toward the alternate approach of extruding a thread profile along a spiral path, which makes more efficient use of polygons and allows for arbitrary thread profiles that include horizontal thread faces, or even concave faces. There's a helpful article "Generating Nice Threads in OpenSCAD" about that subject, including a GitHub repository for PET bottle threads.
Copyright © 2010-2020 by Alex Matulich
Alex Matulich is a technical program manager with a background in physics and mathematics, who occasionally has ideas related to science, numerical methods, or other interests, and writes about them.
\begin{document}
\newcommand{\xyprot}{}
\title{An ${\rm A}_∞$-structure on the cohomology ring of the symmetric group $\Sy_p$ with coefficients in $ {\mathbb{F}_{\!p}} $} \author{Stephan Schmid}
\maketitle
\begin{small} \begin{quote} \begin{center}{\bf Abstract}\end{center}\vspace*{2mm}
Let $p$ be a prime. Let $ {\mathbb{F}_{\!p}} \!\Sy_p$ be the group algebra of the symmetric group over the finite field $ {\mathbb{F}_{\!p}} $ with $| {\mathbb{F}_{\!p}} |=p$. Let $ {\mathbb{F}_{\!p}} $ be the trivial $ {\mathbb{F}_{\!p}} \!\Sy_p$-module. We present a projective resolution $\pres {\mathbb{F}_{\!p}} $ of the module $ {\mathbb{F}_{\!p}} $ and equip the Yoneda algebra $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ with an ${\rm A}_∞$-structure such that $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ becomes a minimal model of the dg-algebra $\Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}(\pres {\mathbb{F}_{\!p}} , \pres {\mathbb{F}_{\!p}} )$. \end{quote} \end{small}
\renewcommand{\arabic{footnote}}{\fnsymbol{footnote}} \footnotetext[0]{MSC 2010: 18G15.} \renewcommand{\arabic{footnote}}{\arabic{footnote}}
\tableofcontents
\subsection{Introduction} \paragraph{${\rm A}_∞$-algebras} Let $R$ be a commutative ring. Let $A$ be a $\mathbb{Z}$-graded $R$-module. Let $m_1:A→A$ be a graded map of degree $1$ with $m_1^2=0$, i.e.\ a differential on $A$. Let $m_2:A\otimes A→A$ be a graded map of degree $0$ satisfying the Leibniz rule, i.e.\ \begin{align*} m_1\circ m_2 = m_2\circ(m_1\otimes 1 + 1\otimes m_1). \end{align*}
The map $m_2$ is in general not required to be associative. Instead, we require that for a morphism $m_3:A^{\otimes 3}→A$, the following identity holds. \[m_2\circ (m_2\otimes 1 - 1\otimes m_2) = m_1\circ m_3 + m_3\circ(m_1\otimes 1^{\otimes 2} + 1\otimes m_1 \otimes 1 + 1^{\otimes 2} \otimes m_1)\] Following \name{Stasheff}, cf.\ \cite{St63}, this can be continued in a certain way with higher multiplication maps to obtain a tuple of graded maps $(m_n:A^{\otimes n}→ A)_{n\geq 1}$ of certain degrees satisfying the Stasheff identities, cf.\ e.g.\ \eqref{ainfrel}. The tuple $(A, (m_n)_{n\geq 1})$ is then called an ${\rm A}_∞$-algebra.
A morphism of ${\rm A}_∞$-algebras from $(A',(m'_n)_{n\geq 1})$ to $(A,(m_n)_{n\geq 1})$ is a tuple of graded maps \mbox{$(f_n:A'^{\otimes n}→ A)_{n\geq 1}$} of certain degrees satisfying the identities \eqref{finfrel}. The first two of these are \begin{align*} \eqref{finfrel}[1]: && f_1 \circ m'_1 =\, & m_1 \circ f_1\\ \eqref{finfrel}[2]: && f_1\circ m'_2 - f_2\circ(m'_1\otimes 1 + 1\otimes m'_1) =\,& m_1\circ f_2 + m_2\circ (f_1\otimes f_1). \end{align*}
So a morphism $f=(f_n)_{n\geq 1}$ of ${\rm A}_∞$-algebras from $(A',(m'_n)_{n \geq 1})$ to $(A,(m_n)_{n \geq 1})$ contains a morphism of complexes $f_1:(A',m_1')→(A,m_1)$. We say that $f$ is a quasi-isomorphism of ${\rm A}_∞$-algebras if $f_1$ is a quasi-isomorphism. Furthermore, there is a concept of homotopy for ${\rm A}_∞$-morphisms, cf.\ e.g.\ \cite[3.7]{Ke01} and \cite[Définition 1.2.1.7]{Le03}.
\paragraph{History} The history of ${\rm A}_∞$-algebras is outlined in \cite{Ke01} and \cite{Ke01ad}.
As already mentioned, \name{Stasheff} introduced ${\rm A}_∞$-algebras in 1963.
If $R$ is a field, $\mathbb{F}:=R$, we have the following basic results on ${\rm A}_∞$-algebras, which are known since the early 1980s.
\begin{itemize} \item Each quasi-isomorphism of ${\rm A}_∞$-algebras is a homotopy equivalence, cf.\ \cite{Pr84}, \cite{Ka87}, … \item The minimality theorem: Each ${\rm A}_∞$-algebra $(A,(m_n)_{n\geq 1})$ is quasi-isomorphic to an ${\rm A}_∞$-algebra $(A',\{m'_n\}_{n \geq 1})$ with $m'_1=0$, cf.\ \cite{Ka82}, \cite{Ka80}, \cite{Pr84}, \cite{GuLaSt91}, \cite{JoLa01}, \cite{Me99}, … . The ${\rm A}_∞$-algebra $A'$ is then called a minimal model of $A$. \end{itemize}
Suppose given an $\mathbb{F}$-algebra $B$ and suppose given an $B$-module $M$ together with a projective resolution $\pres M$ of $M$. The homology of the dg-algebra $\Hom^*_B(\pres M,\pres M)$ is the Yoneda algebra $\Ext^*_B(M,M)$. By the minimality theorem, it is possible to construct an ${\rm A}_∞$-structure on $\Ext^*_B(M,M)$ such that $\Ext^*_B(M,M)$ becomes a minimal model of the dg-algebra $\Hom^*_B(\pres M,\pres M)$.
For the purpose of this introduction, we will call such an ${\rm A}_∞$-structure on $\Ext^*_B(M,M)$ the canonical ${\rm A}_∞$-structure on $\Ext^*_B(M,M)$, which is unique up to isomorphisms of ${\rm A}_∞$-algebras, cf.\ \cite[3.3]{Ke01}.
This structure has been calculated or partially calculated in several cases.
Let $p$ be a prime.
For an arbitrary field $\mathbb{F}$, \name{Madsen} computed the canonical ${\rm A}_∞$-structure on $\Ext^*_{\mathbb{F}[α]/(α^n)}(\mathbb{F},\mathbb{F})$, where $\mathbb{F}$ is the trivial $\mathbb{F}[α]/(α^n)$-module, cf.\ \cite[Appendix B.2]{Ma02}. This can be used to compute the canonical ${\rm A}_∞$-structure on the group cohomology $\Ext^*_{ {\mathbb{F}_{\!p}} {\rm C}_{m}}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$, where $m∈\mathbb{Z}_{\geq 1}$ and ${\rm C}_m$ is the cyclic group of order $m$, cf.\ \cite[Theorem 4.3.8]{Ve08}.
\name{Vejdemo-Johansson} developed algorithms for the computation of minimal models~\cite{Ve08}. He applied these algorithms to compute large enough parts of the canonical ${\rm A}_∞$-structures of the group cohomologies $\Ext^*_{ {\mathbb{F}_{\!2}} {\rm D}_8}( {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$ and $\Ext^*_{ {\mathbb{F}_{\!2}} {\rm D}_{16}}( {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$ to distinguish them, where ${\rm D}_8$ and ${\rm D}_{16}$ denote dihedral groups. He stated a conjecture on the complete ${\rm A}_∞$-structure on $\Ext^*_{ {\mathbb{F}_{\!2}} {\rm D}_8}( {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$. Furthermore, he computed parts of the canonical ${\rm A}_∞$-structure on $\Ext^*_{ {\mathbb{F}_{\!2}} {\rm Q}_8}( {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$ for the quaternion group ${\rm Q}_8$. He conjecturally stated the minimal complexity of such a structure. Based on this work, there are now built-in algorithms for the Magma computer algebra system. These are capable of computing partial ${\rm A}_∞$-structures on the group cohomology of $p$-groups.
In \cite{Ve082}, \name{Vejdemo-Johansson} examined the canonical ${\rm A}_∞$-structure $(m_n)_{n\geq 1}$ on the group cohomology $\Ext^*_{ {\mathbb{F}_{\!p}} ({\rm C}_k\times {\rm C}_l)}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ of the abelian group ${\rm C}_k\times {\rm C}_l$ for $k,l\geq 4$ such that $k,l$ are multiples of $p$. He showed that for infinitely many $n∈\mathbb{Z}_{\geq 1}$, the operation $m_n$ is non-zero.
In \cite{Kl10}, \name{Klamt} investigated canonical ${\rm A}_∞$-structures in the context of the representation theory of Lie-algebras. In particular, given certain direct sums $M$ of parabolic Verma modules, she examined the canonical ${\rm A}_∞$-structure $(m'_k)_{k\geq 1}$ on $\Ext^*_{\mathcal{O}^{\mathfrak{p}}}(M, M)$. She proved upper bounds for the maximal $k∈\mathbb{Z}_{\geq 1}$ such that $m'_k$ is non-vanishing and computed the complete ${\rm A}_∞$-structure in certain cases.
\paragraph{The result}
For $n∈\mathbb{Z}_{\geq 1}$, we denote by $\Sy_n$ the symmetric group on $n$ elements.
The group cohomology $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ is well-known. For example, in \cite[p.~74]{Be98}, it is calculated using group cohomological methods.
Here, we will construct the canonical ${\rm A}_∞$-structure on $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$.
We obtain homogeneous elements $ι,χ∈\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}^*(\pres {\mathbb{F}_{\!p}} , \pres {\mathbb{F}_{\!p}} )=:A$ of degree $|ι|=2(p-1)=: l$ and $|χ|=l-1$ such that $ι^j,χ\circ ι^j=:χι^j$ are cycles for all $j∈\mathbb{Z}_{\geq 0}$ and such that their set of homology classes $\{\overline{ι^j} \mid j∈\mathbb{Z}_{\geq 0}\}\sqcup \{\overline{χι^j} \mid j∈\mathbb{Z}_{\geq 0}\}$ is an $ {\mathbb{F}_{\!p}} $-basis of $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )={\operatorname{H}}^*A$, cf.\ \cref{pp:iota}.
For all primes $p$, the canonical ${\rm A}_∞$-structure $(m'_n:({\operatorname{H}}^*A)^{\otimes n}→{\operatorname{H}}^*A)_{n\geq 1}$ on ${\operatorname{H}}^*A$ is given as follows.
On the elements $\overline{χ^{a_1}ι^{j_1}}\otimes \cdots \otimes \overline{χ^{a_n}ι^{j_n}}$, $n∈\mathbb{Z}_{\geq 1}$, $a_i∈\{0,1\}$ and $j_i∈\mathbb{Z}_{\geq 0}$ for $i∈\{1,…,n\}$, the maps $m'_n$ are given as follows, cf.\ \cref{defall,bem:comp}.
If there is an $i∈\{1,…,n\}$ such that $a_i=0$, then
\begin{align*} m'_n(\overline{χ^{a_1}ι^{j_1}}\otimes \cdots \otimes \overline{χ^{a_n}ι^{j_n}}) =\,& 0 \hphantom{\overline{χ^{a_1 + a_2}ι^{j_1 + j_2}}} \text{for $n\neq 2$ and}\\ m'_2(\overline{χ^{a_1}ι^{j_1}}\otimes \overline{χ^{a_2}ι^{j_2}}) =\, & \overline{χ^{a_1 + a_2}ι^{j_1 + j_2}}. \end{align*} If all $a_i$ equal $1$, then \begin{align*} m'_n(\overline{χι^{j_1}}\otimes \cdots \otimes \overline{χι^{j_n}}) =\,&0 \hphantom{(-1)^p\overline{ι^{p-1+j_1+… + j_p}}}\text{for $n\neq p$ and }\\ m'_p(\overline{χι^{j_1}}\otimes \cdots \otimes \overline{χι^{j_p}}) =\, & (-1)^p\overline{ι^{p-1+j_1+… + j_p}}.
\end{align*} In particular, we have $m'_n=0$ for all $n∈ \mathbb{Z}_{\geq 1}\setminus\{2,p\}$.
\subsection{Outline} \paragraph{Section 1}
The goal of \cref{secpres} is to obtain a projective resolution of
the trivial $ {\mathbb{F}_{\!p}} \!\Sy_p$-Specht module $ {\mathbb{F}_{\!p}} $. A well-known method for that is "Walking around the Brauer tree", cf.\ \cite{Gr74}.
Instead, we use locally integral methods to obtain a projective resolution in an explicit and straightforward manner.
Over $ℚ$, the Specht modules are absolutely simple. Therefore we have a morphism of $\mathbb{Z}_{(p)}$-algebras
$r:\mathbb{Z}_{(p)}\!\Sy_p→\prod_{λ\dashv p} \End_{\mathbb{Z}_{(p)}} S^λ_{\mathbb{Z}_{(p)}}=:Γ$ induced by the operation of the elements of $\mathbb{Z}_{(p)}\!\Sy_p$ on the Specht modules $S^λ$ for partitions $λ$ of $p$, which becomes a Wedderburn isomorphism when tensoring with $ℚ$. So $Γ$ is a product of matrix rings over $\mathbb{Z}_{(p)}$.
There is a well-known description of $\im r=:Λ$, which we use for $p\geq 3$ to obtain projective $Λ$-modules $\tilde P_k\subseteq Λ$, $k ∈[1,p-1]$, and to construct the indecomposable projective resolution $\pres\mathbb{Z}_{(p)}$ of the trivial $\mathbb{Z}_{(p)}\!\Sy_p$-Specht module $\mathbb{Z}_{(p)}$. The non-zero parts of $\pres\mathbb{Z}_{(p)}$ are periodic with period length $l=2(p-1)$. In \cref{sec:prfp}, we reduce $\pres\mathbb{Z}_{(p)}$ modulo $p$ to obtain a projective resolution $\pres {\mathbb{F}_{\!p}} $ of the trivial $ {\mathbb{F}_{\!p}} \!\Sy_p$-Specht module $ {\mathbb{F}_{\!p}} $.
\paragraph{Section 2}
The goal of \cref{secainf} is to compute a minimal model of the dg-algebra $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}^*(\pres {\mathbb{F}_{\!p}} , \pres {\mathbb{F}_{\!p}} )=: A$ by equipping its homology $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} ) = {\operatorname{H}}^*A$ with a suitable ${\rm A}_∞$-structure and finding a quasi-isomorphism of ${\rm A}_∞$-algebras from ${\operatorname{H}}^* A$ to $A$.
Towards that end, we recall the basic definitions concerning ${\rm A}_∞$-algebras and some general results in \cref{generaltheory}.
While there does not seem to be a substantial difference between the cases $p=2$ and $p\geq 3$, we separate them to simplify notation and argumentation. Consider the case $p\geq 3$. In \cref{subsec:homology}, we obtain a set of cycles $\{ι^j \mid j∈\mathbb{Z}_{\geq 0}\}\cup\{χι^j \mid j∈\mathbb{Z}_{\geq 0}\}$ in $A$ such that their homology classes are a graded basis of ${\operatorname{H}}^*A$.
In \cref{subsec:minmod}, we obtain a suitable ${\rm A}_∞$-structure on ${\operatorname{H}}^*A$ and a quasi-isomorphism of ${\rm A}_∞$-algebras from ${\operatorname{H}}^*A$ to $A$. For the prime $2$, both steps are combined in the short \cref{prime2}.
\subsection{Notations and conventions} \label{sec:not} \paragraph{Stipulations} \begin{itemize} \item \textbf{For the remainder of this document, $p$ will be a prime with $p\geq 3$.} \item \textbf{Write $l:=2(p-1)$.} This will give the period length of the constructed projective resolution of $ {\mathbb{F}_{\!p}} $ over $ {\mathbb{F}_{\!p}} \!\Sy_p$, cf.\ e.g.\ \eqref{omega}, \cref{lem:prfp}. \end{itemize}
\paragraph{Miscellaneous} \begin{itemize} \item Concerning "$∞$", we assume the set $\mathbb{Z}\cup\{∞\}$ to be ordered in such a way that $∞$ is greater than any integer, i.e.\ $∞> z$ for all $z∈\mathbb{Z}$, and that the integers are ordered as usual. \item For $a∈\mathbb{Z}$, $b∈\mathbb{Z}\cup \{∞\}$, we denote by $[a,b] := \{z∈\mathbb{Z} \mid a\leq z\leq b\} \subseteq \mathbb{Z}$ the integral interval. In particular, we have $[a,∞] = \{z∈\mathbb{Z} \mid z\geq a\}\subseteq \mathbb{Z}$ for $a∈\mathbb{Z}$.
\item For $n∈\mathbb{Z}_{\geq 0}$, $k∈\mathbb{Z}$, let the binomial coefficient $\binom{n}{k}$ be defined by the number of subsets of the set $\{1,…,n\}$ that have cardinality $k$. In particular, if $k<0$ or $k>n$, we have $\binom{n}{k}= 0$. Then the formula $\binom{n}{k-1} + \binom{n}{k} = \binom{n+1}{k}$ holds for all $k∈\mathbb{Z}$.
\item For a commutative ring $R$, an $R$-module $M$ and $a,b∈M$, $c∈R$, we write \begin{align*} b&\equiv_c a &:\Longleftrightarrow & & a-b ∈ cM.\end{align*} Often we have $M=R$ as module over itself.
\item Modules are right-modules unless otherwise specified.
\item For sets, we denote by $\sqcup$ the disjoint union of sets.
\item $|\cdot|$: For a homogeneous element $x$ of a graded module or a graded map $g$ between graded modules, we denote by $|x|$ resp.\ $|g|$ their degrees (This is not unique for $x=0$ resp.\ $g=0$). For $y$ a real number, $|y|$ denotes its absolute value. \end{itemize} \paragraph{Symmetric Groups} Let $n∈\mathbb{Z}_{\geq 1}$. We denote the symmetric group von $n$ elements by $\Sy_n$.
For a partition $λ\dashv n$, we denote the corresponding Specht module by $S^λ$.
\paragraph{Complexes} Let $R$ be a commutative ring and $B$ an $R$-algebra. \begin{itemize} \item For a complex of $B$-modules \[\cdots \rightarrow C_{k+1}\xrightarrow{d_{k+1}}C_k\xrightarrow{d_k} C_{k-1}\rightarrow\cdots\hphantom{,},\] its $k$-th boundaries, cycles and homology groups are defined by $\text{B}^k:=\im d_{k+1}$, $\text{Z}^k:=\ker d_{k}$ and ${\operatorname{H}}^k:= \text{Z}^k/\text{B}^k$.
For a cycle $x∈\text{Z}^k$, we denote by $\overline{x}:=x+\text{B}^k∈{\operatorname{H}}^k$ its equivalence class in homology.
\item Let \begin{align*} C=\,&(\cdots \rightarrow C_{k+1}\xrightarrow{d_{k+1}}C_k\xrightarrow{d_k} C_{k-1}\rightarrow\cdots)\\ C'=\,&(\cdots \rightarrow C'_{k+1}\xrightarrow{d'_{k+1}}C'_k\xrightarrow{d'_k} C'_{k-1}\rightarrow\cdots) \end{align*} be two complexes of $B$-modules.
Given $z∈\mathbb{Z}$, let \begin{align*} \Hom_B^z(C,C') := \prod_{i∈\mathbb{Z}} \Hom_B(C_{i+z},C'_i). \end{align*} For an additional complex $C''=(\cdots \rightarrow C''_{k+1}\xrightarrow{d''_{k+1}}C''_k\xrightarrow{d''_k} C''_{k-1}\rightarrow\cdots)$ and maps $h=(h_i)_{i∈\mathbb{Z}}∈\Hom_B^m(C,C')$, $h'=(h'_i)_{i∈\mathbb{Z}}∈\Hom_B^n(C',C'')$, $m,n∈\mathbb{Z}$, we define the composition by component-wise composition as \begin{align*} h'\circ h := (h'_{i}\circ h_{i+n})_{i∈\mathbb{Z}} ∈ \Hom_B^{m+n}(C,C''). \end{align*}
We will assemble elements of $\Hom_B^z(C,C')$ as sums of their non-zero components, which motivates the following notations regarding "extensions by zero" and sums.
For a map $g:C_x→C'_y$\,, we define $\lfloor g\rfloor_x^y∈\Hom_B^{x-y}(C,C')$ by \begin{align*} (\lfloor g\rfloor_x^y)_{i} := \begin{cases} g & \text{ for }i=y\\ 0 & \text{ for } i∈\mathbb{Z}\setminus \{y\} \end{cases}\, . \end{align*}
Let $k∈\mathbb{Z}$. Let $I$ be a (possibly infinite) set. Let $g_i = (g_{i,j})_j∈\Hom_B^k(C,C')$ for $i∈I$ such that $\{i∈I \mid g_{i,j}\neq 0\}$ is finite for all $j∈\mathbb{Z}$.\\* We define the sum $\sum_{i∈I} g_i∈\Hom_B^k(C,C')$ by \begin{align*} \left(\sum\nolimits_{i∈I} g_i\right)_j := \sum_{i∈I,g_{i,j}\neq 0} g_{i,j}\,. \end{align*}
The graded $R$-module $\Hom_B^*(C,C') := \bigoplus_{k∈\mathbb{Z}} \Hom_B^k(C,C')$ becomes a complex via the differential $d_{\Hom_B^*(C,C')}$, which is defined on elements $g∈\Hom_B^k(C,C')$, $k∈\mathbb{Z}$ by \begin{align*}
d_{\Hom_B^*(C,C')}(g) := d' \circ g -(-1)^k g\circ d ∈\Hom_B^{k+1}(C,C'), \end{align*} where $d := (d_{i+1})_{i∈\mathbb{Z}}= \sum_{i∈\mathbb{Z}}\lfloor d_{i+1}\rfloor^{i}_{i+1}∈\Hom_B^1(C,C)$ and analogously $d' := (d'_{i+1})_{i∈\mathbb{Z}}= \sum_{i∈\mathbb{Z}}\lfloor d'_{i+1}\rfloor^{i}_{i+1}∈\Hom_B^1(C',C')$.
An element $h∈\Hom_B^0(C,C')$ is called a complex morphism if it satisfies $d_{\Hom_B^*(C,C')}(h) = 0$, i.e.\ $d'\circ h = h\circ d$.
\end{itemize}
\section{The projective resolution of \texorpdfstring{$ {\mathbb{F}_{\!p}} $}{Fp} over \texorpdfstring{$ {\mathbb{F}_{\!p}} \!\Sy_p$}{FpSp}} \label{secpres} \subsection{A description of \texorpdfstring{$\mathbb{Z}_{(p)}\!\Sy_p$}{ZpSp}} \kommentar{ \label{desczpsp} In this paragraph, we review results found e.g.\ in \cite[Chapter 4.2]{Ku99}. We use the notation of \cite{Ja78}.
Let $n∈\mathbb{Z}_{\geq 1}$.
A partition of the form $λ^k := (n-k+1,1^{k-1})$, $k∈[1,n]$ is called a \textit{hook partition} of $n$.
Suppose $λ\dashv n$, i.e.\ $λ$ is a partition of $n$.
Let $S^λ$ be the corresponding integral Specht module, which is a right $\mathbb{Z}\!\Sy_n$-module, cf. \cite[4.3]{Ja78}. Then $S^{λ}$ is finitely generated free over $\mathbb{Z}$, cf. \cite[8.1, proof of 8.4]{Ja78}, having a standard $\mathbb{Z}$-basis consisting of the standard $λ$-polytabloids. We write $n_λ$ for the rank of $S^λ$.
For a tuple $b=(b_2,b_3,…,b_k)$, $k∈[1,n]$, of pairwise distinct elements of $[1,n]$, let $\langle\hspace{-3pt}\langle b\rangle\hspace{-3pt}\rangle$ be the $λ^k$-polytabloid generated by the $λ^k$-tabloid } \tikzset{rightrule/.style={
execute at end cell={
\draw [line cap=rect,#1] (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.north east) -- (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.south east);
}
},
bottomrule/.style={
execute at end cell={
\draw [line cap=rect,#1] (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.south west) -- (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.south east);
}
},
toprule/.style={
execute at end cell={
\draw [line cap=rect,#1] (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.north west) -- (\tikzmatrixname-\the\pgfmatrixcurrentrow-\the\pgfmatrixcurrentcolumn.north east);
}
} } \kommentar{ \begin{center} \begin{tikzpicture}[ inner xsep=-0.2mm, inner ysep=0.4mm, cell/.style={rectangle,draw=black}, space/.style={minimum height=1.0em,matrix of nodes,row sep=0.2mm,column sep=1mm}]
\matrix (m) [matrix of nodes,
nodes={ outer sep=0pt},
row 2/.style={bottomrule}, row 3/.style={bottomrule}, row 4/.style={bottomrule}, row 5/.style={bottomrule}
] {
\nasm{\vphantom{b_2}\,*\,} & \nasm{\,\cdots\,\vphantom{b_2}} & \nasm{\,*\vphantom{b_2}} \\
\nasm{b_2} \\
\nasm{b_3} \\
\nasm{\hphantom{\vdots}\vdots\hphantom{\vdots}} \\
\nasm{b_n} \\
};
\draw[decorate,thick] (m-1-1.north west) -- (m-1-3.north east);
\draw[decorate,thick] (m-1-1.south west) -- (m-1-3.south east); \end{tikzpicture}\, , \end{center}
where $*\cdots *$ are the elements of $[1,n]\setminus b$. Any polytabloid of $S^{λ^k}$ can be expressed this way.
For such a tuple $b$ and distinct elements $y_1,…,y_s∈[1,n]\setminus b$, we denote by $(b,y_1,…,y_s)$ the tuple $(b_2,b_3,…,b_k,y_1,…,y_s)$. Recall the notations for manipulation of tuples from \cref{sec:not}.\\*
The $λ^k$-polytabloid $\langle\hspace{-3pt}\langle b\rangle\hspace{-3pt}\rangle$ is standard iff $2\leq b_2 < b_3 < \cdots < b_k \leq n$, cf. \cite[8.1]{Ja78}. This entails the following lemma. \begin{lemma} For $k∈[1,n]$, the rank of $S^{λ^k}$ is given by $n_{λ^k}=\binom{n-1}{k-1}$. \end{lemma} \begin{lemma}[{cf. e.g.\ \cite[Proposition 4.2.3]{Ku99}}] \label[lemma]{lem:boxmorph} Let $k∈[1,n-1]$. We have the $\mathbb{Z}$-linear box shift morphisms for hooks \[\begin{array}{rcl} S^{λ^k} & \overset{f_k}{\longrightarrow} & S^{λ^{k+1}}\\ \langle\hspace{-3pt}\langle b \rangle\hspace{-3pt}\rangle & \longmapsto & \sum_{s∈[2,n]\setminus b} \langle\hspace{-3pt}\langle (b,s)\rangle\hspace{-3pt}\rangle. \end{array}\] For $x∈S^{λ^k}$ and $ρ∈\Sy_n$, we have \begin{align} \label{modn} f_k(x\cdot ρ) \equiv_n f_k(x)\cdot ρ. \end{align} I.e.\ the composite $(S^{λ^k} \xrightarrow{f_k}S^{λ^{k+1}} \xrightarrow{π} S^{λ^{k+1}}/nS^{λ^{k+1}})$, where $π$ is residue class map, is $\mathbb{Z}\!\Sy_n$-linear. \end{lemma}
\begin{lemma}[cf. {\cite[Lemma 2]{Pe71}}, {\cite[Proposition 4.2.4]{Ku99}}] The following sequence of $\mathbb{Z}$-linear maps is exact. \begin{align*} 0 \rightarrow S^{λ^1} \xrightarrow{f_1} S^{λ^2} \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} S^{λ^n} \rightarrow 0 \end{align*} \end{lemma} \begin{proof} We show that $\im f_k\subseteq \ker f_{k+1}$ for $k∈[1,n-2]$, i.e.\ that $f_{k+1}\circ f_k = 0$. Let $\langle\hspace{-3pt}\langle b\rangle\hspace{-3pt}\rangle ∈ S^{λ^k}$ be a polytabloid. We obtain \begin{align*} f_{k+1}f_k(\langle\hspace{-3pt}\langle b\rangle\hspace{-3pt}\rangle) =\,& f_{k+1}\left( \sum_{s∈[2,n]\setminus b} \langle\hspace{-3pt}\langle (b,s)\rangle\hspace{-3pt}\rangle\right)
= \sum_{\substack{s,t∈ [2,n]\setminus b,\\s\neq t}} \langle\hspace{-3pt}\langle (b,s,t)\rangle\hspace{-3pt}\rangle\\ =\,& \sum_{\substack{s,t ∈[2,n]\setminus b,\\s<t}} \big(\langle\hspace{-3pt}\langle (b,s,t)\rangle\hspace{-3pt}\rangle + \langle\hspace{-3pt}\langle(b,t,s)\rangle\hspace{-3pt}\rangle\big) \overset{\text{cf. \cite[4.3]{Ja78}}}{=} 0. \end{align*} Now we show the exactness of the sequence. For convenience, we set \mbox{$f_0\colon0→S^{λ^1}$} and $f_n\colon S^{λ^n}→0$. We define $T^k$ for $k∈[1,n]$ to be the tuple of all tuples $b=(b_2,…,b_k)$ such that $2\leq b_2<b_3 < … < b_k\leq n-1$, where $T^k$ is ordered, say, lexicographically.
Then we set $B^k_{\rm b} := ( \langle\hspace{-3pt}\langle b \rangle\hspace{-3pt}\rangle \colon b∈T^k)$, which consists of standard $λ^k$-polytabloids. We set $B^1_{\rm c} := ()$, which is the empty tuple, and for $k∈[2,n]$, \begin{align*} B^k_{\rm c} :=\,& (f_{k-1}(x) \colon x∈B^{k-1}_{\rm b})\\
=\,& \left(\sum_{s∈[2,n]\setminus b} \langle\hspace{-3pt}\langle (b,s)\rangle\hspace{-3pt}\rangle \colon b∈T^{k-1}\right)
= \left(\langle\hspace{-3pt}\langle (b,n)\rangle\hspace{-3pt}\rangle + \sum_{\mathclap{s∈[2,n-1]\setminus b}} \langle\hspace{-3pt}\langle (b,s)\rangle\hspace{-3pt}\rangle \colon b∈T^{k-1}\right). \end{align*} So $B^k_{\rm c} \subseteq \im f_{k-1}$ and thus $f_k(B^k_{\rm c})\subseteq \{0\}$ for $k∈[1,n]$.
By comparing $B^k_{\rm c} \sqcup B^k_{\rm b}$ with the standard basis, we observe that $B^k_{\rm c}\sqcup B^k_{\rm b}$ is a $\mathbb{Z}$-basis of $S^{λ^k}$ for $k∈[1,n]$.
For $k∈[1,n]$, we have \begin{align*}
n_{\rm b}^k :=\,& |B^k_{\rm b}| = \binom{n-2}{k-1}\\
n_{\rm c}^k :=\,& |B^k_{\rm c}| =\left\{\begin{array}{ll} |B^{k-1}_{\rm b}| = \binom{n-2}{k-2} & \text{for }k∈[2,n] \\ 0 = \binom{n-2}{1-2} & \text{for }k=1 \end{array}\right\} = \binom{n-2}{k-2}. \end{align*} For $k∈[1,n-1]$, the morphism $f_k$ maps $\langle B^k_{\rm b}\rangle_{\mathbb{Z}}$ bijectively to $\langle B^{k+1}_{\rm c}\rangle_{\mathbb{Z}}$ and $\langle B^{k}_{\rm c}\rangle_{\mathbb{Z}}$ to zero. So $\ker f_k = \langle B^{k}_{\rm c}\rangle_{\mathbb{Z}}$ and $\im f_k = \langle B^{k+1}_{\rm c}\rangle_{\mathbb{Z}}$. As $B^1_{\rm c} = () = B^n_{\rm b}$, we have also $\im f_0 = \langle B^1_{\rm c}\rangle_{\mathbb{Z}}$ and $\ker f_n = \langle B^n_{\rm c}\rangle_{\mathbb{Z}}$. So the sequence in question is exact. \end{proof}
We equip the Specht modules $S^{λ^k}$ of hook type with the ordered $\mathbb{Z}$-basis $B^k_{\rm c}\sqcup B^k_{\rm b}$. We equip all other Specht modules with the standard $\mathbb{Z}$-basis with an arbitrarily chosen total order. From now on each of these bases will be referred to as \textit{the} basis of the respective Specht module. We define the $\mathbb{Z}$-algebra \begin{align*} Γ^{\mathbb{Z}} := \prod_{λ\dashv n} \mathbb{Z}^{n_λ\times n_λ}. \end{align*} Let $λ\dashv n$ and let $B = (b_1,…,b_{n_λ})$ be the basis of $S^{λ}$. For the multiplication with matrices, we identify $S^λ$ with $\mathbb{Z}^{1\times n_λ}$ via $B$.
Then $S^λ$ becomes a right $Γ^{\mathbb{Z}}$-module via $x\cdot ρ := x\cdot ρ^λ$ for $x∈S^λ$ and $ρ∈Γ^{\mathbb{Z}}$, where $ρ^λ$ is the $λ$-th component of $ρ$.
I.e.\ $ρ∈Γ^{\mathbb{Z}}$ operates by multiplication with the matrix $ρ^λ$ on the right with respect to the basis $B$.
Similarly, $\bigoplus_{λ\dashv n} S^λ$ becomes a right $Γ^{\mathbb{Z}}$-module. Each $\mathbb{Z}$-endomorphism of $\bigoplus_{λ\dashv n} S^λ$ that maps each summand $S^λ$ into itself is represented by the operation of a unique element of $Γ^{\mathbb{Z}}$. As the operation of $\mathbb{Z}\!\Sy_n$ defines such endomorphisms (cf. \cite[Corollary 8.7]{Ja78}), we obtain a $\mathbb{Z}$-algebra morphism $r^{\mathbb{Z}}:\mathbb{Z}\!\Sy_n → Γ^{\mathbb{Z}}$ such that $y\cdot r^{\mathbb{Z}}(x) = y\cdot x$ for all $λ\dashv n$, $y∈S^λ$, $x∈\mathbb{Z}\Sy_n$.
As the Specht modules give all irreducible ordinary representations of $\Sy_n$, the map $r^{\mathbb{Z}}$ is injective. Because of \eqref{modn}, the image of $r^{\mathbb{Z}}$ is contained in \begin{align*} Λ^{\mathbb{Z}} := \{ρ∈Γ^{\mathbb{Z}} \mid f_k(xρ) \equiv_n f_k(x)ρ\, ∀_{k∈[1,n-1]}\, ∀_{x∈S^{λ^k}}\} \subseteq Γ^{\mathbb{Z}}. \end{align*} As the basis $B_{\rm c}^k\sqcup B_{\rm b}^k$ of $S^{λ^k}$, $k∈[1,n]$, consists of two parts, we may split each $ρ^{λ^k}$ for a $ρ∈Γ^{\mathbb{Z}}$ into four blocks corresponding to the parts $B^k_{\rm c}$ and $B^k_{\rm b}$:
\begin{equation} \label{eq:blockmotivation} ρ^{λ^k} = \xyprot{
\begin{tikzpicture}[decoration=brace,baseline]
\matrix (m) [matrix of math nodes,left delimiter=(,right delimiter=),
nodes={ outer sep=0pt},
row 1/.style={bottomrule},column 1/.style=rightrule] {
ρ^{λ^k}_{\rm cc}\vphantom{ρ^{λ^k}_{\rm bc}} & ρ^{λ^k}_{\rm bc} \\
ρ^{λ^k}_{\rm cb} & ρ^{λ^k}_{\rm bb} \\
};
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-1-2.north east) -- node[right=2pt] {$n^k_{\rm c}$} (m-1-2.south east);
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-2-2.north east) -- node[right=2pt] {$n^k_{\rm b}$} (m-2-2.south east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-1.north west) -- node[above=2pt] {$n^k_{\rm c}$} (m-1-1.north east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-2.north west) -- node[above=2pt] {$n^k_{\rm b}$} (m-1-2.north east); \end{tikzpicture} } \end{equation} Suppose given $k∈[1,n-1]$. We represent $f_k$ by a matrix $M_{f_k}$ with respect to the bases of $S^{λ^k}$ and $S^{λ^{k+1}}$, i.e.\ $f_k(x) = x\cdot M_{f_k}$ for $x∈S^{λ^k}$. As $f_k(B_{\rm b}^k) = B_{\rm c}^{k+1}$ and $f_k(B_{\rm c}^k)\subseteq \{0\}$, the matrix $M_{f_k}$ has the following block form: \newcommand{\hhlline}[3]{\draw (#1-#2-1.south west) -- (#1-#2-#3.south east);} \begin{equation*} M_{f_k} = \xyprot{
\begin{tikzpicture}[decoration=brace,baseline]
\matrix (m) [matrix of math nodes,left delimiter=(,right delimiter=),
nodes={ outer sep=0pt},
row 1/.style={bottomrule},column 1/.style=rightrule] {
{\hphantom{E_{n^k_{\rm b}}}\llap{0}} & 0 \\
E_{n^k_{\rm b}} & 0\vphantom{E_{n^k_{\rm b}}} \\
};
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-1-2.north east) -- node[right=2pt] {$n^k_{\rm c}$} (m-1-2.south east);
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-2-2.north east) -- node[right=2pt] {$n^k_{\rm b}$} (m-2-2.south east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-1.north west) -- node[above=2pt] {$n^{k+1}_{\rm c}$} (m-1-1.north east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-2.north west) -- node[above=2pt] {\rlap{$n^{k+1}_{\rm b}$}\hphantom{$n$}} (m-1-2.north east);
\end{tikzpicture} } \end{equation*} Here $E_i$ is the $i\times i$-identity matrix for $i∈\mathbb{Z}_{\geq 1}$.
So for $x∈S^{λ^k}$, $ρ∈Γ^{\mathbb{Z}}$ we have \begin{align*} f_k(x)\cdot ρ =\, & x\cdot M_{f_k} \cdot ρ^{λ^{k+1}}
=
x\cdot \left(\begin{array}{c|c} 0 & 0 \\ \hline E_{n^k_{\rm b}}& 0\end{array}\right)
\cdot \left(\begin{array}{c|c} ρ^{λ^{k+1}}_{\rm cc} & ρ^{λ^{k+1}}_{\rm bc} \\ \hline ρ^{λ^{k+1}}_{\rm cb} \ru{5} & ρ^{λ^{k+1}}_{\rm bb} \end{array}\right) =
x\cdot \left(\begin{array}{c|c} 0&0 \\ \hline \ru{5} ρ^{λ^{k+1}}_{\rm cc} & ρ^{λ^{k+1}}_{\rm bc} \end{array}\right)\\
f_k(x\cdot ρ) =\, & x \cdot ρ^{λ^{k}} \cdot M_{f_k} =
x\cdot \left(\begin{array}{c|c} ρ^{λ^k}_{\rm cc} & ρ^{λ^k}_{\rm bc} \\ \hline ρ^{λ^k}_{\rm cb} \ru{5} & ρ^{λ^k}_{\rm bb} \end{array}\right)
\cdot \left(\begin{array}{c|c} 0 & 0 \\ \hline E_{n^k_{\rm b}}& 0\end{array}\right)
= x\cdot \left(\begin{array}{c|c} ρ^{λ^k}_{\rm bc} & 0 \\ \hline \ru{5} ρ^{λ^k}_{\rm bb} & 0 \end{array}\right). \end{align*} This way we have $f_k(x\cdot ρ)\equiv_n f_k(x)\cdot ρ$ for all $x∈S^{λ^k}$ if and only if \mbox{$ρ_{\rm bb}^{λ^k} \equiv_n ρ_{\rm cc}^{λ^{k+1}}$}, $ρ_{\rm bc}^{λ^k} \equiv_n 0$ and $ρ_{\rm bc}^{λ^{k+1}} \equiv_n 0$. So \begin{align} \label{lambdazex} Λ^{\mathbb{Z}} = \{ρ∈Γ^{\mathbb{Z}} \mid (ρ^{λ^k}_{\rm bb} \equiv_n ρ^{λ^{k+1}}_{\rm cc}\text{ for } k∈[1,n-1])\text{ and } (ρ^{λ^k}_{\rm bc} \equiv_n 0\text{ for } k∈[1,n])\}. \end{align} We have (cf. e.g.\ \cite[Corollary 4.2.6]{Ku99})
\[|Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}|=n^{\frac{1}{2}\sum_{k∈[1,n]} \binom{n-1}{k-1}^2},\] which is proven by counting the congruences in \eqref{lambdazex}: \begin{align*}
|Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}| =\,& n^{\sum_{k=1}^{n-1} (n_{\rm b}^k)^2 + \sum_{k=1}^{n} n_{\rm b}^k\cdot n_{\rm c}^k}\\ \ovs{n_{\rm b}^n=0}& n^{\sum_{k∈[1,n]} ((n_{\rm b}^k)^2 + n_{\rm b}^k\cdot n_{\rm c}^k)} = n^{\sum_{k∈[1,n]} n_{\rm b}^k(n_{\rm b}^k + n_{\rm c}^k)} \\ \sum_{k∈[1,n]} n_{\rm b}^k(n_{\rm c}^k + n_{\rm b}^k)=\,& \sum_{k∈[1,n]} \tbinom{n-2}{k-1}\left(\tbinom{n-2}{k-2}+\tbinom{n-2}{k-1}\right)\\ =\,& \frac{1}{2}\sum_{k∈[1,n]} \left(\tbinom{n-2}{k-1}\tbinom{n-2}{k-2}+\tbinom{n-2}{k-1}^2\right)
+ \frac{1}{2}\sum_{k∈[1,n]} \left(\tbinom{n-2}{k-1}\tbinom{n-2}{k-2}+\tbinom{n-2}{k-2}^2\right)\\ =\,& \frac{1}{2}\sum_{k∈[1,n]} \tbinom{n-2}{k-1}\left(\tbinom{n-2}{k-2}+\tbinom{n-2}{k-1}\right) +\tbinom{n-2}{k-2}\left(\tbinom{n-2}{k-1}+\tbinom{n-2}{k-2}\right)\\ =\,&\frac{1}{2}\sum_{k∈[1,n]} \tbinom{n-2}{k-1}\tbinom{n-1}{k-1} +\tbinom{n-2}{k-2}\tbinom{n-1}{k-1} =\frac{1}{2}\sum_{k∈[1,n]} \tbinom{n-1}{k-1}^2\\ \end{align*}
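For instance, for $n=5$ this yields $|Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}| = 5^{\frac{1}{2}(1+16+36+16+1)} = 5^{35}$.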
Recall that $p\geq 3$ is a prime. Let $n=p$. We have the commutative diagram of $\mathbb{Z}$-modules \begin{align} \begin{aligned} \label{diagrz} \xyprot{
\xymatrix@!C{ \mathbb{Z} \Sy_p\ar@{^{(}->}[d]^{r^{\mathbb{Z}}}\ar@{^{(}->}[r]^{ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}} & Γ^{\mathbb{Z}}\ar@{->>}[r] \ar[d]^{\rotatebox{90}{=}} & Γ^{\mathbb{Z}}/(ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}(\mathbb{Z} \Sy_p)) \ar@{->>}[d]^{s^{\mathbb{Z}}} \\ Λ^{\mathbb{Z}} \ar@{^{(}->}[r]^{ι^{\mathbb{Z}}} & Γ^{\mathbb{Z}} \ar@{->>}[r] & Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}} } } \end{aligned} \end{align} The map $ι^{\mathbb{Z}}$ is the inclusion of $Λ^{\mathbb{Z}}$ in $Γ^{\mathbb{Z}}$. The maps from $Γ^{\mathbb{Z}}$ to $Γ^{\mathbb{Z}}/(ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}(\mathbb{Z} \Sy_p))$ and to $Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}$ are the residue class maps. As $r^{\mathbb{Z}}(\mathbb{Z} \Sy_p)\subseteq Λ^{\mathbb{Z}}$, we have an unique surjective map $s^{\mathbb{Z}}:Γ^{\mathbb{Z}}/(ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}(\mathbb{Z} \Sy_p))→ Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}$ such that the right rectangle is commutative. By construction, the rows of the diagram are short exact sequences. Note that the morphisms of the left rectangle are in fact $\mathbb{Z}$-algebra morphisms.
We will need the following result on the localization of rings.
\begin{lemma}[cf.\ {\cite[chap.\ II Localisation, §2, $\text{n}^{\text{o}}$ 3, Théorème 1]{Bo61}}] \label[lemma]{boexact} Let $A$ be a commutative ring. Let $P\subseteq A$ be a prime ideal of $A$. Let $A_P$ be the localization of $A$ at $P$. Then $A_P$ is a flat $A$-module, that is, the functor $-\underset{A}{\otimes} \leftidx{_A}{(A_P)}{_{A_P}}$ from the category of $A$-modules to the category of $A_P$-modules is exact. \end{lemma}
We denote by $\mathbb{Z}_{(p)}$ the localization of $\mathbb{Z}$ at the prime ideal $(p):=p\mathbb{Z}$. We apply the functor $-\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ to obtain a commutative diagram \eqref{diagrz} of the following form: \begin{align} \label{diagr} \begin{aligned} \xyprot{
\xymatrix@!C{ \mathbb{Z}_{(p)}\!\Sy_p\ar@{^{(}->}[d]^{r}\ar@{^{(}->}[r]^{ι \circ r} & Γ\ar@{->>}[r] \ar[d]^{\rotatebox{90}{=}} & Γ/(ι\circ r(\mathbb{Z}_{(p)}\!\Sy_p)) \ar@{->>}[d]^{s} \\ Λ \ar@{^{(}->}[r]^{ι} & Γ \ar@{->>}[r] & Γ/Λ } } \end{aligned} \end{align} By \cref{boexact}, the functor $-\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ is exact, so the short exact sequences are mapped to short exact sequences, monomorphisms to monomorphisms and epimorphisms to epimorphisms. So the rows of diagram \eqref{diagr} are exact and we have mono-/epimorphism as indicated by the arrows. We identify $\mathbb{Z} \Sy_p \underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ with $\mathbb{Z}_{(p)}\!\Sy_p$. We identify $Γ^{\mathbb{Z}}\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ with \begin{align*} Γ := \prod_{λ\dashv n} \mathbb{Z}_{(p)}^{n_λ\times n_λ}. \end{align*} The map $ι$ realizes $Λ:=Λ^{\mathbb{Z}}\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ as the following subset of $Γ$, for which we will use notation analogous to \eqref{eq:blockmotivation}: \begin{align*} Λ = \{ρ∈Γ \mid (ρ^{λ^k}_{\rm bb} \equiv_p ρ^{λ^{k+1}}_{\rm cc}\text{ for } k∈[1,p-1])\text{ and } (ρ^{λ^k}_{\rm bc} \equiv_p 0 \text{ for } k∈[1,p])\} \end{align*} As the rows are exact, we identify $(Γ^{\mathbb{Z}}/(ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}(\mathbb{Z} \Sy_p)) )\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ with $Γ/(ι\circ r(\mathbb{Z}_{(p)}\!\Sy_p)$ and $(Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}})\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ with $Γ/Λ$.
By the classification of finitely generated $\mathbb{Z}$-modules, each finite $\mathbb{Z}$-module $M$ is isomorphic to a finite direct sum of modules of the form $\mathbb{Z}/q^a \mathbb{Z}$, where $q$ is a prime and $a\in \mathbb{Z}_{\geq 0}$. If $q\neq p$ then $(\mathbb{Z}/q^a \mathbb{Z}) \underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)} \cong (0)$. Otherwise $(\mathbb{Z}/p^a \mathbb{Z}) \underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)} \cong \mathbb{Z}_{(p)}/p^a\mathbb{Z}_{(p)}$ and $|(\mathbb{Z}/p^a \mathbb{Z}) \underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}| = p^a = |\mathbb{Z}/p^a\mathbb{Z}|$. For $x=p^{a_p} \cdot \prod_{\natop{q \text{ prime}}{q\neq p}} q^{a_q} ∈\mathbb{Z}_{\geq 1}$, we set \[(x)_p := p^{a_p}.\]
So for finite $M$, we have $|M\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}| = (|M|)_p$.
By the total index formula (cf. e.g.\ \cite[Proposition 1.1.4]{Ku99}), we have \begin{align*}
|Γ^{\mathbb{Z}}/(ι^{\mathbb{Z}}\circ r^{\mathbb{Z}}(\mathbb{Z} \Sy_p))| = \sqrt{\frac{p!^{p!}}{\prod_{λ \dashv p} n_λ^{n_λ^2}}}\,. \end{align*} By the hook formula (cf. \cite[20.1]{Ja78}, \cite[Lemma 4.2.7]{Ku99}), we have for $λ\dashv p$ \begin{align*} (n_λ)_p = \begin{cases} 1 & \text{if $λ$ is a hook-partition} \\ p & \text{ otherwise} \end{cases}\,. \end{align*} So \begin{align*}
|Γ/(ι\circ r(\mathbb{Z}_{(p)}\!\Sy_p))| =\,& \left(\sqrt{\frac{p!^{p!}}{\prod_{λ \dashv p} n_λ^{n_λ^2}}}\right)_{\mathllap{p}} = \sqrt{\frac{p^{p!}}{\prod_{\natop{λ \dashv p}{λ\text{ not a hook}}} (n_λ)_p^{n_λ^2}}}\\ =\,& \sqrt{\frac{\prod_{λ\dashv p} p^{n_λ^2}}{\prod_{\natop{λ \dashv p}{λ\text{ not a hook}}} p^{n_λ^2}}} = \sqrt{\prod_{k∈[1,p]} p^{n_{λ^k}^2}} = p^{\frac{1}{2}\sum_{k∈[1,p]}\binom{p-1}{k-1}^2}\\
=\,& |Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}| = (|Γ^{\mathbb{Z}}/Λ^{\mathbb{Z}}|)_p= |Γ/Λ|. \end{align*} By the pigeon-hole-principle, $s$ is an isomorphism as it is surjective. As \eqref{diagr} has exact rows, $r$ needs to be an isomorphism as well. Note that the functor $-\underset{\mathbb{Z}}{\otimes } \mathbb{Z}_{(p)}$ transforms morphisms of $\mathbb{Z}$-algebras into morphisms of $\mathbb{Z}_{(p)}$-algebras. In particular, the left rectangle in \eqref{diagr} consists of morphisms of $\mathbb{Z}_{(p)}$-algebras and $r:\mathbb{Z}_{(p)}\!\Sy_p→Λ$ is an isomorphism of $\mathbb{Z}_{(p)}$-algebras. We have proven the}
Recall that $p\geq 3$ is a prime.
For $R$ a ring and $λ\dashv p$ a partition of $p$, the $R\!\Sy_p$-Specht module $S^λ$ is finitely generated free over $R$, with dimension independent of $R$, cf.\ \cite[8.1, proof of 8.4]{Ja78}. We denote this dimension by $n_λ$.
A partition of the form $λ^k := (p-k+1,1^{k-1})$, $k∈[1,p]$, is called a \textit{hook partition} of $p$.
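For instance, for $p=5$ the hook partitions are $λ^1 = (5)$, $λ^2 = (4,1)$, $λ^3 = (3,1,1)$, $λ^4 = (2,1,1,1)$ and $λ^5 = (1,1,1,1,1)$; the remaining partitions $(3,2)$ and $(2,2,1)$ of $5$ are not hook partitions.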
Over the valuation ring $\mathbb{Z}_{(p)}$, there is a well-known description of the group algebra $\mathbb{Z}_{(p)}\!\Sy_p$, cf. e.g.\ \cite[Corollary 4.2.8]{Ku99} (using \cite{Pe71}), cf.\ also \cite[Chapter 7]{Ro80}: \begin{pp} \label[pp]{lambda} Set $n^{k}_{\rm b} = \binom{p-2}{k-1}$ and $n^{k}_{\rm c} = \binom{p-2}{k-2}$ for $k∈[1,p]$. Then $n^{k}_{\rm b} + n^{k}_{\rm c} = \binom{p-1}{k-1} = n_{λ^k}$. Set
$ Γ := \prod_{λ \dashv p} \mathbb{Z}_{(p)}^{n_λ\times n_λ} $.
For $ρ∈Γ$, and $λ\dashv p$, we denote by $ρ^λ$ the $λ$-th component of $ρ$. For $λ=λ^k$, $k∈[1,p]$, a hook partition, we name certain subblocks of $ρ^{λ^k}$ as follows. \begin{equation*} ρ^{λ^k} = \xyprot{
\begin{tikzpicture}[decoration=brace,baseline]
\matrix (m) [matrix of math nodes,left delimiter=(,right delimiter=),
nodes={ outer sep=0pt},
row 1/.style={bottomrule},column 1/.style=rightrule] {
ρ^{λ^k}_{\rm cc} & ρ^{λ^k}_{\rm bc} \\
ρ^{λ^k}_{\rm cb} & ρ^{λ^k}_{\rm bb} \\
};
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-1-2.north east) -- node[right=2pt] {$n^k_{\rm c}$ rows} (m-1-2.south east);
\draw[decorate,transform canvas={xshift=1.5em},thick] (m-2-2.north east) -- node[right=2pt] (ru) {$n^k_{\rm b}$ rows} (m-2-2.south east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-1.north west) -- node[above=2pt] {$n^k_{\rm c}$,} (m-1-1.north east);
\draw[decorate,transform canvas={yshift=0.5em},thick] (m-1-2.north west) -- node[above=2pt] {$n^k_{\rm b}$ \rlap{columns}} (m-1-2.north east);
\node[transform canvas={xshift=1.5em}] at (ru.south east) {.}; \end{tikzpicture} } \end{equation*} We have the following $\mathbb{Z}_{(p)}$-subalgebra $Λ$ of $Γ$. \begin{align*}
Λ &:= \{ρ∈Γ \mid ρ^{λ^k}_{\rm bb} \equiv_p ρ^{λ^{k+1}}_{\rm cc}\text{ for }k∈[1,p-1]\text{ and } ρ^{λ^k}_{\rm bc} \equiv_p 0\text{ for }k∈[1,p]\} \end{align*}
Now there is an isomorphism of $\mathbb{Z}_{(p)}$-algebras
\[\begin{array}{rccc} r:&\mathbb{Z}_{(p)}\!\Sy_p & \xrightarrow{\phantom{X}\sim\phantom{X}}& Λ \end{array} \] such that $ρ∈Λ$ acts on the trivial $\mathbb{Z}_{(p)}\!\Sy_p$-module $\mathbb{Z}_{(p)}$ by multiplication with its scalar ($1\times 1$) component $ρ^{λ^1}$, i.e.\ for $x∈\mathbb{Z}_{(p)}\!\Sy_p$ and $y∈\mathbb{Z}_{(p)}$, we have $yx = y\cdot r(x)^{λ^1}$.
\end{pp}
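For instance, for $p=5$ we obtain $(n^{1}_{\rm c},\ldots,n^{5}_{\rm c}) = (0,1,3,3,1)$ and $(n^{1}_{\rm b},\ldots,n^{5}_{\rm b}) = (1,3,3,1,0)$, so that $n_{λ^k} = \binom{4}{k-1}$ takes the values $1,4,6,4,1$; cf.\ the example below.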
\kommentar{\begin{bsp} FIXME: Dieses Bsp. rausmachen. \label[bsp]{bsp:s3} For $p=3$, the ring $\mathbb{Z}_{(3)}\Sy_3$ is isomorphic to the subring $Λ$ of $Γ=\mathbb{Z}_{(3)}^{1\times 1} \times \mathbb{Z}_{(3)}^{2\times 2} \times \mathbb{Z}_{(3)}^{1\times 1}$ described as \begin{center} \xyprot{
\noindent \begingroup \renewcommand*{\arraystretch}{0.25} \begin{tikzpicture}[inner xsep=0.1mm, inner ysep=0.0mm, cell/.style={rectangle,draw=black}, space/.style={minimum height=1.0em,matrix of nodes,row sep=0.0mm,column sep=1mm,
}] \matrix (m1) [space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle{\times}\\ \scriptstyle\times \\ \scriptstyle{\times}\end{matrix*}$}] { \nsm{\mathbb{Z}_{(3)}\vphantom{a_1^1}}\\ }; \matrix (m2) [right=of m1, space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle\times \\ \scriptstyle{\times}{\times}\end{matrix*}$}] { \nsm{\mathbb{Z}_{(3)}\vphantom{a_1^1}} & \nsm{3\mathbb{Z}_{(3)}\vphantom{a_1^1}}\\% \hline \nsm{\mathbb{Z}_{(3)}\vphantom{a_1^1}} & \nsm{\mathbb{Z}_{(3)}\vphantom{a_1^1}} \\ }; \matrix (m3) [right=of m2,space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l] \scriptstyle{\times}{\times}{\times}\end{matrix*}$}] { \nsm{\mathbb{Z}_{(3)}\vphantom{a_1^1}}\\ }; \node[draw=black, fit=(m1-1-1)](B12) {}; \node[draw=black, fit=(m2-1-1)](B21) {}; \node[draw=black, fit=(m2-2-2)](B22) {}; \node[draw=black, fit=(m3-1-1) ](B31) {}; \draw[black] (B12.east) -- node[sloped,above]{\raisebox{0.5mm}{$\scriptscriptstyle 3$}} (B21.west); \draw[black] (B22.east) -- node[sloped,above]{\raisebox{0.5mm}{$\scriptscriptstyle 3$}} (B31.west);
\end{tikzpicture}. \endgroup } \end{center} An entry in this tuple of matrices indicates that an element of $Λ$ must have its corresponding entry in the indicated set. A relation "\begin{tikzpicture} \node (n1) {}; \node (n2) [right=of n1] {}; \draw[black] (n1.east) -- node[sloped,above]{$\scriptscriptstyle p$} (n2.west); \end{tikzpicture}" between (equal sized) subblocks indicates that these subblocks are equivalent modulo $p$, i.e.\ the difference of corresponding entries is an element of $p\mathbb{Z}_{(p)}$. The blocks are labeled with the diagrams of the corresponding partitions. Alternatively, $Λ$ is the $\mathbb{Z}_{(3)}$-span of \begin{align*} \left( 3, \begin{pmatrix} 0 & 0 \\ 0& 0\end{pmatrix}, 0 \right)=:u, && \left( 1, \begin{pmatrix} 1 & 0 \\ 0& 0\end{pmatrix}, 0 \right)=:\tilde e_1, && \left( 0, \begin{pmatrix} 0 & 3 \\ 0& 0\end{pmatrix}, 0 \right)=: v_1,\\ \left( 0, \begin{pmatrix} 0 & 0 \\ 1& 0\end{pmatrix}, 0 \right)=: v_2,&& \left( 0, \begin{pmatrix} 0 & 0 \\ 0& 1\end{pmatrix}, 1 \right)=:\tilde e_2, && \left( 0, \begin{pmatrix} 0 & 0 \\ 0& 3\end{pmatrix}, 0\right)=: w. \end{align*} We have an orthogonal decomposition $1=\tilde e_1 + \tilde e_2$ into primitive idempotents. Thus we have a decomposition $Λ=\tilde P_1 \oplus \tilde P_2$ into indecomposable projective right modules, where \begin{align*} \tilde P_1 :=\,& \tilde e_1 Λ = \langle u, \tilde e_1, v_1\rangle_{\mathbb{Z}_{(3)}}, & \tilde P_2 :=\,& \tilde e_2 Λ = \langle v_2, \tilde e_2, w \rangle_{\mathbb{Z}_{(3)}}. \end{align*} A more pictoral description of $\tilde P_i$, $i∈[1,2]$ is that the elements of $\tilde P_i⊂Λ$ are exactly the elements of $Λ$ whose non-zero entries are all in the box in FIXME.
In this case all partitions of $3$ are of hook-type. Thus there appear no full matrix algebras as direct factors of $Λ$.
\end{bsp} }
\begin{bsp} For $p=5$, the $\mathbb{Z}_{(5)}$-algebra $\mathbb{Z}_{(5)}\!\Sy_5$ is isomorphic to the subalgebra $Λ$ of $Γ=\mathbb{Z}_{(5)}^{1\times 1}\times \mathbb{Z}_{(5)}^{4\times 4}\times \mathbb{Z}_{(5)}^{6\times 6}\times \mathbb{Z}_{(5)}^{4\times 4}\times \mathbb{Z}_{(5)}^{1\times 1}\times \mathbb{Z}_{(5)}^{5\times 5}\times \mathbb{Z}_{(5)}^{5\times 5}$ described as \begin{center} \xyprot{
\noindent \begingroup \renewcommand*{\arraystretch}{0.25} \newcommand{-0.9mm}{-0.9mm} \begin{tikzpicture}[inner xsep=0.1mm, inner ysep=0.0mm,node distance=4mm,
space/.style={minimum height=1.0em,matrix of nodes,row sep=1.0mm,column sep=1mm,
} ] \matrix (m1) [space, below delimiter=\}\rotatebox{0}{\smash{$\begin{matrix*}[l]\scriptstyle{\times}\\ \scriptstyle\times \\ \scriptstyle\times \\ \scriptstyle\times \\ \scriptstyle{\times}\end{matrix*}$}}] { \asm{\,\mathbb{Z}_{(5)}}\\ }; \matrix (m2) [right=of m1, space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle\times \\ \scriptstyle\times \\ \scriptstyle\times \\ \scriptstyle{\times}{\times}\end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\% \hline \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \,\asm{\mathbb{Z}_{(5)}}\, \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ }; \matrix (m3) [right=of m2, space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle\times \\ \scriptstyle\times \\ \scriptstyle{\times}{\times}{\times}\end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ }; \matrix (m4) [right=of m3, space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle\times \\ \scriptstyle{\times}{\times}{\times}{\times}\end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{5\mathbb{Z}_{(5)}} \\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} \\ }; \matrix (m5) [right=of m4,space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l] \scriptstyle{\times}{\times}{\times}{\times}{\times}\end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}\,}\\ }; \node[draw=black, fit=(m1-1-1)](B12) {}; \node[draw=black, fit=(m2-1-1)](B21) {}; \node[draw=black, fit=(m2-2-2.north west) (m2-4-4.south east)](B22) {}; \node[draw=black, fit=(m3-1-1.north west) (m3-3-3.south east)](B31) {}; \node[draw=black, fit=(m3-4-4.north west) (m3-6-6.south east)](B32) {}; \node[draw=black, fit=(m4-1-1.north west) (m4-3-3.south east)](B41) {}; \node[draw=black, fit=(m4-4-4)] (B42) {}; \node[draw=black, fit=(m5-1-1)] (B51) {}; \draw[black] (B12.east) -- node[sloped,above,pos=0.6]{\raisebox{0.5mm}{$\scriptscriptstyle 5$}} (B21.west); \draw[black] (B22.east) -- node[sloped,above,pos=0.6]{\raisebox{0.5mm}{$\scriptscriptstyle 5$}} (B31.west); \draw[black] (B32.east) -- node[sloped,above,pos=0.6]{\raisebox{0.5mm}{$\scriptscriptstyle 5$}} (B41.west); \draw[black] (B42.east) -- 
node[sloped,above,pos=0.6]{\raisebox{0.5mm}{$\scriptscriptstyle 5$}} (B51.west);
\node[left=-0.9mm] (C011) at (m1-1-1.south west) {}; \node[below=-0.9mm] (C11) at (C011) {}; \node[below=-0.9mm] (C12) at (m1-1-1.south east) {}; \node[below=-0.9mm] (C13) at (m2-1-1.south west) {}; \node[below=-0.9mm] (C14) at (m2-1-4.south east) {}; \node[above=-0.9mm] (C15) at (m2-1-4.north east) {}; \node[above=-0.9mm] (C16) at (m2-1-1.north west) {}; \node[above=-0.9mm] (C17) at (m1-1-1.north east) {}; \node[left=-0.9mm] (C018) at (m1-1-1.north west) {}; \node[above=-0.9mm] (C18) at (C018) {}; \draw[red,rounded corners=2pt] (C11.center) -- (C12.center) -- (C13.center) -- (C14.center) -- (C15.center) -- node[above]{\raisebox{1.5mm}{$\tilde P_1$}} (C16.center) -- (C17.center) -- (C18.center) -- cycle;
\node[below=-0.9mm] (C21) at (m2-2-1.south west) {}; \node[below=-0.9mm] (C22) at (m2-2-4.south east) {}; \node[below=-0.9mm] (C23) at (m3-1-1.south west) {}; \node[below=-0.9mm] (C24) at (m3-1-6.south east) {}; \node[above=-0.9mm] (C25) at (m3-1-6.north east) {}; \node[above=-0.9mm] (C26) at (m3-1-1.north west) {}; \node[above=-0.9mm] (C27) at (m2-2-4.north east) {}; \node[above=-0.9mm] (C28) at (m2-2-1.north west) {}; \draw[red,rounded corners=2pt] (C21.center) -- (C22.center) -- (C23.center) -- (C24.center) -- (C25.center) -- node[above]{\raisebox{1.5mm}{$\tilde P_2$}} (C26.center) -- (C27.center) -- (C28.center) -- cycle;
\node[below=-0.9mm] (C31) at (m3-4-1.south west) {}; \node[below=-0.9mm] (C32) at (m3-4-6.south east) {}; \node[below=-0.9mm] (C33) at (m4-1-1.south west) {}; \node[below=-0.9mm] (C34) at (m4-1-4.south east) {}; \node[above=-0.9mm] (C35) at (m4-1-4.north east) {}; \node[above=-0.9mm] (C36) at (m4-1-1.north west) {}; \node[above=-0.9mm] (C37) at (m3-4-6.north east) {}; \node[above=-0.9mm] (C38) at (m3-4-1.north west) {}; \draw[red,rounded corners=2pt] (C31.center) -- (C32.center) -- (C33.center) -- (C34.center) -- (C35.center) -- node[above]{\raisebox{1.5mm}{$\tilde P_3$}} (C36.center) -- (C37.center) -- (C38.center) -- cycle;
\node[below=-0.9mm] (C41) at (m4-4-1.south west) {}; \node[below=-0.9mm] (C42) at (m4-4-4.south east) {}; \node[below=-0.9mm] (C43) at (m5-1-1.south west) {}; \node[right=-0.9mm](C044) at (m5-1-1.south east) {}; \node[below=-0.9mm] (C44) at (C044) {}; \node[right=-0.9mm](C045) at (m5-1-1.north east) {}; \node[above=-0.9mm] (C45) at (C045) {}; \node[above=-0.9mm] (C46) at (m5-1-1.north west) {}; \node[above=-0.9mm] (C47) at (m4-4-4.north east) {}; \node[above=-0.9mm] (C48) at (m4-4-1.north west) {}; \draw[red,rounded corners=2pt] (C41.center) -- (C42.center) -- (C43.center) -- (C44.center) -- (C45.center) -- node[above]{\raisebox{1.5mm}{$\tilde P_4$}} (C46.center) -- (C47.center) -- (C48.center) -- cycle;
\end{tikzpicture} \\* \nopagebreak \begin{tikzpicture}[inner xsep=0.1mm, inner ysep=0.0mm,node distance=4mm, cell/.style={rectangle,draw=black}, space/.style={minimum height=1.0em,matrix of nodes,row sep=0.0mm,column sep=1mm,
}] \matrix (mx) [space, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l] \scriptstyle{\times}{\times} \\ \scriptstyle{\times}{\times}{\times} \end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ }; \matrix (my) [space,right=of mx, below delimiter=\}\rotatebox{0}{$\begin{matrix*}[l]\scriptstyle{\times}\\ \scriptstyle{\times}{\times} \\ \scriptstyle{\times}{\times} \end{matrix*}$}] { \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}} & \asm{\mathbb{Z}_{(5)}}\\ }; \end{tikzpicture}. \endgroup } \end{center} An entry in this tuple of matrices indicates that an element of $Λ$ must have its corresponding entry in the indicated set. A relation "\begin{tikzpicture} \node (n1) {}; \node (n2) [right=of n1] {}; \draw[black] (n1.east) -- node[sloped,above]{$\scriptscriptstyle 5$} (n2.west); \end{tikzpicture}" between (equal sized) subblocks indicates that these subblocks are equivalent modulo $5$, i.e.\ the difference of corresponding entries is an element of $5\mathbb{Z}_{(5)}$. The blocks are labeled with the diagrams of the corresponding partitions.
The right ideals $\tilde P_i = \tilde e_iΛ$, $i∈[1,4]=[1,p-1]$ (cf.\ the definitions below) are framed with red lines.
\end{bsp}
\subsection {A projective resolution of \texorpdfstring{$\mathbb{Z}_{(p)}$}{Zp} over \texorpdfstring{$\mathbb{Z}_{(p)}\!\Sy_p$}{ZpSp}} \label{secpreszp}
Recall that $p\geq 3$ is a prime.
Recall from \cref{lambda} that $Λ$ is a subring of $Γ=\prod_{λ\dashv p} \mathbb{Z}_{(p)}^{n_λ\times n_λ}$.
For $λ\dashv p$ and $i,j∈[1,n_λ]$, we set $η_{λ,i,j}$ to be the element of $Γ$ such that $(η_{λ,i,j})^{\tilde{λ}} = 0$ for $\tilde{λ}\neq λ$ and $(η_{λ,i,j})^{λ}∈\mathbb{Z}_{(p)}^{n_λ\times n_λ}$ has entry $1$ at position $(i,j)$ and zeros elsewhere.
Let $k∈[1,p-1]$. We obtain the idempotent \begin{align*} \tilde e_k := η_{λ^k,n_{\rm c}^{k}+1,n_{\rm c}^{k}+1} + η_{λ^{k+1},1,1} ∈ Λ. \end{align*}
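Indeed, $\tilde e_k∈Λ$: the only non-zero entries of $\tilde e_k$ are the $(1,1)$-entry of the block $(\tilde e_k)^{λ^k}_{\rm bb}$ and the $(1,1)$-entry of the block $(\tilde e_k)^{λ^{k+1}}_{\rm cc}$; as $n_{\rm c}^{k+1} = n_{\rm b}^{k}\geq 1$, these two blocks agree, all blocks of type ${\rm bc}$ vanish, and the remaining congruences in \cref{lambda} only involve zero blocks of $\tilde e_k$. Moreover, $\tilde e_k$ is an idempotent, since $η_{λ^k,n_{\rm c}^{k}+1,n_{\rm c}^{k}+1}$ and $η_{λ^{k+1},1,1}$ are orthogonal idempotents in $Γ$.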
We define corresponding projective right $Λ$-modules \begin{align*} \tilde P_k :=\,& \tilde e_k Λ \quad \text{ for $k∈[1,p-1]$.} \end{align*}
\begin{bem} \label[bem]{rem:moralg} Let $A$ be an $R$-algebra and let $e,e'∈A$ be two idempotents. For the right modules $eA$, $e'A$, we have the isomorphism of $R$-modules \[ \begin{array}{rcl}
\Hom_A(eA, e'A) &\underset{\sim}{\overset{T_{e',e}}{\longrightarrow}}& e'Ae\\ f &\longmapsto& T_{e',e}(f) := f(e)\\ T_{e',e}^{-1}(e'be):=(ea \mapsto e'bea) &\reflectbox{$\longmapsto$}&
e'be \phantom{:= f(e)X}. \end{array}\] Thus given $m∈e'Ae$, the morphism $T_{e',e}^{-1}(m)$ acts on elements $x∈eA$ by the multiplication of $m$ on the left: $(T_{e',e}^{-1}(m))(x) = m\cdot x$.
Given idempotents $e,e',e''∈A$ and elements $f∈\Hom_A(eA,e'A)$, $g∈\Hom_A(e'A,e''A)$, we have $T_{e'',e}(g\circ f) = g(f(e)) = g(e'f(e)) = g(e')\cdot f(e)= T_{e'',e'}(g)\cdot T_{e',e}(f)$. \end{bem} \begin{Def} \label[Def]{def:diffs}
We define via \cref{rem:moralg} \begin{equation*} \begin{array}{lclll} \hat e_{k} &:=\,& T_{\tilde e_k, \tilde e_k}^{-1}(\tilde e_k) &∈\Hom_Λ(\tilde P_k,\tilde P_k) & \text{for }k∈[1,p-1]\\ \hat e_{1,1} &:=\,& T_{\tilde e_1,\tilde e_1}^{-1}(pη_{λ^1,1,1}) &∈\Hom_Λ(\tilde P_1,\tilde P_1) \\ \hat e_{p-1,p-1}&:=\,& T_{\tilde e_{p-1},\tilde e_{p-1}}^{-1}(pη_{λ^p,1,1}) &∈\Hom_Λ(\tilde P_{p-1},\tilde P_{p-1})\\ \hat e_{k+1,k} &:=\,& T_{\tilde e_{k+1},\tilde e_k}^{-1}(η_{λ^{k+1},n_{\rm c}^{k+1}+1,1}) &∈\Hom_Λ(\tilde P_k,\tilde P_{k+1}) & \text{for }k∈[1,p-2]\\ \hat e_{k,k+1} &:=\,& T_{\tilde e_k,\tilde e_{k+1}}^{-1}(pη_{λ^{k+1},1,n_{\rm c}^{k+1}+1}) &∈\Hom_Λ(\tilde P_{k+1},\tilde P_k) & \text{for }k∈[1,p-2]. \end{array} \end{equation*} Note that $\hat e_k$ is the identity map on $\tilde P_k$ for $k∈[1,p-1]$.
Moreover, we define the $\mathbb{Z}_{(p)}\!\Sy_p$-linear map $\hat{ε}:\tilde P_1 → \mathbb{Z}_{(p)}$, $\hat{ε}(ρ) := ρ^{λ^1}$.
\end{Def}
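For illustration: by \cref{rem:moralg}, the map $\hat e_{k+1,k}$ acts on $\tilde P_k$ by left multiplication with $η_{λ^{k+1},n_{\rm c}^{k+1}+1,1}$. This element has its single non-zero entry in the block $ρ^{λ^{k+1}}_{\rm cb}$, on which the definition of $Λ$ imposes no condition, and one checks $\tilde e_{k+1}\,η_{λ^{k+1},n_{\rm c}^{k+1}+1,1}\,\tilde e_k = η_{λ^{k+1},n_{\rm c}^{k+1}+1,1}$, so it indeed lies in $\tilde e_{k+1}Λ\tilde e_k$. The factors $p$ in the definitions of $\hat e_{k,k+1}$, $\hat e_{1,1}$ and $\hat e_{p-1,p-1}$ are forced by the congruences defining $Λ$: the entry at position $(1,n_{\rm c}^{k+1}+1)$ of the $λ^{k+1}$-component lies in the block $ρ^{λ^{k+1}}_{\rm bc}$, which satisfies $ρ^{λ^{k+1}}_{\rm bc}\equiv_p 0$ on $Λ$, and $η_{λ^1,1,1}$ resp.\ $η_{λ^p,1,1}$ alone would violate $ρ^{λ^1}_{\rm bb} \equiv_p ρ^{λ^2}_{\rm cc}$ resp.\ $ρ^{λ^{p-1}}_{\rm bb} \equiv_p ρ^{λ^p}_{\rm cc}$, since $n_{\rm c}^{1} = 0$ and $n_{\rm b}^{p} = 0$.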
It is straightforward to show the following lemma. \begin{lemma} \label[lemma]{lem:zprel} We have \begin{equation*} \begin{array}{lclr} \hat e_{1,1} + \hat e_{1,2}\circ \hat e_{2,1} &=& p\hat e_1\\ \hat e_{k,k-1}\circ\hat e_{k-1,k} + \hat e_{k,k+1}\circ\hat e_{k+1,k} &=& p\hat e_k &\text{ for }k∈[2,p-2]\\ \hat e_{p-1,p-2}\circ\hat e_{p-2,p-1} + \hat e_{p-1,p-1} &=& p\hat e_{p-1}\\
\hat{ε}\circ \hat e_{1,1} &=& p\hat{ε}. \end{array} \end{equation*} \end{lemma}
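For illustration, the first relation can be verified via \cref{rem:moralg} as follows; the other relations are checked in the same way. We have \begin{align*} T_{\tilde e_1,\tilde e_1}(\hat e_{1,1} + \hat e_{1,2}\circ \hat e_{2,1}) =\,& pη_{λ^1,1,1} + pη_{λ^2,1,n_{\rm c}^{2}+1}\cdot η_{λ^2,n_{\rm c}^{2}+1,1}\\ =\,& p(η_{λ^1,1,1} + η_{λ^2,1,1}) = p\tilde e_1 = T_{\tilde e_1,\tilde e_1}(p\hat e_1), \end{align*} and $T_{\tilde e_1,\tilde e_1}$ is injective.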
Furthermore, it is straightforward to check that we obtain a projective resolution of $\mathbb{Z}_{(p)}$ as follows. We set \begin{align*} \tilde{\pr}_i :=\,& \begin{cases} \tilde P_{ω(i)} & i\geq 0 \\ 0 & i<0 \end{cases}, \end{align*} where the integer $ω(i)$ is given by the following construction: Recall the stipulation $l:=2(p-1)$. We have $i = jl+r$ for some $j∈\mathbb{Z}$ and $0\leq r\leq l-1$. Then \begin{align} \label{omega} ω(i) := \begin{cases} r+1 & \text{ for }0\leq r \leq p-2\\ l-r = 2(p-1) -r & \text{ for } p-1 \leq r \leq 2(p-1)-1 = l-1 \end{cases}. \end{align} So $ω(i)$ increases by steps of one from $1$ to $p-1$ as $i$ runs from $jl$ to $jl+(p-2)$ and $ω(i)$ decreases from $p-1$ to $1$ as $i$ runs from $jl+(p-1)$ to $jl + (l-1)$. Finally we set \begin{align*} \hat d_i := \begin{cases} \hat e_{ω(i-1),ω(i)}: \tilde P_{ω(i)} → \tilde P_{ω(i-1)} & i\geq 1\\ 0 & i\leq 0 \end{cases}. \end{align*} Now we have the projective resolution of $\mathbb{Z}_{(p)}$ \begin{align} \label{presform1} \cdots\xrightarrow{\hat d_3} \tilde{\pr}_2\xrightarrow{\hat d_2}\tilde{\pr}_1\xrightarrow{\hat d_1}\tilde{\pr}_0\xrightarrow{0=\hat d_0} 0\rightarrow\cdots\,, \end{align} written more explicitly as \begin{align*} \cdots\rightarrow &\tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \xrightarrow{\hat e_{1,1}} \tilde P_1 \xrightarrow{\hat e_{2,1}} \tilde P_2 \rightarrow … \rightarrow \tilde P_{p-2} \xrightarrow{\hat e_{p-1,p-2}}\tilde P_{p-1} \nonumber \\
&\xrightarrow{\hat e_{p-1,p-1}} \tilde P_{p-1} \xrightarrow{\hat e_{p-2,p-1}}\tilde P_{p-2}\rightarrow … \rightarrow \tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \rightarrow 0\rightarrow \cdots\,, \end{align*} with augmentation $\hat{ε}:\tilde P_1 → \mathbb{Z}_{(p)}$.
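For instance, for $p=5$ we have $l=8$ and $(ω(0),ω(1),\ldots,ω(8)) = (1,2,3,4,4,3,2,1,1)$, so that e.g.\ $\hat d_1 = \hat e_{1,2}$, $\hat d_4 = \hat e_{4,4} = \hat e_{p-1,p-1}$ and $\hat d_8 = \hat e_{1,1}$.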
\kommentar{
Then let \begin{enumerate}[(1)] \item $\mathscr{B}^{\Leftrightarrow}:=\{β_{k,x,y}^{\Leftrightarrow} \mid k∈[1,p-1], x,y∈[1,n_{\rm b}^{k}]\}$, where $β_{k,x,y}^{\Leftrightarrow} := η_{λ^k,n_{\rm c}^{k}+x,n_{\rm c}^{k}+y} + η_{λ^{k+1},x,y}$. \item $\mathscr{B}^{\Leftarrow}:=\{β_{k,x,y}^{\Leftarrow} \mid k∈[1,p-1], x,y∈[1,n_{\rm b}^{k}]\}$, where \mbox{$β_{k,x,y}^{\Leftarrow} := pη_{λ^k,n_{\rm c}^{k}+x,n_{\rm c}^{k}+y}$}. \item $\mathscr{B}^{\Rightarrow}:=\{β_{k,x,y}^{\Rightarrow} \mid k∈[1,p-1], x,y∈[1,n_{\rm b}^{k}]\}$, where \mbox{$β_{k,x,y}^{\Rightarrow} := pη_{λ^{k+1},x,y}$}. \item $\mathscr{B}^{\leftarrow} := \{β_{k,x,y}^{\leftarrow} \mid k∈[1,p], x∈[1,n_{\rm b}^{k}],y∈[1,n_{\rm c}^{k}]\}$, where
\mbox{$β_{k,x,y}^{\leftarrow} := η_{λ^k,n_{\rm c}^{k}+x,y}$}. \item $\mathscr{B}^{\rightarrow} := \{β_{k,x,y}^{\rightarrow} \mid k∈[1,p], x∈[1,n_{\rm c}^{k}],y∈[1,n_{\rm b}^{k}]\}$, where \mbox{$β_{k,x,y}^{\rightarrow} := pη_{λ^k,x,n_{\rm c}^{k}+y}$}. \item $\mathscr{B}^*:=\{η_{λ,x,y} \mid λ\dashv p\text{ not a hook partition}, x,y∈[1,n_λ]\}$. \end{enumerate} We have two $\mathbb{Z}_{(p)}$-bases $\mathscr{B}^{\Leftrightarrow} \sqcup \mathscr{B}^{\Leftarrow} \sqcup \mathscr{B}^{\leftarrow} \sqcup \mathscr{B}^{\rightarrow} \sqcup \mathscr{B}^*$ and \mbox{$\mathscr{B}^{\Leftrightarrow} \sqcup \mathscr{B}^{\Rightarrow} \sqcup \mathscr{B}^{\leftarrow} \sqcup \mathscr{B}^{\rightarrow} \sqcup \mathscr{B}^*$} of $Λ$.
\begin{bsp}[$p=3$, continuation of \cref{bsp:s3}] The only of the $β_{a,b,c}^d$ that are defined above and that are not shown in \cref{bsp:s3} are the following elements. \begin{align*} \left( 0, \begin{pmatrix} 3 & 0 \\ 0& 0\end{pmatrix}, 0 \right)= β_{1,1,1}^⇒, && \left( 0, \begin{pmatrix} 0 & 0 \\ 0& 0\end{pmatrix}, 3 \right)= β_{2,1,1}^⇒ \end{align*} $\mathscr{B}^*$ is empty since all partitions are hook partitions. \end{bsp}
Once more, see \cref{bsp:s3} for an illustration of the case $p=3$.
Let \begin{enumerate}[(1)] \item $\mathscr{B}_k^{\Leftrightarrow} := (β_{k,1,y}^{\Leftrightarrow}\colon y∈(1,…,n_{\rm b}^{k})) = (η_{λ^k,n_{\rm c}^{k}+1,n_{\rm c}^{k}+y} + η_{λ^{k+1},1,y}\colon y∈(1,…,n_{\rm b}^{k}))$ \item $\mathscr{B}_k^{\Leftarrow} := (β_{k,1,y}^{\Leftarrow}\colon y∈(1,…,n_{\rm b}^{k})) = (pη_{λ^k,n_{\rm c}^{k}+1,n_{\rm c}^{k}+y}\colon y∈(1,…,n_{\rm b}^{k}))$ \item $\mathscr{B}_k^{\Rightarrow} := (β_{k,1,y}^{\Rightarrow}\colon y∈(1,…,n_{\rm b}^{k})) = (pη_{λ^{k+1},1,y} \colon y∈(1,…,n_{\rm b}^{k}))$ \item $\mathscr{B}_k^{\leftarrow} := (β_{k,1,y}^{\leftarrow}\colon y∈(1,…,n_{\rm c}^{{k}})) =(η_{λ^{k},n_{\rm c}^{k}+1,y} \colon y∈(1,…,n_{\rm c}^k))$ \item $\mathscr{B}_k^{\rightarrow} := (β_{k+1,1,y}^{\rightarrow}\colon y∈(1,…,n_{\rm b}^{k+1})) =(pη_{λ^{k+1},1,n_{\rm c}^{k+1}+y}\colon y∈(1,…, n_{\rm b}^{k+1}))$ \end{enumerate} \begin{bem} \label[bem]{bem:pbases} Similarly to the bases of $Λ$, the tuples $\mathscr{B}_k^{\Leftrightarrow}\sqcup \mathscr{B}_k^{\Leftarrow} \sqcup \mathscr{B}_k^{\leftarrow} \sqcup \mathscr{B}_k^{\rightarrow}$ and \mbox{$\mathscr{B}_k^{\Leftrightarrow}\sqcup \mathscr{B}_k^{\Rightarrow} \sqcup \mathscr{B}_k^{\leftarrow} \sqcup \mathscr{B}_k^{\rightarrow}$} are $\mathbb{Z}_{(p)}$-bases of $\tilde P_k$. \end{bem}
\begin{bem} Let $k∈[1,p-1]$. The idempotent $\tilde e_k$ is actually a primitive idempotent and thus $\tilde P_k$ is an indecomposable projective $Λ$-right module: Assume $\tilde e_k=c+c'$ for some idempotents $0\neq c,c'∈Λ$ that are orthogonal, that is $c\cdot c' = c'\cdot c = 0$. Then $\tilde e_k \cdot c = (c+c')c = c^2 = c = c(c+c') = c\cdot \tilde e_k$. Similarly, we have $\tilde e_k \cdot c' = c' = c'\cdot \tilde e_k$. Thus $c,c' ∈\tilde e_kΛ\tilde e_k$. The $\mathbb{Z}_{(p)}$-algebra \[\tilde e_kΛ\tilde e_k = \langle e_k, β_{k,1,1}^⇐\rangle_{\mathbb{Z}_{(p)}} = \langle e_k, β_{k,1,1}^⇒\rangle_{\mathbb{Z}_{(p)}}\] is isomorphic to the $\mathbb{Z}_{(p)}$-algebra \begin{equation*}
\begin{tikzpicture} \node (n1) {$\mathbb{Z}_{(p)}$}; \node (n2) [right=of n1] {$\mathbb{Z}_{(p)}$}; \draw[black] (n1.east) -- node[above]{$\scriptstyle p$} (n2.west); \node at (n1.west) {$\mathllap{J:=}$}; \end{tikzpicture} \end{equation*} consisting of elements $\{(a,b)∈\mathbb{Z}_{(p)}\times \mathbb{Z}_{(p)} \mid a\equiv_p b\}$. The only idempotents in $\mathbb{Z}_{(p)}\times \mathbb{Z}_{(p)}$ are $(0,0)∈J$, $(1,1)∈J$, $(1,0)\notin J$ and $(0,1)\notin J$. Thus the identity element $(1,1)$ of $J$ cannot be decomposed into non-trivial idempotents and the same holds for $\tilde e_k$. \end{bem}
\begin{lemma} \label[lemma]{lem:kerim} We have \begin{align*} &{\rm(a)}&&\ker \hat e_{k+1,k} = \langle \mathscr{B}_k^{\leftarrow} \sqcup \mathscr{B}_k^{⇐}\rangle_{\mathbb{Z}_{(p)}},&& \im \hat e_{k+1,k} = \langle \mathscr{B}_{k+1}^{\leftarrow} \sqcup \mathscr{B}_{k+1}^{⇐}\rangle_{\mathbb{Z}_{(p)}} \,\, \text{ for $k∈[1,p-2]$},\\ &{\rm(b)}&&\ker \hat e_{k,k+1} = \langle \mathscr{B}_{k+1}^→\sqcup \mathscr{B}_{k+1}^⇒\rangle_{\mathbb{Z}_{(p)}}, &&\im \hat e_{k,k+1} = \langle \mathscr{B}_k^{→}\sqcup \mathscr{B}_k^⇒ \rangle_{\mathbb{Z}_{(p)}}\,\,\text{ for $k∈[1,p-2]$},\\ &{\rm(c)}&&\ker \hat e_{p-1,p-1} {=} \langle \mathscr{B}_{p-1}^{\leftarrow} \sqcup \mathscr{B}_{p-1}^⇐ \rangle_{\mathbb{Z}_{(p)}}, &&\im \hat e_{p-1,p-1} {=} \langle \mathscr{B}_{p-1}^⇒ \sqcup \mathscr{B}_{p-1}^→\rangle_{\mathbb{Z}_{(p)}},\\ &{\rm(d)}&&\ker \hat e_{1,1} = \langle \mathscr{B}_1^⇒ \sqcup \mathscr{B}_1^→ \rangle_{\mathbb{Z}_{(p)}},&& \im \hat e_{1,1} = \langle \mathscr{B}_1^⇐ \sqcup \mathscr{B}_1^{\leftarrow} \rangle_{\mathbb{Z}_{(p)}}. \end{align*} \end{lemma} \begin{proof}
(a): \hphantom{XXX}
$ \begin{array}[t]{rcl} \hat e_{k+1,k}(\mathscr{B}_k^{⇔}) &\svs{\text{R.}\ref{rem:moralg}}& (η_{λ^{k+1},n_{\rm c}^{k+1}+1,1} η_{λ^{k+1},1,y} \colon y∈(1,…,n_{\rm b}^{k}))\\ &= & (η_{λ^{k+1},n_{\rm c}^{{k+1}}+1,y} \colon y∈(1,…,n_{\rm c}^{{k+1}})) = \mathscr{B}_{k+1}^{\leftarrow}\\ \hat e_{k+1,k}(\mathscr{B}_k^{→}) &\svs{\text{R.}\ref{rem:moralg}}& (η_{λ^{k+1},n_{\rm c}^{k+1}+1,1} pη_{λ^{k+1},1,n_{\rm c}^{{k+1}}+y} \colon y∈(1,…,n_{\rm b}^{k+1}))\\ &=& (pη_{λ^{k+1},n_{\rm c}^{k+1}+1,n_{\rm c}^{k+1}+y} \colon y∈(1,…,n_{\rm b}^{k+1})) = \mathscr{B}_{k+1}^{⇐}\\
\hat e_{k+1,k}(\mathscr{B}_k^{\leftarrow}) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\}\\ \hat e_{k+1,k}(\mathscr{B}_k^{⇐}) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\} \end{array}
$
Thus by \cref{bem:pbases}, assertion (a) holds.
(b): \hphantom{\textit{Proof.}XXX}
$ \begin{array}[t]{rcl} \hat e_{k,k+1}(\mathscr{B}_{k+1}^{⇔}) &\svs{\text{R.}\ref{rem:moralg}}& (pη_{λ^{k+1},1,n_{\rm c}^{k+1}+1} η_{λ^{k+1},n_{\rm c}^{{k+1}}+1,n_{\rm c}^{{k+1}}+y} \colon y∈(1,…,n_{\rm b}^{{k+1}})) \\ &=& (pη_{λ^{k+1},1,n_{\rm c}^{{k+1}}+y} \colon y∈(1,…,n_{\rm b}^{{k+1}})) = \mathscr{B}_k^{→}\\ \hat e_{k,k+1}(\mathscr{B}_{k+1}^{\leftarrow}) &\svs{\text{R.}\ref{rem:moralg}}& (pη_{λ^{k+1},1,n_{\rm c}^{{k+1}}+1} η_{λ^{k+1},n_{\rm c}^{{k+1}}+1,y} \colon y∈(1,…,n_{\rm c}^{{k+1}})) \\ &=& (pη_{λ^{k+1},1,y} \colon y∈(1,…,n_{\rm b}^{k})) = \mathscr{B}_k^⇒\\ \hat e_{k,k+1}(\mathscr{B}_{k+1}^→) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\}\\ \hat e_{k,k+1}(\mathscr{B}_{k+1}^⇒) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\} \end{array} $
Thus by \cref{bem:pbases}, assertion (b) holds.
(c): \hphantom{\textit{Proof.}XX}
$ \begin{array}[t]{rcl} \hat e_{p-1,p-1}(\mathscr{B}_{p-1}^⇔) &\svs{\text{R.}\ref{rem:moralg}}& (pη_{λ^p,1,1}η_{λ^{(p-1)+1},1,y} \colon y∈(1,…,n_{\rm b}^{{p-1}})) \\ &=& (pη_{λ^{(p-1)+1},1,y} \colon y∈(1,…,n_{\rm b}^{{p-1}})) = \mathscr{B}_{p-1}^⇒\\ \mathscr{B}_{p-1}^→ &=& () \text{ as $n_{\rm b}^{p} = 0$}\\ \hat e_{p-1,p-1}(\mathscr{B}_{p-1}^{\leftarrow}) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\}\\ \hat e_{p-1,p-1}(\mathscr{B}_{p-1}^⇐) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\} \end{array} $
Thus by \cref{bem:pbases}, assertion (c) holds.
(d): \hphantom{\textit{Proof.}XXXXX}
$ \begin{array}[t]{rcl} \hat e_{1,1}(\mathscr{B}_1^⇔) &\svs{\text{R.}\ref{rem:moralg}}& (pη_{λ^1,1,1}η_{λ^1,n_{\rm c}^{1}+1,n_{\rm c}^{1}+y} \colon y∈(1,…,n_{\rm b}^{1}))\\ &\svs{n_{\rm c}^{1}=0}& (pη_{λ^1,n_{\rm c}^{1}+1,n_{\rm c}^{1}+y} \colon y∈(1,…,n_{\rm b}^{1})) = \mathscr{B}_1^⇐\\ \mathscr{B}_1^{\leftarrow} &=& ()\text{ as $n_{\rm c}^{1}=0$}\\ \hat e_{1,1}(\mathscr{B}_1^⇒) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\}\\ \hat e_{1,1}(\mathscr{B}_1^→) &\svs[\subseteq]{\text{R.}\ref{rem:moralg}} & \{0\} \end{array} $
Thus by \cref{bem:pbases}, assertion (d) holds. \end{proof}
The trivial $\mathbb{Z}_{(p)}\!\Sy_p$-module $\mathbb{Z}_{(p)}$ becomes a $Λ$-module via the isomorphism of $\mathbb{Z}_{(p)}$-algebras $r:\mathbb{Z}_{(p)}\!\Sy_p→Λ$ described in \cref{lambda}. We want to construct a projective resolution of $\mathbb{Z}_{(p)}$ over $Λ$.
$Γ$ is a right $Λ$-module as $Λ$ is a subalgebra of $Γ$. The set $Γ^{λ^1}:=\{ρ∈Γ \mid ρ^λ = 0 \text{ for }λ\neq λ^1\}$ is a right $Λ$-submodule of $Γ$. As $n_{\rm c}^{k}= 0$ and $n_{\rm b}^k=1$, $Γ^{λ^1}$ is free over $\mathbb{Z}_{(p)}$ with basis $\{η_{λ^1,1,1}\}$.
Given a partition $λ\dashv p$, the operation of an element $x∈\mathbb{Z}_{(p)}\!\Sy_p$ on the Specht module corresponding to $λ$ is multiplication with the matrix $r(x)^λ$ with respect to a certain basis of that Specht module, cf.\ the definition of $r^{\mathbb{Z}}$ in the proof of \cref{lambda}.
As $\mathbb{Z}_{(p)}$ is the Specht module corresponding to the trivial partition $λ^1$ of $p$, and as $\mathbb{Z}_{(p)}$ is one-dimensional, the operation of $x∈\mathbb{Z}_{(p)}\!\Sy_p$ on $\mathbb{Z}_{(p)}$ is multiplication with the scalar $r(x)^{λ^1}$. Thus an element $ρ∈Λ$ operates on $\mathbb{Z}_{(p)}$ via multiplication with the scalar $ρ^{λ^1}$ and we haven an isomorphism of right $Λ$-modules by \begin{align*} \begin{array}{rlcl} \hat{ε}^1 : &Γ^{λ^1} & \longrightarrow & \mathbb{Z}_{(p)}\\ & η_{λ^1,1,1} & \longmapsto & 1. \end{array} \end{align*}
We have the morphism of right $Λ$-modules \begin{align*} \begin{array}{rlclr} \hat{ε}^0: &\tilde P_1 &\longrightarrow & Γ^{λ^1}\\ &\tilde e_1x &\longmapsto & η_{λ^1,1,1} \tilde e_1x= η_{λ^1,1,1}x & \text{ for }x∈Λ \end{array} \end{align*}
We have $\hat{ε}^0(\tilde e_1) = \hat{ε}^0(η_{λ^1,1,1} + η_{λ^2,1,1}) = η_{λ^1,1,1}$, thus $\hat{ε}^0$ is surjective as $\{η_{λ^1,1,1}\}$ is a $\mathbb{Z}_{(p)}$-basis of $Γ^{λ^1}$.
Given $x∈\tilde P_1$, we have $\hat e_{1,1}(x) = p \hat{ε}^0(x)$ as elements of $Γ$. Thus the maps $\hat e_{1,1}$ and $\hat{ε}^0$ have the same kernel. Concatenation with the isomorphism $\hat{ε}^1$ yields the surjective morphism of right $Λ$-modules \[ \hat{ε} := \hat{ε}^1 \circ \hat{ε}^0 : \tilde P_1 \longrightarrow \mathbb{Z}_{(p)},\] for which we have $\ker \hat{ε} = \ker \hat e_{1,1}$.
With these properties of $\hat{ε}$ and \cref{lem:kerim}, we are able to directly formulate a projective resolution of $\mathbb{Z}_{(p)}$:
We set \begin{align*} \tilde{\pr}_i :=\,& \begin{cases} \tilde P_{ω(i)} & i\geq 0 \\ 0 & i<0 \end{cases}, \end{align*} where the integer $ω(i)$ is given by the following construction: Recall the stipulation $l:=2(p-1)$. We have $i = jl+r$ for some $j∈\mathbb{Z}$ and $0\leq r\leq l-1$. Then \begin{align} \label{omega} ω(i) := \begin{cases} r+1 & \text{ for }0\leq r \leq p-2\\ l-r = 2(p-1) -r & \text{ for } p-1 \leq r \leq 2(p-1)-1 = l-1 \end{cases}. \end{align} So $ω(i)$ increases by steps of one from $1$ to $p-1$ as $i$ runs from $jl$ to $jl+(p-2)$ and $ω(i)$ decreases from $p-1$ to $1$ as $i$ runs from $jl+(p-1)$ to $jl + (l-1)$. Finally we set \begin{align*} \hat d_i := \begin{cases} \hat e_{ω(i-1),ω(i)}: \tilde P_{ω(i)} → \tilde P_{ω(i-1)} & i\geq 1\\ 0 & i\leq 0 \end{cases}. \end{align*} Now \cref{lem:kerim} gives the projective resolution of $\mathbb{Z}_{(p)}$ \begin{align} \label{presform1} \cdots\xrightarrow{\hat d_3} \tilde{\pr}_2\xrightarrow{\hat d_2}\tilde{\pr}_1\xrightarrow{\hat d_1}\tilde{\pr}_0\xrightarrow{0=\hat d_0} 0\rightarrow\cdots\,. \end{align} More explicitly, we have \begin{align*} \cdots\rightarrow &\tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \xrightarrow{\hat e_{1,1}} \tilde P_1 \xrightarrow{\hat e_{2,1}} \tilde P_2 \rightarrow … \rightarrow \tilde P_{p-2} \xrightarrow{\hat e_{p-1,p-2}}\tilde P_{p-1} \nonumber \\
&\xrightarrow{\hat e_{p-1,p-1}} \tilde P_{p-1} \xrightarrow{\hat e_{p-2,p-1}}\tilde P_{p-2}\rightarrow … \rightarrow \tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \rightarrow 0\rightarrow \cdots\,. \end{align*} The corresponding extended projective resolution is \begin{align*} \cdots \rightarrow &\tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \xrightarrow{\hat e_{1,1}} \tilde P_1 \xrightarrow{\hat e_{2,1}} \tilde P_2 \rightarrow … \rightarrow \tilde P_{p-2} \xrightarrow{\hat e_{p-1,p-2}}\tilde P_{p-1} \nonumber \\
&\xrightarrow{\hat e_{p-1,p-1}} \tilde P_{p-1} \xrightarrow{\hat e_{p-2,p-1}}\tilde P_{p-2}\rightarrow … \rightarrow \tilde P_2\xrightarrow{\hat e_{1,2}} \tilde P_1 \xrightarrow{\hat{ε}} \mathbb{Z}_{(p)} \rightarrow 0\rightarrow\cdots\,, \end{align*} which is an exact sequence.
We have proven the \begin{tm} \label[tm]{tm:przp} Recall that $p\geq 3$ is a prime. \\* The sequence \eqref{presform1} is a projective resolution of $\mathbb{Z}_{(p)}$, with augmentation $\tilde{\pr}_0=\tilde P_1 \xrightarrow{\hat{ε}}\mathbb{Z}_{(p)}$. \end{tm} \begin{lemma} \label[lemma]{lem:zprel} Recall that $p\geq 3$ is a prime. We have \begin{equation*} \begin{array}{lclr} \hat e_{1,1} + \hat e_{1,2}\circ \hat e_{2,1} &=& p\hat e_1\\ \hat e_{k,k-1}\circ\hat e_{k-1,k} + \hat e_{k,k+1}\circ\hat e_{k+1,k} &=& p\hat e_k &\text{ for }k∈[2,p-2]\\ \hat e_{p-1,p-2}\circ\hat e_{p-2,p-1} + \hat e_{p-1,p-1} &=& p\hat e_{p-1}\\
\hat{ε}\circ \hat e_{1,1} &=& p\hat{ε}. \end{array} \end{equation*} \end{lemma} \begin{proof} We have by \cref{rem:moralg} \begin{align*} T_{\tilde e_1,\tilde e_1}(&\hat e_{1,1} + \hat e_{1,2}\circ\hat e_{2,1}) = T_{\tilde e_1,\tilde e_1}(\hat e_{1,1}) + T_{\tilde e_1,\tilde e_2}(\hat e_{1,2})T_{\tilde e_2,\tilde e_1}(\hat e_{2,1})\\ &= pη_{λ^1,1,1} + pη_{λ^2,1,n_{\rm c}^2+1}η_{λ^2,n_{\rm c}^2+1,1} = p(η_{λ^1,1,1}+η_{λ^2,1,1}) = T_{\tilde e_1,\tilde e_1}(p\hat e_{1})\\
T_{\tilde e_k,\tilde e_k}(&\hat e_{k,k-1}\circ\hat e_{k-1,k} + \hat e_{k,k+1}\circ\hat e_{k+1,k})\\ &= T_{\tilde e_k,\tilde e_{k-1}}(\hat e_{k,k-1})T_{\tilde e_{k-1},\tilde e_k}(\hat e_{k-1,k}) + T_{\tilde e_k,\tilde e_{k+1}}(\hat e_{k,k+1})T_{\tilde e_{k+1},\tilde e_k}(\hat e_{k+1,k})\\ &= η_{λ^k,n_{\rm c}^k+1,1}pη_{λ^k,1,n_{\rm c}^k+1} + pη_{λ^{k+1},1,n_{\rm c}^{k+1}+1}η_{λ^{k+1},n_{\rm c}^{k+1}+1,1}\\ &= p(η_{λ^k,n_{\rm c}^k+1,n_{\rm c}^k+1} + η_{λ^{k+1},1,1}) = T_{\tilde e_k,\tilde e_k}(p\hat e_{k})\\
T_{\tilde e_{p-1},\tilde e_{p-1}}(&\hat e_{p-1,p-2}\circ\hat e_{p-2,p-1} + \hat e_{p-1,p-1})\\ &= T_{\tilde e_{p-1},\tilde e_{p-2}}(\hat e_{p-1,p-2})T_{\tilde e_{p-2},\tilde e_{p-1}}(\hat e_{p-2,p-1}) + T_{\tilde e_{p-1},\tilde e_{p-1}}(\hat e_{p-1,p-1})\\ &= η_{λ^{p-1},n_{\rm c}^{p-1}+1,1}pη_{λ^{p-1},1,n_{\rm c}^{p-1}+1} + pη_{λ^p,1,1}\\ &= p(η_{λ^{p-1},n_{\rm c}^{p-1}+1,n_{\rm c}^{p-1}+1} + η_{λ^p,1,1}) = T_{\tilde e_{p-1},\tilde e_{p-1}}(p\hat e_{p-1}). \end{align*} Finally for $x∈\tilde P_1$, we have \begin{align*} (\hat{ε}^0\circ \hat e_{1,1})(x) =\,& η_{λ^1,1,1}\cdot pη_{λ^1,1,1}\cdot x = pη_{λ^1,1,1}\cdot x = p \hat{ε}^0(x), \end{align*} thus $\hat{ε}\circ \hat e_{1,1} = \hat{ε}^1\circ \hat{ε}^0\circ \hat e_{1,1} = p\hat{ε}^1\circ \hat{ε}^0 = p\hat{ε}$. \end{proof} }
\subsection{A projective resolution of \texorpdfstring{$ {\mathbb{F}_{\!p}} $}{Fp} over \texorpdfstring{$ {\mathbb{F}_{\!p}} \!\Sy_p$}{FpSp}} \label{sec:prfp} \kommentar{ We have $\mathbb{Z}_{(p)}\!\Sy_p/(p\mathbb{Z}_{(p)}\!\Sy_p) \simeq {\mathbb{F}_{\!p}} \!\Sy_p$. Since $\mathbb{Z}_{(p)}\!\Sy_p \simeq Λ$, we identify $ {\mathbb{F}_{\!p}} \!\Sy_p = Λ/(pΛ)$. We obtain the desired projective resolution by reducing the projective resolution of $\mathbb{Z}_{(p)}$ "modulo $p$":
\subsubsection*{Reduction modulo $I$} Let $R$ be a principal ideal domain. Let $(A,ρ)$ be an $R$-algebra. Let $I$ be an ideal of $R$. We set $\bar{R}:=R/\!I$.
As $R$ is a principal ideal domain, $ρ(I)A$ is an additive subset of $A$. As $ρ(I)$ is a subset of the center of $A$, $ρ(I)A$ is an ideal of $A$ and $A/(ρ(I)A)=:\bar{A}$ is an $\bar{R}$-algebra.
We regard a right $A$-module $M_A$ as a right $R$-module $M_R$ via $m\cdot r := m\cdot ρ(r)$ for $m∈M$, $r∈R$. \begin{lemma} \label[lemma]{lem:ni} The functors $-\underset{A}{\otimes} \bar{A}$ and $-\underset{R}{\otimes} \bar{R}$ from ${\tt Mod}$-$A$ to ${\tt Mod}$-$R$ are naturally isomorphic. The natural isomorphism $-\underset{A}{\otimes} \bar{A}→-\underset{R}{\otimes} \bar{R}$ is given at the module $M_A$ by \begin{equation*} \begin{array}{rcl} M_A \underset{A}{\otimes} \leftidx{_A}{\bar{A}}{} &\overset{\sim}{\longrightarrow}& M_R\underset{R}{\otimes} \leftidx{_R}{\bar{R}}{}\\ m\otimes (a+ ρ(I)A) &\longmapsto & ma \otimes (1+I)\\ m\otimes (r+ρ(I)A)&\reflectbox{$\longmapsto$} & m\otimes (r+I). \end{array} \end{equation*} \end{lemma} \begin{proof} By the universal property of the tensor product, the two maps given above are well-defined and $R$-linear. Straightforward calculation gives that they invert each other and that we have a natural transformation. \end{proof}
\begin{lemma} \label[lemma]{lem:ef}
The functor $-\underset{A}{\otimes}\leftidx{_A}{\bar{A}}{_{\bar{A}}}$ from ${\tt Mod}$-$A$ to ${\tt Mod}$-$\bar A$ maps exact sequences of right $A$-modules that are free and of finite rank as $R$-modules to exact sequences of right $\bar{A}$-modules.
\end{lemma} \begin{proof} Because $-\underset{A}{\otimes}\leftidx{_A}{\bar{A}}{_{\bar{A}}}$ is an additive functor, it maps complexes to complexes. For considerations of exactness, we may compose our functor with the forgetful functor from ${\tt Mod}$-$\bar A$ to ${\tt Mod}$-$ R$. This composite is $-\underset{A}{\otimes}\leftidx{_A}{\bar{A}}{}$.
By the natural isomorphism given in \cref{lem:ni}, it suffices to show that $-\underset{R}{\otimes}\leftidx{_R}{\bar{R}}{}$ transforms exact sequences of right $A$-modules that are free and of finite rank as $R$-modules into exact sequences.
Let $\cdots\xrightarrow{d_{i+1}}M_i\xrightarrow{d_i}M_{i-1}\xrightarrow{d_{i-1}}\cdots$ be an exact sequence of right $A$-modules that are free and of finite rank as $R$-modules. Then $\im d_i$ is a submodule of the free $R$-module $M_{i-1}$. As $R$ is a principal ideal domain, $\im d_i$ is free. Hence the short exact sequence $\im d_{i+1} \rightarrow M_i \rightarrow \im d_i$ splits. Now the additive functor $-\underset{R}{\otimes}\leftidx{_R}{\bar{R}}{}$ maps split short exact sequences to (split) short exact sequences and the proof is complete. \end{proof} \subsubsection*{Reduction modulo $p$} }
The isomorphism $r:\mathbb{Z}_{(p)}\!\Sy_p \rightarrow Λ$ from \cref{lambda} induces an isomorphism of $ {\mathbb{F}_{\!p}} $-algebras $ {\mathbb{F}_{\!p}} \!\Sy_p = \mathbb{Z}_{(p)}\!\Sy_p/(p\mathbb{Z}_{(p)}\!\Sy_p) \xrightarrow{\bar{r}} Λ/(pΛ)=:\bar{Λ}$. \\* For the sake of simplicity in the next step, we identify $\bar{Λ}$ and $ {\mathbb{F}_{\!p}} \!\Sy_p$ along $\bar{r}$.
\begin{lemma} \label[lemma]{lem:prfp} Recall that $p\geq 3$ is a prime. Applying the functor $-\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}$\,, we obtain \begin{itemize} \item the projective modules $P_k:= \tilde P_k\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}$
for $k∈[1,p-1]$, \item $ {\mathbb{F}_{\!p}} :=\mathbb{Z}_{(p)}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}$ (the trivial $ {\mathbb{F}_{\!p}} \!\Sy_p$-module), \item
$ \begin{array}[t]{lclll} e_k&:=&\hat e_k\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_k,P_k)&\text{for $k∈[1,p-1]$,} \\ e_{1,1}&:=& \hat e_{1,1}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_1,P_1),\\ e_{p-1,p-1} &:=& \hat e_{p-1,p-1}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_{p-1},P_{p-1}), \\ e_{k+1,k}&:=& \hat e_{k+1,k}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_k,P_{k+1})&\text{for $k∈[1,p-2]$,} \\ e_{k,k+1} &:=& \hat e_{k,k+1}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_{k+1},P_k) &\text{for $k∈[1,p-2]$,} \\ ε &:=& \hat{ε}\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}&∈\Hom_{ {\mathbb{F}_{\!p}} \Sy_p}(P_1, {\mathbb{F}_{\!p}} ) . \end{array}
$ \end{itemize}
The complex
\begin{align} \label{prfp} \pres {\mathbb{F}_{\!p}} := (\pres \mathbb{Z}_{(p)})\underset{Λ}{\otimes} \leftidx{_Λ}{\bar{Λ}}{_{\bar{Λ}}}= (\cdots\xrightarrow{d_3} {\pr}_2\xrightarrow{ d_2}{\pr}_1\xrightarrow{ d_1}{\pr}_0\xrightarrow{0=d_0} 0\rightarrow\cdots), \end{align} \begin{align*} {\pr}_i :=\,& \begin{cases} P_{ω(i)} & i\geq 0 \\ 0 & i<0 \end{cases} & d_i :=\,& \begin{cases} e_{ω(i-1),ω(i)}: P_{ω(i)} →P_{ω(i-1)} & i\geq 1\\ 0 & i\leq 0, \end{cases} \end{align*} is a projective resolution of $ {\mathbb{F}_{\!p}} $ with augmentation $ε:P_1→ {\mathbb{F}_{\!p}} $. More explicitly, $\pres {\mathbb{F}_{\!p}} $ is \begin{align*} …\rightarrow & \underbrace{P_2}_{\mathclap{l+1}}\xrightarrow{ e_{1,2}} \underbrace{P_1}_{\mathclap{l=2(p-1)}} \xrightarrow{ e_{1,1}} \underbrace{P_1}_{\mathclap{(p-2)+p-1}} \xrightarrow{ e_{2,1}} \underbrace{P_2}_{\mathclap{(p-2)+p-2}} \rightarrow … \rightarrow \underbrace{P_{p-2}}_{\mathclap{p=(p-2)+2}} \xrightarrow{ e_{p-1,p-2}} \underbrace{P_{p-1}}_{\mathclap{(p-2)+1}} \nonumber \\
&\xrightarrow{ e_{p-1,p-1}} \underbrace{P_{p-1}}_{\mathclap{p-2}} \xrightarrow{ e_{p-2,p-1}} \underbrace{P_{p-2}}_{p-3}\rightarrow … \rightarrow \underbrace{P_2}_{1} \xrightarrow{ e_{1,2}} \underbrace{P_1}_0 \rightarrow 0. \end{align*} \end{lemma} \begin{lemma} \label[lemma]{lem:relfp} Recall that $p\geq 3$ is a prime. \begin{enumerate}[\rm (a)] \item We have the relations \begin{equation*} \begin{array}{lclr} e_{1,1} + e_{1,2} \circ e_{2,1} &=& 0\\
e_{k,k-1}\circ e_{k-1,k} + e_{k,k+1}\circ e_{k+1,k} &=& 0 &\text{ for } k∈[2,p-2]\\
e_{p-1,p-2} \circ e_{p-2,p-1} + e_{p-1,p-1} &=& 0\\
ε \circ e_{1,1} &=& 0 \end{array} \end{equation*} and $e_k$ is the identity on $P_k$ for $k∈[1,p-1]$. \item Given $k∈[2,p-1]$, we have $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P_k, {\mathbb{F}_{\!p}} ) = \{0\}$.
\item Given $k,k'∈[1,p-1]$ such that $|k-k'|>1$, we have $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P_k,P_{k'}) = \{0\}$. \item The set $\{ε\}$ is an $ {\mathbb{F}_{\!p}} $-basis of $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P_1, {\mathbb{F}_{\!p}} )$. \end{enumerate} \end{lemma} Assertion (a) results from \cref{lem:zprel}.
Assertions (b), (c) and (d) are derived from corresponding assertions over $\mathbb{Z}_{(p)}\!\Sy_p$ using $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P/pP, M/pM) \simeq \Hom_{\mathbb{Z}_{(p)}\!\Sy_p}(P,M)/p\Hom_{\mathbb{Z}_{(p)}\!\Sy_p}(P,M)$ for $\mathbb{Z}_{(p)}\!\Sy_p$-modules $P$ and $M$, where $P$ is projective.
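Note that \eqref{omega} gives $ω(i+l) = ω(i)$ for $i\geq 0$, so ${\pr}_{i+l} = {\pr}_i$ for $i\geq 0$ and $d_{i+l} = d_i$ for $i\geq 1$; in other words, the resolution \eqref{prfp} (and likewise \eqref{presform1}) is periodic with period $l = 2(p-1)$.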
\section{\texorpdfstring{${\rm A}_\infty$}{A(oo)}-algebras} \label{secainf}
\subsection{Definitions, General theory} \label{generaltheory} In this subsection, we review results presented in \cite{Ke01} and we fix notation.
Let $R$ be a commutative ring. We understand linear maps between $R$-modules to be $R$-linear. Tensor products are tensor products over $R$. By graded $R$-modules we understand $\mathbb{Z}$-graded $R$-modules. \kommentar{
\begin{Def}
A \textit{graded $R$-module $V$} is a $R$-module of the form $V = \oplus_{q∈\mathbb{Z}} V^q$.
An element $v_q∈V^q$, $q∈\mathbb{Z}$ is said to be of degree $q$. An element $v∈V$ is called \textit{homogeneous} if there is an integer $q∈\mathbb{Z}$ such that $v∈V^q$. For homogeneous elements $v$ resp.\ graded maps $g$ (see below), we denote their degrees by $|v|$ resp.\ $|g|$. \end{Def}
\begin{Def}
Let $A = \oplus_{q∈\mathbb{Z}}A^q$, $B = \oplus_{q∈\mathbb{Z}}B^q$ be two graded $R$-modules. A \textit{graded map of degree $z∈\mathbb{Z}$} is a linear map $g:A→B$ such that \mbox{$\im g\big|_{A^q} \subseteq B^{q+z}$} for $q∈\mathbb{Z}$.
\end{Def}
\begin{Def} Let $A=\oplus_{q∈\mathbb{Z}}A^q$, $B=\oplus_{q∈\mathbb{Z}}B^q$ be two graded $R$-modules. We have \begin{align*} A\otimes B =\,& \bigoplus_{z_1,z_2∈\mathbb{Z}} A^{z_1}\otimes B^{z_2} = \bigoplus_{q∈\mathbb{Z}}\left(\bigoplus_{z_1+z_2=q} A^{z_1}\otimes B^{z_2}\right). \end{align*} As we understand the direct sums to be internal direct sums in $A\otimes B$ and understand $A^{z_1}\otimes B^{z_2}$ to be the linear span of the set $\{a\otimes b∈A\otimes B \mid a ∈ A^{z_1},b∈A^{z_2}\}$, we have equations in the above, not just isomorphisms.
We then set $A\otimes B$ to be graded by $A\otimes B = \bigoplus_{q∈\mathbb{Z}}(A\otimes B)^q$, where $(A\otimes B)^q:= \bigoplus_{z_1+z_2=q} A^{z_1}\otimes B^{z_2}$.
Moreover, we grade the direct sum \begin{align*} A\oplus B =\,& \bigoplus_{q∈\mathbb{Z}} (A^q\oplus B^q)
\end{align*} by $(A\oplus B)^q := A^q\oplus B^q$.
\end{Def}
}
\begin{Def} In the definition of the tensor product of graded maps, we implement the \textit{Koszul sign rule}: Let $A_1,A_2,B_1,B_2$ be graded $R$-modules and $g:A_1→B_1$, $h:A_2→B_2$ graded maps. Then we set for homogeneous elements $x∈A_1,y∈A_2$ \begin{align} \label{koszulraw}
(g\otimes h)(x\otimes y) := (-1)^{|h|\cdot|x|} g(x)\otimes h(y). \end{align}
\end{Def}
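For instance, for a graded map $h$ of degree $1$ and the identity $1$ of degree $0$, we obtain $(h\otimes 1)(x\otimes y) = h(x)\otimes y$ and $(1\otimes h)(x\otimes y) = (-1)^{|x|}\, x\otimes h(y)$ for homogeneous $x$, $y$; i.e.\ a sign appears whenever a map of odd degree is moved past a homogeneous element of odd degree.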
\kommentar{ \begin{bem} It is known that for graded $R$-modules $A,B,C$, the map \begin{align} \label{theta} \begin{array}{rccc} Θ:&(A\otimes B) \otimes C& \longrightarrow & A\otimes (B\otimes C)\\ &(a\otimes b) \otimes c &\longmapsto & a\otimes (b\otimes c) \end{array} \end{align} is an isomorphism of $R$-modules. Because of the following, $Θ$ is homogeneous of degree $0$.
\begin{align*} ((A\otimes B)\otimes C)^q =\,& \bigoplus_{\mathclap{y+z_3=q}} (A\otimes B)^y \otimes C^{z_3} = \bigoplus_{y+z_3=q}\bigoplus_{z_1+z_2= y} (A^{z_1}\otimes B^{z_2})\otimes C^{z_3}\\ =\,& \bigoplus_{\mathclap{z_1+z_2+z_3=q}} (A^{z_1}\otimes B^{z_2})\otimes C^{z_3}\\ (A\otimes (B\otimes C))^q =\,& \bigoplus_{\mathclap{z_1+y=q}} A^{z_1}\otimes (B\otimes C)^y = \bigoplus_{z_1+y=q}\bigoplus_{z_2+z_3=y} A^{z_1}\otimes (B^{z_2}\otimes C^{z_3})\\ =\,& \bigoplus_{\mathclap{z_1+z_2+z_3=q}} A^{z_1}\otimes (B^{z_2}\otimes C^{z_3}) \end{align*}
Let $A_1,A_2,B_1,B_2,C_1,C_2$ be graded $R$-modules, $f:A_1→A_2$, $g:B_1→B_2$, $h:C_1→C_2$ graded maps. For homogeneous elements $x∈A_1 $, $y∈B_1$, $z∈C_1$, we have \begin{align*}
((f\otimes g)\otimes h)((x\otimes y)\otimes z) =\,& (-1)^{|x\otimes y|\cdot|h|}((f\otimes g)(x\otimes y))\otimes h(z)\\
=\,& (-1)^{(|x|+|y|)|h|+ |x|\cdot |g|} (f(x)\otimes g(y))\otimes h(z)\\
(f\otimes(g\otimes h))(x\otimes (y\otimes z)) =\,& (-1)^{|x|\cdot |g\otimes h|} f(x)\otimes((g\otimes h)(y\otimes z))\\
=\,& (-1)^{|x|(|g|+|h|) + |y|\cdot|h|} f(x)\otimes(g(y)\otimes h(z)) \\
=\,& (-1)^{(|x|+|y|)|h|+ |x|\cdot |g|} f(x)\otimes(g(y)\otimes h(z)). \end{align*} Thus we have the following commutative diagram ($Θ_1$ and $Θ_2$ are derived from \eqref{theta}) \begin{align*} \xymatrix@C=2pc{ (A_1\otimes B_1)\otimes C_1\ar[r]^{Θ_{1}}\ar[d]^{(f\otimes g)\otimes h} & A_1\otimes (B_1\otimes C_1)\ar[d]^{f\otimes (g\otimes h)}\\ (A_2\otimes B_2)\otimes C_2\ar[r]^{Θ_{2}} & A_2\otimes (B_2\otimes C_2) } \end{align*} It is therefore valid to use $Θ$ as an identification and to omit the brackets for the tensorization of graded $R$-modules and the tensorization of graded maps.
\end{bem} }
Concerning the signs in the definition of ${\rm A}_∞$-algebras and ${\rm A}_∞$-morphisms, we follow the variant given e.g.\ in \cite{Le03} and \cite{Ka80}.
\begin{Def} Let $n∈\mathbb{Z}_{\geq 0}\cup \{∞\}$. \begin{enumerate}[(i)]
\item Let $A$ be a graded $R$-module. A \textit{pre-${\rm A}_n$-structure on $A$} is a family of graded maps $(m_k:A^{\otimes k}→A)_{k∈[1,n]}$ with $|m_k|=2-k$ for $k∈[1,n]$. The tuple $(A,(m_k)_{k∈[1,n]})$ is called a pre-${\rm A}_n$-algebra.
\item Let $A$, $A'$ be graded $R$-modules. A \textit{pre-${\rm A}_n$-morphism from $A'$ to $A$} is a family of graded maps $(f_k:A'^{\otimes k}→A)_{k∈[1,n]}$ with $|f_k|=1-k$ for $k∈[1,n]$. \end{enumerate} \end{Def}
\begin{Def} Let $n∈\mathbb{Z}_{\geq 0}\cup\{∞\}$. \begin{enumerate}[(i)] \item An \textit{${\rm A}_n$-algebra} is a pre-${\rm A}_n$-algebra $(A,(m_k)_{k∈[1,n]})$ such that for \mbox{$k∈[1,n]$} \refstepcounter{equation}\label{ainfrel} \begin{align} \sum_{\natop{k=r+s+t,}{r,t\geq 0, s\geq 1}} (-1)^{rs+t}m_{r+1+t}\circ (1^{\otimes r}\otimes m_s\otimes 1^{\otimes t})=0 . \tag*{(\theequation)[$k$]} \end{align} In abuse of notation, we sometimes abbreviate $A = (A,(m_k)_{k\geq 1})$ for ${\rm A}_∞$-algebras. \item
Let $(A',(m'_k)_{k∈[1,n]})$ and $(A,(m_k)_{k∈[1,n]})$ be ${\rm A}_n$-algebras. An \textit{${\rm A}_n$-morphism} or \textit{morphism of ${\rm A}_n$-algebras} from $(A',(m'_k)_{k∈[1,n]})$ to $(A,(m_k)_{k∈[1,n]})$ is a pre-${\rm A}_n$-morphism $(f_k)_{k∈[1,n]}$ such that for $k∈[1,n]$, we have \refstepcounter{equation}\label{finfrel} \begin{align}
\sum_{\mathclap{\substack{k=r+s+t\\r,t\geq 0,s\geq 1}}} (-1)^{rs+t} f_{r+1+t}\circ (1^{\otimes r}\otimes m'_s\otimes 1^{\otimes t}) = \sum_{\mathclap{\substack{1\leq r\leq k\\ i_1+…+i_r=k\\ i_s\geq 1}}} (-1)^v m_r\circ (f_{i_1}\otimes f_{i_2}\otimes … \otimes f_{i_r}), \tag*{(\theequation)[$k$]} \end{align} where
$v := \sum_{1\leq t < s\leq r}(1-i_s)i_t$.
\end{enumerate} \end{Def}
\begin{bsp}[dg-algebras] \label[bsp]{ex:dg} Let $(A,(m_k)_{k\geq 1})$ be an ${\rm A}_\infty$-algebra. If $m_n=0$ for $n\geq 3$ then $A$ is called a \textit{differential graded algebra} or \textit{dg-algebra}. In this case the equations \eqref{ainfrel}[$n$] for $n\geq 4$ become trivial: we have $(r+1+t)+s = n+1\geq 5$, hence $r+1+t\geq 3$ or $s\geq 3$, and therefore $m_{r+1+t}=0$ or $m_s=0$. So all summands in \eqref{ainfrel}[$n$] are zero for $n\geq 4$. Here are the equations for $n∈\{1,2,3\}$: \begin{align*} \eqref{ainfrel}[1]: && 0 =\,& m_1\circ m_1\\ \eqref{ainfrel}[2]: && 0 =\,& m_1\circ m_2 - m_2\circ (m_1\otimes 1+1\otimes m_1)\\ \eqref{ainfrel}[3]: && 0 =\,& m_1\circ m_3 + m_2\circ (1\otimes m_2-m_2\otimes1)\\ &&& + m_3\circ (m_1\otimes 1^{\otimes 2}+1\otimes m_1\otimes 1+1^{\otimes 2}\otimes m_1) \\&& \ovs{m_3=0}& m_2\circ (1\otimes m_2-m_2\otimes1) \end{align*}
So \eqref{ainfrel}[$1$] ensures that $m_1$ is a differential. Moreover, \eqref{ainfrel}[$3$] states that $m_2$ is an associative binary operation, since for homogeneous $x,y,z∈A$ we have $0=m_2\circ \mbox{$(1\otimes m_2-m_2\otimes1)(x\otimes y\otimes z)$} = m_2(x\otimes m_2(y\otimes z) - m_2(x\otimes y)\otimes z)$, where because of $|m_2|=0$ there are no additional signs caused by the Koszul sign rule. Equation \eqref{ainfrel}[$2$] is the Leibniz rule.
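Written out for homogeneous $x,y∈A$ and taking the Koszul sign rule \eqref{koszulraw} into account, \eqref{ainfrel}[$2$] reads
\begin{align*}
m_1(m_2(x\otimes y)) =\,& m_2(m_1(x)\otimes y) + (-1)^{|x|} m_2(x\otimes m_1(y)),
\end{align*}
i.e.\ $m_1$ is a graded derivation with respect to the multiplication $m_2$.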
\end{bsp} \begin{bsp}[${\rm A}_n$-morphisms induce complex morphisms] \label[bsp]{ex:cc} $ $\\* Let $n∈\mathbb{Z}_{\geq 1}\cup \{∞\}$. Let $(A', (m'_k)_{k∈[1,n]})$ and $(A, (m_k)_{k∈[1,n]})$ be two ${\rm A}_n$-algebras and let $(f_k)_{k∈[1,n]}: (A', (m'_k)_{k∈[1,n]}) → (A, (m_k)_{k∈[1,n]})$ be an ${\rm A}_n$-morphism.
By \eqref{ainfrel}[$1$], $(A',m'_1)$ and $(A,m_1)$ are complexes. Equation \eqref{finfrel}[$1$] is \begin{align*}
f_1\circ m'_1 =\,& m_1\circ f_1. \end{align*} Thus $f_1:(A',m'_1)→(A,m_1)$ is a complex morphism.
For $n\geq 2$, we also have \eqref{finfrel}[$2$]: \begin{align} \label{finfrel2}
f_1\circ m'_2 - f_2\circ(m'_1\otimes 1 + 1\otimes m'_1) =\,& m_1\circ f_2 + m_2\circ (f_1\otimes f_1) \end{align} \end{bsp} Recall the conventions concerning $\Hom_B^k(C,C')$. \begin{lemma}[{cf.\ e.g.\ \cite[Section 3.3]{Ke01}}] \label[lemma]{lem:cai} Let $B$ be an (ordinary) $R$-algebra and $M=((M_i)_{i∈\mathbb{Z}}, (d_i)_{i∈\mathbb{Z}})$ a complex of $B$-modules, that is a sequence $(M_i)_{i∈\mathbb{Z}}$ of $B$-modules and $B$-linear maps $d_i:M_i→M_{i-1}$ such that $d_{i-1}\circ d_i = 0$ for all $i∈\mathbb{Z}$. Let \begin{align*} \Hom^i_B(M,M):=\,& \prod_{z∈\mathbb{Z}}\Hom_B(M_{z+i},M_z)\\ =\,& \{g=(g_z)_{z∈\mathbb{Z}} \mid g_z∈\Hom_B(M_{z+i},M_z)\text{ for }z∈\mathbb{Z}\}. \end{align*} Then \begin{align*} A=\Hom^*_B(M,M) := \bigoplus_{i∈\mathbb{Z}} \Hom^i_B(M,M) \end{align*}
is a graded $R$-module. We have $d := (d_{z+1})_{z∈\mathbb{Z}}=\sum_{i∈\mathbb{Z}} \lfloor d_{i+1}\rfloor_{i+1}^{i} ∈\Hom^1_B(M,M)$. We define $m_1:=d_{\Hom^*(M,M)}:A→A$, that is for homogeneous $g∈A$ we have \begin{align*}
m_1(g) = d \circ g -(-1)^{|g|} g\circ d. \end{align*} We define $m_2:A^{\otimes 2}→A$ to be composition, i.e.\ for homogeneous $g,h∈A$ \begin{align*} m_2(g\otimes h) := g \circ h. \end{align*} For $n\geq 3$ we set $m_n:A^{\otimes n}→A$, $m_n=0$. Then $(m_n)_{n\geq 1}$ is an ${\rm A}_\infty$-algebra structure on $A=\Hom^*_B(M,M)$. More precisely, $(A, (m_n)_{n\geq 1})$ is a dg-algebra. \end{lemma} \kommentar{ \begin{proof}
Because of $|d|=1$ we have $|m_1| = 1 = 2-1$. The graded map $m_2$ has degree $0 = 2-2$. The other maps $m_n$ are zero and have therefore automatically correct degree. As discussed in \cref{ex:dg} we only need to check \eqref{ainfrel}[$n$] for $n=1,2,3$. Equation \eqref{ainfrel}[$1$] holds because for homogeneous $g∈A$ we have \begin{align*}
m_1(m_1(g)) =\,& m_1[d\circ g - (-1)^{|g|}g\circ d]\\
=\,& d\circ[d\circ g - (-1)^{|g|}g\circ d] - (-1)^{|g|+1}[d\circ g - ({-}1)^{|g|}g\circ d]\circ d\\
\ovs{d^2=0}& -(-1)^{|g|}d\circ g\circ d - (-1)^{|g|+1}d\circ g\circ d = 0. \end{align*} Concerning \eqref{ainfrel}[$2$], we have for homogeneous $g,h∈A$ \begin{align*}
(m_2\circ (m_1\otimes 1+&1\otimes m_1))(g\otimes h)= m_2(m_1(g)\otimes h + (-1)^{|g|}g\otimes m_1(h))\\
=\,& (d\circ g - (-1)^{|g|}g\circ d)\circ h + (-1)^{|g|}g\circ(d\circ h - (-1)^{|h|}h\circ d)\\
=\,& d\circ g \circ h - (-1)^{|g|+|h|}g\circ h\circ d\\ =\,& (m_1\circ m_2 )(g\otimes h). \end{align*} The map $m_2$ is induced by the composition of morphisms which is associative. As discussed in \cref{ex:dg}, equation \eqref{ainfrel}[$3$] holds.
\end{proof} }
\begin{bem} \label[bem]{bem:di} In $\Hom^*(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$ we have (cf. \eqref{prfp}) \begin{align*} d =\,& \sum_{i\geq 0} \lfloor e_{ω(i),ω(i+1)}\rfloor_{i+1}^i\,. \end{align*} \end{bem}
\begin{Def}[Homology of ${\rm A}_∞$-algebras, quasi-isomorphisms, minimality, minimal models] \label[Def]{def:homology} As $m_1^2=0$ (cf.\ \eqref{ainfrel}[$1$]) and
$|m_1|=1$, we have the complex
\[\cdots → A^{i-1}\xrightarrow{m_1|_{A^{i-1}}}A^{i}\xrightarrow{m_1|_{A^i}} A^{i+1}→\cdots\,.\]
We define ${\operatorname{H}}^k A:= \ker(m_1|_{A^k})/\im(m_1|_{A^{k-1}})$ and ${\operatorname{H}}^* A := \bigoplus_{k∈\mathbb{Z}} {\operatorname{H}}^k A$, which gives the homology of $A$ the structure of a graded $R$-module.
A morphism of ${\rm A}_∞$-algebras $(f_k)_{k\geq 1}: (A', (m'_k)_{k\geq 1}) → (A, (m_k)_{k\geq 1})$ is called a \textit{quasi-isomorphism} if the morphism of complexes $f_1:(A', m'_1)→(A,m_1)$ (cf. \cref{ex:cc}) is a quasi-isomorphism.
An ${\rm A}_∞$-algebra is called \textit{minimal} if $m_1=0$. If $A$ is an ${\rm A}_∞$-algebra and $A'$ is a minimal ${\rm A}_∞$-algebra quasi-isomorphic to $A$, then $A'$ is called a \textit{minimal model of $A$}. \end{Def}
The existence of minimal models is assured by the following theorem. \begin{tm}(minimality theorem, cf.\ \cite{Ke01ad} (history), \cite{Ka82}, \cite{Ka80}, \cite{Pr84}, \cite{GuLaSt91}, \cite{JoLa01}, \cite{Me99}, … ) \label[tm]{tm:kadeishvili} Let $(A,(m_k)_{k\geq 1})$ be an ${\rm A}_\infty$-algebra such that the homology ${\operatorname{H}}^* A$ is a projective $R$-module. Then there exists an ${\rm A}_\infty$-algebra structure $(m'_k)_{k\geq 1}$ on ${\operatorname{H}}^*A$ and a quasi-isomorphism of ${\rm A}_∞$-algebras $(f_k)_{k\geq 1}:({\operatorname{H}}^* A,(m'_k)_{k\geq 1})→(A,(m_k)_{k\geq 1})$, such that \begin{itemize} \item $m'_1 = 0$ and \item the complex morphism $f_1:({\operatorname{H}}^*A,m'_1)→(A, m_1)$ induces the identity in homology. I.e.\ each element $x∈{\operatorname{H}}^* A$, which is a homology class of $(A,m_1)$, is mapped by $f_1$ to a representing cycle. \end{itemize}
\end{tm}
For constructing ${\rm A}_∞$-structures induced by another ${\rm A}_∞$-algebra, we have the following lemma. \begin{lemma}[{cf.\ \cite[Proof of Theorem 1]{Ka80}}] \label[lemma]{lem:aaut} Let $n∈\mathbb{Z}_{\geq 1}\cup \{∞\}$. Let $(A',(m'_k)_{k∈[1,n]})$ be a pre-${\rm A}_n$-algebra. Let $(A,(m_k)_{k∈[1,n]})$ be an ${\rm A}_n$-algebra. Let $(f_k)_{k∈[1,n]}$ be a pre-${\rm A}_n$-morphism from $A'$ to $A$ such that \eqref{finfrel}$[k]$ holds for $k∈[1,n]$. Suppose that $f_1$ is injective.\\* Then $(A',(m'_k)_{k∈[1,n]})$ is an ${\rm A}_n$-algebra and $(f_k)_{k∈[1,n]}$ is a morphism of ${\rm A}_n$-algebras from $(A',(m'_k)_{k∈[1,n]})$ to $(A,(m_k)_{k∈[1,n]})$. \end{lemma} This results from the bar construction and a straightforward induction on $n$.
It suffices to prove by induction on $k∈[0,n]$ that $(b')^2\big|_{TSA'_{\leq k}} = 0$, cf. \cref{tm:stasheff}.\\* For $k=0$, there is nothing to prove.
For the induction step, suppose that $b'\,^2\big|_{TSA'_{\leq k}} = 0$. Then by \cref{lem:bfind}(i), we have $\im(b'\,^2 \circ ι'_{k+1})\subseteq SA$. Thus $0 = b^2\circ F \circ ι'_{k+1} \overset{\text{L.}\ref{lem:stasheff2}}{=} F\circ b'\,^2 \circ ι'_{k+1} = F_1\circ b'\,^2 \circ ι'_{k+1}$. As the injectivity of $f_1$ implies the injectivity of $F_1$, we have $b'\,^2\circ ι'_{k+1} =0$ and thus $b'\,^2\big|_{TSA'_{\leq k+1}} = 0$.
\end{proof} }
\begin{lemma}[{\cite[Theorem 5]{Ve10}}] \label[lemma]{lem:finffinite} Let $R$ be a commutative ring and $(A,(m_n)_{n\geq 1})$ be a dg-algebra (over $R$). Suppose given a graded $R$-module $B$ and graded maps $f_n:B^{\otimes n}→A$, $m'_n:B^{\otimes n}→B$ for $n\geq 1$. Suppose given $k\geq 1$ such that we have $f_i=0$ for $i\geq k$ and $m'_i=0$ for $i\geq k+1$,
and \eqref{finfrel}$[n]$ is satisfied for $n∈[1,2k-2]$. Then \eqref{finfrel}$[n]$ is satisfied for all $n\geq 1$. \end{lemma}
\subsection{The homology of \texorpdfstring{$\Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$}{Hom*(FpSp)(PResFp,PResFp)}} \label{subsec:homology} We need a well-known result of homological algebra in a particular formulation: \begin{lemma} \label[lemma]{prpr-prm} Let $F$ be a field. Let $B$ be an $F$-algebra. Let $M$ be a $B$-module. Let $Q = (\cdots \rightarrow Q_2 \xrightarrow{d_2} Q_1 \xrightarrow{d_1} Q_0 → 0→ \cdots)$ be a projective resolution of $M$ with augmentation $ε:Q_0→M$. Then, for $k∈\mathbb{Z}$, we have maps \begin{align*} Ψ_k:\Hom^k_B(Q,Q) &\xrightarrow{}\Hom^k_B(Q, M) := \Hom_B(Q_k,M)\\ (g_i:Q_{i+k}→Q_i)_{i∈\mathbb{Z}} &\mapsto ε\circ g_0 \end{align*} The right side is equipped with the differentials (dualization of $d_{k+1}$) \begin{align*} (d_{k+1})^*:\Hom_B(Q_k,M) &\rightarrow \Hom_B(Q_{k+1},M)\\ g & \mapsto (-1)^{k+1} g\circ d_{k+1} \end{align*} and the left side is equipped with the differential $m_1$ of its dg-algebra structure, cf.\ \cref{lem:cai}.
Then $(Ψ_k)_{k∈\mathbb{Z}}$ becomes a complex morphism from the complex $\Hom^*_B(Q,Q)$ to the complex $\Hom^*_B(Q, M)$ that induces isomorphisms $\bar{Ψ}_k$ of $F$-vector spaces on the homology \begin{align*} \bar{Ψ}_k:{\operatorname{H}}^k\Hom^*_B(Q,Q) & \xrightarrow{\simeq}{\operatorname{H}}^k \Hom^*_B(Q, M)\\ \overline{(g_i:Q_{i+k}→Q_i)_{i∈\mathbb{Z}}}&\mapsto \overline{ε\circ g_0} \end{align*}
\end{lemma} \cref{prpr-prm} is \cite[§5 Proposition 4a)]{Bo80} applied to the quasi-isomorphism induced by the augmentation, cf.\ \cite[§3 Définition 1]{Bo80}.
Recall the notation $\lfloor x\rfloor_y^z$ for the description of elements of $\Hom_B^k(C,C')$. \begin{pp} \label[pp]{pp:iota} Recall that $p\geq 3$ is a prime and $l=2(p-1)$. \\* Write \mbox{$A := \Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$}. Let \begin{align*} ι:=\,&\sum\nolimits_{i\geq 0} \lfloor e_{ω(i)} \rfloor_{i+l}^i =\sum\nolimits_{i\geq 0}\sum\nolimits_{k=0}^{l-1}\lfloor e_{ω(k)} \rfloor_{(i+1)l+k}^{il+k} ∈ A^l \\
χ :=\,& \sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{il+l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{il+l-1+k}^{il + k}\right\rgroup\right.\\ &\left. + \lfloor e_{p-1}\rfloor_{il+l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{il+l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)∈A^{l-1}. \end{align*} \begin{enumerate}[\rm (a)] \item For $j\geq 0$, we have $
ι^j = \sum\nolimits_{i\geq 0} \lfloor e_{ω(i)} \rfloor_{i+jl}^i= \sum\nolimits_{i\geq 0}\sum\nolimits_{k=0}^{l-1}\lfloor e_{ω(k)} \rfloor_{(i+j)l+k}^{il+k}\,. $ \item Suppose given $y\geq 0$. Let $h∈A^y$ be $l$-periodic, that is $h=\sum\nolimits_{i\geq 0} \sum\nolimits_{k=0}^{l-1} \lfloor h_k\rfloor_{il+k+y}^{il + k}$. Then for $j\geq 0$, we have \begin{align*} h\circ ι^j = ι^j\circ h = \sum\nolimits_{i\geq 0} \sum\nolimits_{k=0}^{l-1} \lfloor h_k\rfloor_{(i+j)l + k +y}^{il+k}∈A^{y+jl}. \end{align*} \item Suppose given $y∈\mathbb{Z}$. For $h∈A^y$ and $j\geq 0$, we have $m_1(h\circ ι^j) = m_1(h)\circ ι^j$. \item For $j\geq 0$, we have $m_1(ι^j)=0$. Thus $ι^j$ is a cycle. \item For $j\geq 0$, we have \begin{align*} χι^j :=\,& χ\circ ι^j = ι^j\circ χ\\ =\,& \sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{(i+j+1)l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{(i+j+1)l-1+k}^{il + k}\right\rgroup\right.\\ &\left. + \lfloor e_{p-1}\rfloor_{(i+j+1)l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{(i+j+1) l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)∈A^{jl+l-1}. \end{align*} For convenience, we also define $χ^0ι^j:= ι^j$ and $χ^1ι^j:=χι^j=χ\circ ι^j$ for $j\geq 0$. \item For $j\geq 0$, we have $m_1(χι^j) = 0$. Thus $χι^j$ is a cycle. \item Suppose given $k∈\mathbb{Z}$. A $ {\mathbb{F}_{\!p}} $-basis of ${\operatorname{H}}^kA$ is given by \begin{align*}
\{\overline{ι^j}\} & \text{ if $k=jl$ for some $j\geq 0$}\\ \{\overline{χι^j}\} & \text{ if $k=jl+l-1$ for some $j\geq 0$} \\ ∅ &\text{ else}.
\end{align*}
Thus the set $\mathfrak{B}:=\{\overline{ι^j}\mid j\geq 0\}\sqcup \{\overline{χι^j}\mid j\geq 0\}$ is an $ {\mathbb{F}_{\!p}} $-basis of ${\operatorname{H}}^*A = \bigoplus_{z∈\mathbb{Z}} {\operatorname{H}}^zA$.
\end{enumerate} \end{pp}
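To illustrate the notation, we record the case $p=3$ (so $l=4$), obtained by specializing the formulas above: here
\begin{align*}
ι =\,& \sum\nolimits_{i\geq 0}\left(\lfloor e_1\rfloor_{4i+4}^{4i} + \lfloor e_2\rfloor_{4i+5}^{4i+1} + \lfloor e_2\rfloor_{4i+6}^{4i+2} + \lfloor e_1\rfloor_{4i+7}^{4i+3}\right) ∈ A^{4},\\
χ =\,& \sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{4i+3}^{4i} + \lfloor e_{2,1}\rfloor_{4i+4}^{4i+1} + \lfloor e_{2}\rfloor_{4i+5}^{4i+2} + \lfloor e_{1,2}\rfloor_{4i+6}^{4i+3}\right) ∈ A^{3},
\end{align*}
and the basis $\mathfrak{B}$ of ${\operatorname{H}}^*A$ consists of the classes $\overline{ι^j}$ in degree $4j$ and $\overline{χι^j}$ in degree $4j+3$ for $j\geq 0$.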
\kommentar{
Before we proceed we display $ι$ and $χ$ for the case $p = 5$ as an example:\\* The period is of length $l = 2p - 2 = 2\cdot 5 - 2 = 8$. The terms inside circles denote the degrees.
For $χ$, the differentials on the right side are negated to obtain a commutative diagram (We have $0=m_1(χ) = d\circ χ + χ\circ d$ because $|χ|$ is odd)
\begin{center} \begin{footnotesize} \[ \xymatrix@C=0.5pc@!R=0.43pc{ & \pres( {\mathbb{F}_{\!5}} )[8]\ar[r]^-\iota \ar@{}[d]^*--{\rotatebox{90}{\hspace{3mm}$=$\ru{2}}} & \pres( {\mathbb{F}_{\!5}} ) \ar@{}[d]^*--{\rotatebox{90}{\hspace{3mm}$=$\ru{2}}}\\
& \rotatebox{90}{$\cdots)$}\ar[d] & \rotatebox{90}{$\cdots)$}\ar[d] \\ *+[F-:<5pt>]{ 8} & P_1\ar[r]^{e_1} \ar[d]_{e_{1,1}} & P_1\ar[d]^{e_{1,1}} \\ *+[F-:<5pt>]{ 7} & P_1\ar[r]^{e_1} \ar[d]_{e_{2,1}} & P_1\ar[d]^{e_{2,1}} \\ *+[F-:<5pt>]{ 6} & P_2\ar[r]^{e_2} \ar[d]_{e_{3,2}} & P_2\ar[d]^{e_{3,2}} \\ *+[F-:<5pt>]{ 5} & P_3\ar[r]^{e_3} \ar[d]_{e_{4,3}} & P_3\ar[d]^{e_{4,3}} \\ *+[F-:<5pt>]{ 4} & P_4\ar[r]^{e_4} \ar[d]_{e_{4,4}} & P_4\ar[d]^{e_{4,4}} \\ *+[F-:<5pt>]{ 3} & P_4\ar[r]^{e_4} \ar[d]_{e_{3,4}} & P_4\ar[d]^{e_{3,4}} \\ *+[F-:<5pt>]{ 2} & P_3\ar[r]^{e_3} \ar[d]_{e_{2,3}} & P_3\ar[d]^{e_{2,3}} \\ *+[F-:<5pt>]{ 1} & P_2\ar[r]^{e_2} \ar[d]_{e_{1,2}} & P_2\ar[d]^{e_{1,2}} \\ *+[F-:<5pt>]{ 0} & P_1\ar[r]^{e_1} \ar[d]_{e_{1,1}} & P_1\ar[d] \\ *+[F-:<5pt>]{-1} & P_1\ar[r] \ar[d]_{e_{2,1}} & 0 \ar[d] \\ *+[F-:<5pt>]{-2} & P_2\ar[r] \ar[d]_{e_{3,2}} & 0 \ar[d] \\ *+[F-:<5pt>]{-3} & P_3\ar[r] \ar[d]_{e_{4,3}} & 0 \ar[d] \\ *+[F-:<5pt>]{-4} & P_4\ar[r] \ar[d]_{e_{4,4}} & 0 \ar[d] \\ *+[F-:<5pt>]{-5} & P_4\ar[r] \ar[d]_{e_{3,4}} & 0 \ar[d] \\ *+[F-:<5pt>]{-6} & P_3\ar[r] \ar[d]_{e_{2,3}} & 0 \ar[d] \\ *+[F-:<5pt>]{-7} & P_2\ar[r] \ar[d]_{e_{1,2}} & 0 \ar[d] \\ *+[F-:<5pt>]{-8} & P_1\ar[r] \ar[d] & 0 \ar[d] \\ *+[F-:<5pt>]{-9} & 0 \ar[r]\ar[d] & 0 \ar[d] \\
& \rotatebox{-90}{$\cdots)$} & \rotatebox{-90}{$\cdots)$} \\ } \hspace{2cm}
\xymatrix@C=0.5pc@!R=0.43pc{
& \pres( {\mathbb{F}_{\!5}} )[8-1]\ar[r]^-\chi \ar@{}[d]^*--{\rotatebox{90}{\hspace{3mm}$=$\ru{2}}} & \pres( {\mathbb{F}_{\!5}} ) \ar@{}[d]^*--{\rotatebox{90}{\hspace{3mm}$=$\ru{2}}}\\
& \rotatebox{90}{$\cdots)$}\ar[d] & \rotatebox{90}{$\cdots)$}\ar[d] \\ *+[F-:<5pt>]{ 9} & P_1\ar[r]^{e_{2,1}}\ar[d]_{e_{1,1}} & P_2\ar[d]^{-e_{1,2}} \\ *+[F-:<5pt>]{ 8} & P_1\ar[r]^{e_1} \ar[d]_{e_{2,1}} & P_1\ar[d]^{-e_{1,1}} \\ *+[F-:<5pt>]{ 7} & P_2\ar[r]^{e_{1,2}}\ar[d]_{e_{3,2}} & P_1\ar[d]^{-e_{2,1}} \\ *+[F-:<5pt>]{ 6} & P_3\ar[r]^{e_{2,3}}\ar[d]_{e_{4,3}} & P_2\ar[d]^{-e_{3,2}} \\ *+[F-:<5pt>]{ 5} & P_4\ar[r]^{e_{3,4}}\ar[d]_{e_{4,4}} & P_3\ar[d]^{-e_{4,3}} \\ *+[F-:<5pt>]{ 4} & P_4\ar[r]^{e_4} \ar[d]_{e_{3,4}} & P_4\ar[d]^{-e_{4,4}} \\ *+[F-:<5pt>]{ 3} & P_3\ar[r]^{e_{4,3}}\ar[d]_{e_{2,3}} & P_4\ar[d]^{-e_{3,4}} \\ *+[F-:<5pt>]{ 2} & P_2\ar[r]^{e_{3,2}}\ar[d]_{e_{1,2}} & P_3\ar[d]^{-e_{2,3}} \\ *+[F-:<5pt>]{ 1} & P_1\ar[r]^{e_{2,1}}\ar[d]_{e_{1,1}} & P_2\ar[d]^{-e_{1,2}} \\ *+[F-:<5pt>]{ 0} & P_1\ar[r]^{e_1} \ar[d]_{e_{2,1}} & P_1\ar[d] \\ *+[F-:<5pt>]{-1} & P_2\ar[r] \ar[d]_{e_{3,2}} & 0 \ar[d] \\ *+[F-:<5pt>]{-2} & P_3\ar[r] \ar[d]_{e_{4,3}} & 0 \ar[d] \\ *+[F-:<5pt>]{-3} & P_4\ar[r] \ar[d]_{e_{4,4}} & 0 \ar[d] \\ *+[F-:<5pt>]{-4} & P_4\ar[r] \ar[d]_{e_{3,4}} & 0 \ar[d] \\ *+[F-:<5pt>]{-5} & P_3\ar[r] \ar[d]_{e_{2,3}} & 0 \ar[d] \\ *+[F-:<5pt>]{-6} & P_2\ar[r] \ar[d]_{e_{1,2}} & 0 \ar[d] \\ *+[F-:<5pt>]{-7} & P_1\ar[r] \ar[d] & 0 \ar[d] \\ *+[F-:<5pt>]{-8} & 0 \ar[r]\ar[d] & 0 \ar[d] \\
& \rotatebox{-90}{$\cdots)$} & \rotatebox{-90}{$\cdots)$} \\ } \] \end{footnotesize} \end{center} }
\begin{proof} The element $ι$ is well-defined since $ω(y) = ω(l+y)$ for $y\geq 0$.\\* In the definition of $χ$ we need to check that the terms ``$\lfloor * \rfloor_*^*$'' are well-defined. This is easily verified by calculating $ω(y)$ for $y$ the lower and the upper index, respectively, of each ``$\lfloor * \rfloor_*^*$''.
(a): As $\pr_i = \{0\}$ for $i<0$, the identity element of $A$ is given by $ι^0 = \sum\nolimits_{i\geq 0}\lfloor e_{ω(i)}\rfloor_{i}^i$, which agrees with the assertion in case $j=0$. So we have proven the induction basis for induction on $j$. So now assume that for some $j\geq 0$ the assertion holds. Then \begin{align*} ι^{j+1} =\,& ι\circ ι^j = \left(\sum\nolimits_{i\geq 0} \lfloor e_{ω(i)} \rfloor_{i+l}^i\right)\circ\left(\sum\nolimits_{i'\geq 0} \lfloor e_{ω(i')} \rfloor_{i'+jl}^{i'}\right) \\ =\,& \sum\nolimits_{i\geq 0} \lfloor e_{ω(i)}\circ e_{ω(i+l)} \rfloor_{i+l+jl}^i = \sum\nolimits_{i\geq 0} \lfloor e_{ω(i)}\rfloor_{i+(j+1)l}^i\,. \end{align*} Thus the proof by induction is complete.
(b): We have \begin{align*} ι^j \circ h =\,& \left(\sum_{i\geq 0} \sum_{k=0}^{l-1}\lfloor e_{ω(il+k)} \rfloor_{(i+j)l+k}^{il+k}\right) \circ \left(\sum_{i'\geq 0}\sum_{k'=0}^{l-1}\lfloor h_{k'}\rfloor_{i'l+k'+ y}^{i'l+k'}\right) \overset{\natop{i'\leadsto i+j}{k'\leadsto k}}{=} \sum_{i\geq 0} \sum_{k=0}^{l-1} \lfloor h_k\rfloor_{(i+j)l+k+y}^{il+k}\\ h\circ ι^j =\,& \left(\sum_{i\geq 0}\sum_{k=0}^{l-1}\lfloor h_k\rfloor_{il+k+y}^{il+k}\right) \circ
\left(\sum_{i'\geq 0} \lfloor e_{ω(i')} \rfloor_{i'+jl}^{i'}\right) = \sum_{i\geq 0}\sum_{k=0}^{l-1} \lfloor h_k\rfloor_{(i+j)l + k+y}^{il+k}\, . \end{align*} So we have proven (b).
(c): The differential $d$ of $\pres {\mathbb{F}_{\!p}} $ is $l$-periodic (cf. \cref{bem:di}) and thus \begin{align*} m_1(h)\circ ι^j =\,& (d\circ h - (-1)^y h\circ d)\circ ι^j\\
\ovs{(b), |ι^j|\equiv_2 0}& d\circ h\circ ι^j - (-1)^{y+|ι^j|} h\circ ι^j\circ d = m_1(h\circ ι^j). \end{align*}
(d): We have $ m_1(ι^j) \overset{(c)}{=} m_1(ι^0)\circ ι^j = (d\circ ι^0 - (-1)^0 ι^0 d)\circ ι^j = (d - d)\circ ι^j = 0. $
(e) is implied by (b) using the fact that $χ$ is $l$-periodic.
(f): Because of (c) we have $m_1(χι^j) = m_1(χ)\circ ι^j$. Because $|χ|=l-1$ is odd we have \begin{align*} m_1&(χ) = d\circ χ - (-1) χ\circ d = χ\circ d + d\circ χ\\ \ovs{\text{R.}\ref{bem:di}}& \Big(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{il+l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{il+l-1+k}^{il + k}\right\rgroup+ \lfloor e_{p-1}\rfloor_{il+l-1 +(p-1)}^{il+(p-1)}\right.\\ &\left. + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{il+l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big) \circ \left(\sum\nolimits_{y\geq 0} \lfloor e_{ω(y),ω(y+1)}\rfloor_{y+1}^y\right)\\
&+\left(\sum\nolimits_{y\geq 0} \lfloor e_{ω(y),ω(y+1)}\rfloor_{y+1}^y\right) \circ \Big( \sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{il+l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{il+l-1+k}^{il + k}\right\rgroup\right.\\ &\left. + \lfloor e_{p-1}\rfloor_{il+l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{il+l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big)\\
=\,& \sum\nolimits_{i\geq 0}\Big( \lfloor e_1\circ e_{1,1}\rfloor_{il+l}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\circ e_{k,k+1}\rfloor_{il+l+k}^{il+k}\right\rgroup\\ &+\lfloor e_{p-1}\circ e_{p-1,p-1}\rfloor_{il+l+(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p-k-1,p-k}\circ e_{p-k,p-k-1}\rfloor_{il+l+(p-1)+k}^{il+(p-1)+k}\right\rgroup\Big)\\
&+\sum\nolimits_{i\geq 1} \lfloor e_{1,1}\circ e_1\rfloor_{il+l-1}^{il-1} + \sum\nolimits_{i\geq 0}\Big( \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k,k+1}\circ e_{k+1,k}\rfloor_{il+l+k-1}^{il+k-1}\right\rgroup\\ &+ \lfloor e_{p-1,p-1}\circ e_{p-1}\rfloor_{il+l-1+(p-1)}^{il-1+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p-k,p-k-1}\circ e_{p-k-1,p-k}\rfloor_{il+l-1+(p-1)+k}^{il-1+(p-1) + k}\right\rgroup\Big)\\
\overset{*}{=\,}& \sum\nolimits_{i\geq 0}\Big( \lfloor e_{1,1} + e_{1,2}\circ e_{2,1}\rfloor_{il+l}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-3}\lfloor e_{k+1,k}\circ e_{k,k+1}+ e_{k+1,k+2}\circ e_{k+2,k+1}\rfloor_{il+l+k}^{il+k}\right\rgroup\\
&+ \lfloor e_{p-1,p-2}\circ e_{p-2,p-1} + e_{p-1,p-1}\rfloor_{il+l+p-2}^{il+p-2} + \lfloor e_{p-1,p-1}+e_{p-1,p-2}\circ e_{p-2,p-1}\rfloor_{il+l+p-1}^{il+p-1}\\
&+ \left\lgroup\sum\nolimits_{k=1}^{p-3}\lfloor e_{p-k-1,p-k}\circ e_{p-k,p-k-1} + e_{p-k-1,p-k-2}\circ e_{p-k-2,p-k-1}\rfloor_{il+l+p-1+k}^{il+p-1+k}\right\rgroup\\
&+ \lfloor e_{1,2}\circ e_{2,1} + e_{1,1}\rfloor_{(i+1)l+l-1}^{(i+1)l-1}\Big) \overset{\text{L.}\ref{lem:relfp}(a)}{=} 0 \end{align*} In the step marked by ``$*$'' we sort the summands by their targets. Note that when splitting sums of the form $\sum\nolimits_{k=1}^{p-2}(…)_k$ into $(…)_1+\sum\nolimits_{k=2}^{p-2}(…)_k$ or into $(…)_{p-2}+\sum\nolimits_{k=1}^{p-3}(…)_k$, the existence of the summand that is split off is ensured by $p\geq 3$.
(g): We first show that the differentials of the complex $\Hom^*(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ (cf. \cref{prpr-prm}) are all zero: By \cref{lem:relfp}, $\{ε\}$ is an $ {\mathbb{F}_{\!p}} $-basis of $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P_1, {\mathbb{F}_{\!p}} )$, and for $k∈[2,p-1]$ we have $\Hom_{ {\mathbb{F}_{\!p}} \!\Sy_p}(P_k, {\mathbb{F}_{\!p}} ) = 0$. So the only possibly non-trivial $(d_{k+1})^*$ are those where $\pr_k = \pr_{k+1} = P_1$. This is the case only when $k=lj + l-1$ for some $j\geq 0$. Then $d_{k+1}= e_{1,1}$. For $ε∈\Hom(P_1, {\mathbb{F}_{\!p}} )$, we have $(d_{k+1})^* (ε) = (-1)^{k+1}ε\circ e_{1,1} \overset{\text{L.}\ref{lem:relfp}(a)}{=} 0$. As $ \Hom(P_1, {\mathbb{F}_{\!p}} )= \langle ε\rangle_ {\mathbb{F}_{\!p}} $, we have $(d_{k+1})^* = 0$.
So ${\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} ) = \Hom^k(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$. We use \cref{prpr-prm}.\\* For $k=jl$, $j\geq 0$, we have $\bar{Ψ}_k(\overline{ι^j}) \overset{(a)}{=} ε$, and $\{ε\}$ is a basis of ${\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$.\\* For $k=jl+l-1$, $j\geq 0$, we have $\bar{Ψ}_k(\overline{χι^j})\overset{(e)}{=}ε$, and $\{ε\}$ is a basis of ${\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$.\\* Finally, for $k=jl+r$ for some $j\geq 0$ and some $r∈[1,l-2]$ and for $k<0$, we have ${\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )=\{0\}$. \end{proof}
\subsection[An \texorpdfstring{${\rm A}_\infty$}{A(oo)}-structure on \texorpdfstring{$\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}\!( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$}{Ext*(FpSp)(Fp,Fp)} as a minimal model of \texorpdfstring{$\Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}$}{Hom*(FpSp)}\discretionary{}{}{}\texorpdfstring{$(\pres {\mathbb{F}_{\!p}} ,$}{(PResFp,)}\discretionary{}{}{}\texorpdfstring{$\pres {\mathbb{F}_{\!p}} )$}{PresFp}] {An \texorpdfstring{${\rm A}_\infty$}{A(oo)}-structure on \texorpdfstring{$\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$}{Ext*(FpSp)(Fp,Fp)} as a minimal model of \texorpdfstring{$\Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$}{Hom*(FpSp)(PResFp,PresFp)}} \label{subsec:minmod}
Recall that $p\geq 3$ is a prime. Write $A := \Hom^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$, which becomes an ${\rm A}_∞$-algebra $(A, (m_n)_{n\geq 1})$ over $R= {\mathbb{F}_{\!p}} $ via \cref{lem:cai}. We implement $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} )$ as $\Ext^*_{ {\mathbb{F}_{\!p}} \!\Sy_p}( {\mathbb{F}_{\!p}} , {\mathbb{F}_{\!p}} ):={\operatorname{H}}^*A$.
Our goal in this section is to construct an ${\rm A}_\infty$-structure $(m'_n)_{n\geq 1}$ on ${\operatorname{H}}^*A$ and a morphism of ${\rm A}_\infty$-algebras $f=(f_n)_{n\geq 1}:({\operatorname{H}}^*A,(m'_n)_{n\geq 1})→(A, (m_n)_{n\geq 1})$ which satisfy the statements of \cref{tm:kadeishvili}. I.e.\ we will construct a minimal model of $A$.
In preparation of the definitions of the $f_n$ and $m'_n$, we name and examine certain elements of $A$: \begin{lemma} \label[lemma]{lem:gamma} Suppose given $k∈[2,p-1]$. We set \begin{align*} γ_k:= \sum\nolimits_{i\geq 0}\left( \lfloor e_k \rfloor_{k(l-1)+li}^{k-1+li} +\lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + li}^{k-1+(p-1)+li}\right)∈A^{k(l-2)+1}. \end{align*} For $j\geq 0$, we have \begin{align*} γ_kι^j := γ_k\circ ι^j = ι^j\circ γ_k = \sum\nolimits_{i\geq 0}\left( \lfloor e_k \rfloor_{k(l-1)+l(i+j)}^{k-1+li} + \lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + l(i+j)}^{k-1+(p-1)+li}\right) ∈A^{k(l-2)+1+jl} \end{align*} and
\begin{align*} m_1(γ_kι^j)=\,&\sum\nolimits_{i\geq 0}\left( \lfloor e_{k-1,k} \rfloor_{k(l-1)+l(i+j)}^{k-2+li} +
\lfloor e_{p-k+1,p-k}\rfloor_{k(l-1) + (p-1) + l(i+j)}^{k-2+(p-1)+li}\right.\\ &\left.+
\lfloor e_{k,k-1}\rfloor_{k(l-1)+1+l(i+j)}^{k-1+li} + \lfloor e_{p-k,p-(k-1)}\rfloor_{k(l-1) + p + l(i+j)}^{k-1+(p-1)+li}\right). \end{align*} \end{lemma} \begin{proof} We need to prove that $γ_k$ is well-defined. Let $i\geq 0$.\\* We consider the first term. The complex $\pres {\mathbb{F}_{\!p}} $ (cf.\ \eqref{prfp}, \eqref{omega}) has entry $P_k$ at position $k(l-1)+li$ and at position $k-1+li$: We have $k(l-1)+li = (k-1+i)l + l - k$. So $ω(k(l-1)+li) = l - (l - k) = k$ since $p-1\leq l-k \leq l-1$. We have $ω(k-1+li) = (k-1)+1 = k$ since $0\leq k-1 \leq p-2$. As $k(l-1)+li, k-1+li\geq 0$, we have $\pr_{k(l-1)+li}=P_{ω(k(l-1)+li)}=P_k$ and $\pr_{k-1+li} = P_{ω(k-1+li)} = P_k$. So the first term is well-defined.\\*
Now consider the second term. The complex $\pres {\mathbb{F}_{\!p}} $ has entry $P_{p-k}$ at position $k(l-1) + (p-1) + li$ and at position $k-1+(p-1)+li$: We have $k(l-1) + (p-1) + li = (i+k)l + (p-1)-k$, so $ω(k(l-1) + (p-1)+li) = (p-1)-k + 1 = p-k$ since $0\leq (p-1)-k \leq p-2$. We have $ω(k-1+(p-1)+li) = 2(p-1) - (k-1) -(p-1) = p-k$ since $p-1\leq k-1 + (p-1) \leq 2(p-1)-1$. As $k(l-1) + (p-1) + li, k-1+(p-1)+li\geq 0$, we have $\pr_{k(l-1)+(p-1)+li} = P_{ω(k(l-1)+(p-1)+li)} = P_{p-k}$ and $\pr_{k-1+(p-1)+li} = P_{ω(k-1+(p-1)+li)} = P_{p-k}$. So the second term is well-defined.
The degree of the tuple of maps is computed to be $(k(l-1) + li) - (k-1+li) = k(l-2) + 1 = (k(l-1)+(p-1)+li) - (k-1+(p-1)+li)$.
The explicit formula for $γ_kι^j$ is an application of \cref{pp:iota}(b).
The degree $|γ_kι^j| = k(l-2) + 1$ is odd, so \begin{align*} m_1(γ_kι^j)\,\, \ovs{\text{L.}\ref{lem:cai}}& d \circ γ_kι^j + γ_kι^j \circ d\\ \ovs{\text{R.}\ref{bem:di}}& \sum\nolimits_{i\geq 0} \lfloor e_{ω(k-2),ω(k-1)}\rfloor_{k-1+li}^{k-2+li} \circ \sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{k(l-1)+l(i+j)}^{k-1+li} \\ &+ \sum\nolimits_{i\geq 0} \lfloor e_{ω(p-1+k-2),ω(p-1+k-1)}\rfloor_{k-1+(p-1)+li}^{k-2+(p-1)+li}\circ \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + l(i+j)}^{k-1+(p-1)+li} \\ &+\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{k(l-1)+l(i+j)}^{k-1+li} \circ \sum\nolimits_{i\geq 0} \lfloor e_{ω(l-k),ω(l-k+1)}\rfloor_{k(l-1)+1+l(i+j)}^{k(l-1)+l(i+j)}\\ &+ \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + l(i+j)}^{k-1+(p-1)+li}\circ
\sum\nolimits_{i\geq 0} \lfloor e_{ω(p-1-k),ω(p-k)}\rfloor_{k(l-1) + p + l(i+j)}^{k(l-1) + (p-1) + l(i+j)}\\ =\,&\sum\nolimits_{i\geq 0} \lfloor e_{k-1,k} \rfloor_{k(l-1)+l(i+j)}^{k-2+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k+1,p-k}\rfloor_{k(l-1) + (p-1) + l(i+j)}^{k-2+(p-1)+li}\\ &+\sum\nolimits_{i\geq 0} \lfloor e_{k,k-1}\rfloor_{k(l-1)+1+l(i+j)}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k,p-(k-1)}\rfloor_{k(l-1) + p + l(i+j)}^{k-1+(p-1)+li}\\ \end{align*} Note that in the second line $k-2 +li\geq 0$ as $i\geq 0$ and $k\geq 2$. \end{proof} \begin{lemma} \label[lemma]{lem:chichi} For $j,j'\geq 0$, we have $ χι^j \circ χι^{j'} = m_1(γ_2ι^{j+j'}). $ \end{lemma} \begin{proof} It suffices to prove that $χ\circ χ = m_1(γ_2)$ since then $χι^j \circ χι^{j'} \overset{\text{P.}\ref{pp:iota}(e)}{=} χ\circ χ\circ ι^{j+j'} = m_1(γ_2)\circ ι^{j+j'} \overset{\text{P.}\ref{pp:iota}(c)}{=} m_1(γ_2ι^{j+j'})$.\\* To determine when a composite is zero, we will need the following. For \mbox{$0\leq k,k' < l$}, we examine the condition \begin{align} \label{mulccg} il +l - 1 + k = i'l + k'. \end{align} If $k=0$ then \eqref{mulccg} holds iff $i=i'$ and $k' = l-1$.\\* If $k\geq 1$ then \eqref{mulccg} holds iff $i+1=i'$ and $k'=k-1$.\\* So
\begin{align*} χ\circ χ \ovs{p\geq 3\vphantom{I_{I_y}}}& \left(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{il+l-1}^{il} + \lfloor e_{2,1}\rfloor_{il+l}^{il+1}+\left\lgroup\sum\nolimits_{k=2}^{p-2}\lfloor e_{k+1,k}\rfloor_{il+l-1+k}^{il + k}\right\rgroup\right.\right.\\ & + \lfloor e_{p-1}\rfloor_{il+l-1 +(p-1)}^{il+(p-1)} + \lfloor e_{p-2,p-1}\rfloor_{il+l+p-1}^{il+p} +\left.\left.\left\lgroup\sum\nolimits_{k=2}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{il+l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\right)\\ &\circ\left(\sum\nolimits_{i'\geq 0}\left(\lfloor e_1 \rfloor_{i'l+l-1}^{i'l} + \left\lgroup\sum\nolimits_{k'=1}^{p-3}\lfloor e_{k'+1,k'}\rfloor_{i'l+l-1+k'}^{i'l + k'}\right\rgroup\right.\right.
+ \lfloor e_{p-1,p-2}\rfloor_{i'l+l+p-3}^{i'l+p-2}\\ & + \lfloor e_{p-1}\rfloor_{i'l+l-1 +(p-1)}^{i'l+(p-1)} \left.\left.+ \left\lgroup\sum\nolimits_{k'=1}^{p-3}\lfloor e_{p -k'-1,p-k'}\rfloor_{i'l+l-1+(p-1)+k'}^{i'l +(p-1)+ k'}\right\rgroup+ \lfloor e_{1,2}\rfloor_{i'l+l+2(p-2)}^{i'l+l-1}\right)\right)\\
=\,& \sum\nolimits_{i\geq 0}\Big(\lfloor e_1\circ e_{1,2} \rfloor_{il+l+2(p-2)}^{il} + \lfloor e_{2,1}\circ e_1\rfloor_{il+2l-1}^{il+1}\\ & +\left\lgroup\sum\nolimits_{k=2}^{p-2}\right.\!\underbrace{\lfloor e_{k{+}1,k}\circ e_{k,k{-}1}\rfloor_{il+2l-1+k-1}^{il + k}}_{=0\text{ by L.\ref{lem:relfp}(c)}}\left.\vphantom{\sum\nolimits_k^p}\!\!\right\rgroup + \lfloor e_{p-1}\circ e_{p-1,p-2}\rfloor_{il+2l+p-3}^{il+(p-1)}\\ &+ \lfloor e_{p-2,p-1}\circ e_{p-1}\rfloor_{il+2l+p-2}^{il+p} +\left\lgroup\sum\nolimits_{k=2}^{p-2}\right.\underbrace{\lfloor e_{p -k-1,p-k}\circ e_{p-k,p-k+1}\rfloor_{il+2l-1+p-1+k-1}^{il +(p-1)+ k}}_{=0\text{ by L.\ref{lem:relfp}(c)}}\left.\vphantom{\sum\nolimits_k^p}\!\right\rgroup\Big)\\
=\,& \sum\nolimits_{i\geq 0}\Big(\lfloor e_{1,2} \rfloor_{(i+2)l-2}^{il} + \lfloor e_{2,1}\rfloor_{(i+2)l-1}^{il+1} +\lfloor e_{p-1,p-2}\rfloor_{(i+2)l+p-3}^{il+p-1} + \lfloor e_{p-2,p-1}\rfloor_{(i+2)l+p-2}^{il+p} \Big)\\ \ovs{\text{L.}\ref{lem:gamma}}& m_1(γ_2) \end{align*}
\end{proof}
Below are the definitions which will give a minimal ${\rm A}_∞$-algebra structure on ${\operatorname{H}}^*A$ and a quasi-isomorphism of ${\rm A}_∞$-algebras ${\operatorname{H}}^*A → A$. \begin{Def} \label[Def]{defall} Recall from \cref{pp:iota} that $\mathfrak{B}= \mbox{$\{\overline{ι^j}\mid j\geq 0\}\sqcup \{\overline{χι^j}\mid j\geq 0\}$} = \{\overline{χ^aι^j} \mid j\geq 0, a∈\{0,1\}\}$ is a basis of ${\operatorname{H}}^*A$. For $n∈\mathbb{Z}_{\geq 1}$, we set \begin{align*} \mathfrak{B}^{\otimes n} :=\,& \{\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}} ∈ ({\operatorname{H}}^*A)^{\otimes n} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,n] \},
\end{align*}
which is a basis of $({\operatorname{H}}^* A)^{\otimes n}$ consisting of homogeneous elements.
For $n\geq 1$, we define the $ {\mathbb{F}_{\!p}} $-linear map $f_n:({\operatorname{H}}^*A)^{\otimes n}→ A$ as follows: \begin{description} \item[Case $n=1$:] $f_1$ is given on $\mathfrak{B}$ by $f_1(\overline{ι^j}):=ι^j$ and $f_1(\overline{χι^j}):= χι^j$. \item[Case {$n∈[2,p-1]$}:] $f_n$ is given on elements of $\mathfrak{B}^{\otimes n}$
by \begin{align*} f_n(\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}}) := \begin{cases} 0 & \text{if }∃i∈[1,n]: a_i = 0 \\ (-1)^{n-1}γ_nι^{j_1+…+j_n} & \text{if } 1=a_1 = a_2 = … = a_n \end{cases} \end{align*} \item[Case {$n\geq p$}:] We set $f_n:= 0$. \end{description} For $n\geq 1$, we define the $ {\mathbb{F}_{\!p}} $-linear map $m'_n:({\operatorname{H}}^*A)^{\otimes n}→ {\operatorname{H}}^*A$ by defining it on elements $\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}}∈\mathfrak{B}^{\otimes n}$: \begin{description} \item[{Case $∃i∈[1,n]: a_i = 0$:}] $ $\\*
$m'_n(\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}}) := 0$ for $n\neq 2$ and\\*
$ m'_2(\overline{χ^{a_1}ι^{j_1}} \otimes \overline{χ^{a_2}ι^{j_2}}) := \overline{χ^{a_1+a_2}ι^{j_1+j_2}} $ (Note that $a_1+a_2∈\{0,1\}$).
\item[Case $a_1 = a_2 = … = a_n=1$:] $ $\\*
$m'_n(\overline{χι^{j_1}}\otimes … \otimes \overline{χι^{j_n}}):= 0$ for $n\neq p$ and\\* $m'_p(\overline{χι^{j_1}}\otimes … \otimes \overline{χι^{j_p}}) := (-1)^p\overline{ι^{p-1+j_1+…+j_p}}=-\overline{ι^{p-1+j_1+…+j_p}}$. \end{description}
\end{Def} Note that since $p\geq 3$, we have $m'_2(\overline{χι^{j_1}} \otimes \overline{χι^{j_2}}) = 0$ for $j_1,j_2\geq 0$.
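For instance, for $p=3$ these definitions amount to the following: $f_2(\overline{χι^{j_1}}\otimes \overline{χι^{j_2}}) = -γ_2ι^{j_1+j_2}$ and $f_2$ vanishes on all other elements of $\mathfrak{B}^{\otimes 2}$, and $f_n=0$ for $n\geq 3$; moreover, $m'_2(\overline{χ^{a_1}ι^{j_1}} \otimes \overline{χ^{a_2}ι^{j_2}}) = \overline{χ^{a_1+a_2}ι^{j_1+j_2}}$ whenever $0∈\{a_1,a_2\}$, and the only non-vanishing higher operation is $m'_3(\overline{χι^{j_1}}\otimes \overline{χι^{j_2}}\otimes \overline{χι^{j_3}}) = -\overline{ι^{2+j_1+j_2+j_3}}$.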
\begin{tm} \label[tm]{tm:statement} The pair $({\operatorname{H}}^*A, (m'_n)_{n\geq 1})$ is a minimal ${\rm A}_∞$-algebra.
The tuple $(f_n)_{n\geq 1}$ is a quasi-isomorphism of ${\rm A}_∞$-algebras from $({\operatorname{H}}^*A, (m'_n)_{n\geq 1})$ to $(A, (m_n)_{n\geq 1})$. More precisely, \mbox{$f_1:({\operatorname{H}}^* A, m'_1)→(A, m_1)$} induces the identity in homology. \end{tm} The proof of \cref{tm:statement} will take the remainder of \cref{subsec:minmod}. We will use \cref{lem:aaut}. \begin{lemma} \label[lemma]{lem:degree}
The maps $f_n$ and $m'_n$ have degree $|f_n| = 1-n$ and $|m'_n| = 2-n$. I.e.\ $(f_n)_{n\geq 1}$ is a pre-${\rm A}_∞$-morphism from ${\operatorname{H}}^* A$ to $A$, and $({\operatorname{H}}^*A, (m'_n)_{n\geq 1})$ is a pre-${\rm A}_∞$-algebra. \end{lemma} \begin{proof}
We have $|f_1|=0$ as $|\overline{ι^j}| = |ι^j|$ and $|\overline{χι^j}|=|χι^j|$. For $n\geq p$ the map $f_n$ is of degree $1-n$ as $f_n=0$. For $n∈[2,p-1]$ the statement $|f_n|=1-n$ is proven by checking the degrees for the elements of the basis $\mathfrak{B}^{\otimes n}$ whose image under $f_n$ is non-zero: \begin{align*}
|f_n(\overline{χι^{j_1}}\otimes … \otimes \overline{χι^{j_n}})| =\,&
|(-1)^{n-1}γ_nι^{j_1+…+j_n}| \overset{\text{L.}\ref{lem:gamma}}{=} (j_1+…+j_n)l + n(l-1) + 1-n\\
=\,& 1-n+\sum\nolimits_{x=1}^n |\overline{χι^{j_x}}| = 1-n + |\overline{χι^{j_1}}\otimes … \otimes \overline{χι^{j_n}}| \end{align*}
Thus $|f_n|=1-n$ for all $n$ and we have proven the first statement.
Now we show $|m'_n|=2-n$. As before, we only need check the degrees for basis elements whose image is non-zero: For $\overline{χ^{a_1}ι^{j_1}}\otimes\overline{χ^{a_2}ι^{j_2}}$, $j_1,j_2\geq 0$, $a_1,a_2∈\{0,1\}$, $0∈\{a_1,a_2\}$, we have \begin{align*}
|m'_2(\overline{χ^{a_1}ι^{j_1}}\otimes\overline{χ^{a_2}ι^{j_2}})| =\,& |\overline{χ^{a_1+a_2}ι^{j_1+j_2}}| = (a_1+a_2)(l-1)+l(j_1+j_2)\\
=\,& a_1(l-1) + j_1l + a_2(l-1) + j_2l = |\overline{χ^{a_1}ι^{j_1}}\otimes\overline{χ^{a_2}ι^{j_2}}| + (2-2). \end{align*} For $\overline{χι^{j_1}}\otimes \cdots \otimes \overline{χι^{j_p}}$, $j_x\geq 0$ for $x∈[1,p]$, we have \begin{align*}
|m'_p(\overline{χι^{j_1}}\otimes \cdots \otimes \overline{χι^{j_p}})| =\,& |\overline{ι^{p-1+j_1+…+j_p}}| = l(p-1+j_1+…+j_p)\\ =\,& lp - l + l(j_1+…+j_p) = lp - 2p+2 + l(j_1+…+j_p)\\
=\,& p(l-1) + l(j_1+…+j_p) + 2-p
= |\overline{χι^{j_1}}\otimes \cdots \otimes \overline{χι^{j_p}}| + 2-p \end{align*} \end{proof}
\begin{lemma} \label[lemma]{lem:f1} We have $m'_1=0$. The equation \eqref{finfrel}$[1]$ holds. The complex morphism {$f_1:({\operatorname{H}}^*A,m'_1)→(A,m_1)$} is a quasi-isomorphism inducing the identity in homology. \end{lemma} \begin{proof} The equality $m'_1=0$ follows immediately from the definition. Thus $m_1\circ f_1 = 0 = f_1\circ m'_1$. Moreover, $f_1$ is a quasi-isomorphism inducing the identity in homology by construction, cf.\ \cref{pp:iota}(g). \end{proof}
\begin{lemma} \label[lemma]{lem:f1inj} The map $f_1$ is injective. \end{lemma} \begin{proof} The set $X:=\{ χ^aι^j \mid a∈\{0,1\}, j∈\mathbb{Z}_{\geq 0}\}\subseteq A$ is linearly independent, since it consists of nonzero elements lying in pairwise different summands of the direct sum $A=\bigoplus_{k∈\mathbb{Z}}\Hom^k(\pres {\mathbb{F}_{\!p}} ,\pres {\mathbb{F}_{\!p}} )$.
The set $\mathfrak{B}$, which is a basis of ${\operatorname{H}}^* A$, is mapped bijectively to $X$ by $f_1$, so $f_1$ is injective. \end{proof}
\begin{lemma} \label[lemma]{lem:f2} The equation \eqref{finfrel}$[2]$ holds.
\end{lemma} \begin{proof} As $m'_1=0$, equation \eqref{finfrel}[$2$] is equivalent to (cf.\ \eqref{finfrel2}) \begin{align*}
f_1\circ m'_2 =\,& m_1\circ f_2 + m_2\circ (f_1\otimes f_1). \end{align*} We check this equation on $\mathfrak{B}^{\otimes 2}$: Recall \cref{pp:iota,defall}.
\begin{align*} f_1m'_2(\overline{ι^j}\otimes \overline{ι^{j'}}) =\,& ι^{j+j'} = m_2(f_1\otimes f_1)(\overline{ι^j}\otimes \overline{ι^{j'}}) = (m_1\circ f_2 + m_2\circ (f_1\otimes f_1))(\overline{ι^j}\otimes \overline{ι^{j'}})\\
f_1m'_2(\overline{ι^j}\otimes \overline{χι^{j'}})=\,& χι^{j+j'} = m_2(f_1\otimes f_1)(\overline{ι^j}\otimes \overline{χι^{j'}})\\ =\,& (m_1\circ f_2 + m_2\circ (f_1\otimes f_1)) (\overline{ι^j}\otimes \overline{χι^{j'}})\\
f_1m'_2(\overline{χι^j} \otimes \overline{ι^{j'}}) =\,& χι^{j+j'} = m_2(f_1\otimes f_1)(\overline{χι^j} \otimes \overline{ι^{j'}})\\ =\,& (m_1\circ f_2 + m_2\circ (f_1\otimes f_1))(\overline{χι^j} \otimes \overline{ι^{j'}})\\
f_1m'_2(\overline{χι^j}\otimes \overline{χι^{j'}}) =\, &0 \overset{\text{L.}\ref{lem:chichi}}{=} -m_1(γ_2ι^{j+j'}) + m_2(f_1\otimes f_1)(\overline{χι^j}\otimes \overline{χι^{j'}})\\ =\,& (m_1\circ f_2 + m_2\circ (f_1\otimes f_1))(\overline{χι^j}\otimes \overline{χι^{j'}}) \end{align*}
Note that there are no additional signs due to the Koszul sign rule since $|f_1|=0$. \end{proof} The following results directly from \cref{defall}. \begin{kor} \label[kor]{kor:zero} For $n\geq 2$ and $a_1,…,a_n∈\{0,1\}$, $j_1,…,j_n\geq 0$, we have
\begin{align*} f_n(\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}})= f_n(\overline{χ^{a_1}}\otimes … \otimes \overline{χ^{a_n}}) \circ ι^{j_1+…+j_n}. \end{align*} If there is additionally an $x∈[1,n]$ with $a_x=0$ then
\[f_n(\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}})=0.\] \end{kor} Equation \eqref{finfrel}[$n$] can be reformulated as \begin{align*} &f_1\circ m'_n+\underbrace{\sum_{\substack{n=r+s+t \\ r,t\geq 0,s\geq 1\\ s\leq n-1}} (-1)^{rs+t} f_{r+1+t}\circ (1^{\otimes r}\otimes m'_s\otimes 1^{\otimes t})}_{=:Φ_n}\\
&= m_1\circ f_n +\underbrace{\sum_{\substack{2\leq r\leq n \\ i_1+…+i_r=n\\ i_s\geq 1}} (-1)^v m_r\circ (f_{i_1}\otimes f_{i_2}\otimes … \otimes f_{i_r})}_{=:Ξ_n}\,, \end{align*} where $v= \sum_{1\leq t < s\leq r}(1-i_s)i_t$.
A term of the form $f_{r+1+t}\circ (1^{\otimes r}\otimes m'_s\otimes 1^{\otimes t})$ with $s\geq 3$ and $r+t\geq 1$ is zero because of \cref{kor:zero} and the definition of the $m'_s$ in \cref{defall}. Also recall $m'_1=0$. Thus, for $n\geq 3$, \begin{align} \label{eqphi} Φ_n
=\,&\sum_{\mathclap{\substack{n=r+2+t\\ r,t\geq 0}}} (-1)^{2r+t}f_{n-1}\circ (1^{\otimes r}\otimes m'_2\otimes 1^{\otimes t}) = \sum_{r=0}^{n-2} (-1)^{n-r} f_{n-1}\circ (1^{\otimes r}\otimes m'_2\otimes 1^{\otimes n-r-2}). \end{align} Because of $m_k=0$ for $k\geq 3$, we have \begin{align} \label{eqxi} Ξ_n =\,& \sum_{\substack{i_1+i_2 = n \\ i_1,i_2\geq 1}}(-1)^{(1-i_2)i_1} m_2\circ (f_{i_1}\otimes f_{i_2}) = \sum_{i=1}^{n-1}(-1)^{ni}m_2\circ (f_i\otimes f_{n-i}). \end{align} We have proven: \begin{lemma} \label[lemma]{lem:finfrelmod} For $n\geq 3$, condition \eqref{finfrel}$[n]$ is equivalent to $f_1\circ m'_n + Φ_n = m_1\circ f_n + Ξ_n$ where $Φ_n$ and $Ξ_n$ are as in \eqref{eqphi} and \eqref{eqxi}. \end{lemma}
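For instance, for $n=3$, equations \eqref{eqphi} and \eqref{eqxi} give $Φ_3 = -f_2\circ(m'_2\otimes 1) + f_2\circ(1\otimes m'_2)$ and $Ξ_3 = -m_2\circ (f_1\otimes f_2) + m_2\circ (f_2\otimes f_1)$, so by \cref{lem:finfrelmod} condition \eqref{finfrel}$[3]$ amounts to
\begin{align*}
f_1\circ m'_3 - f_2\circ(m'_2\otimes 1) + f_2\circ(1\otimes m'_2) =\,& m_1\circ f_3 - m_2\circ (f_1\otimes f_2) + m_2\circ (f_2\otimes f_1).
\end{align*}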
\begin{lemma} \label[lemma]{lem:f3}
Condition \eqref{finfrel}$[n]$ holds for $n\geq 3$ and arguments $\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}∈\mathfrak{B}^{\otimes n}=\{\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}} ∈ ({\operatorname{H}}^*A)^{\otimes n} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,n] \}$ where $0∈\{a_1,…,a_n\}$. \end{lemma} \begin{proof}
Because of \cref{lem:finfrelmod,defall} it is sufficient to show that \[Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) = Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\] if at least one $a_x$ equals $0$.
\begin{description} \item[Case 1] At least two $a_x$ equal $0$:\\* To show $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$, we show \\* $f_{n-1}(1^{\otimes r}\otimes m'_2\otimes 1^{\otimes n-r-2})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$ for $r∈[0,n-2]$: In case both components of the argument of $m'_2$ are of the form $\overline{χ^0ι^j}$, the result of $m'_2$ is of the form $\overline{ι^{j'}}$ (see \cref{defall}). Since $2\leq n-1$, \cref{kor:zero} implies the result of $f_{n-1}$ is zero. Otherwise at least one of the components of the argument of $f_{n-1}$ must be of the form $\overline{ι^j}$ and the result of $f_{n-1}$ is zero as well. So $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$.\\* To show $Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$, we show $m_2(f_i\otimes f_{n-i})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$ for $i∈[1,n-1]$: \begin{itemize} \item Suppose given $i∈[2,n-2]$: The statements $a_1=…=a_i=1$ and $a_{i+1}=…=a_n=1$ cannot be true at the same time, so $f_i(…)=0$ or $f_{n-i}(…)=0$ and we have $m_2(f_i\otimes f_{n-i})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$. \item Suppose that $i=1$. Because at least two $a_x$ equal $0$ the statement $a_2=…=a_n=1$ cannot be true. Since $n-1\geq 2$, we have $f_{n-1}(…)=0$ and $m_2(f_1\otimes f_{n-1})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$. \item The case $i=n-1$ is analogous to the case $i=1$. \end{itemize} So we have $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) = 0=Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})$. \item[Case 2a] Exactly one $a_x$ equals $0$, where $x∈[2,n-1]$.\\* We have $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$: In case $n\geq p+1$, it follows from $f_{n-1}=0$. Let us check the case $n∈[3,p]$: Because of \cref{defall}, we have \mbox{$f_{n-1}(1^{\otimes r}\otimes m'_2\otimes 1^{\otimes n-r-2})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$} unless $r∈\{x-2,x-1\}$. So \begin{align*} Φ_n(\overline{χ^{a_1}ι^{j_1}}&\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\ =\,& (-1)^{n-x+2}f_{n-1}(1^{\otimes x-2}\otimes m'_2\otimes 1^{\otimes n-x} - 1^{\otimes x-1}\otimes m'_2\otimes 1^{n-x-1})\\
&(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\
=\,& (-1)^{n-x}f_{n-1}(\overline{χι^{j_1}}\otimes …\otimes\overline{χι^{j_{x-2}}}\otimes m'_2(\overline{χι^{j_{x-1}}}\otimes \overline{ι^{j_x}})\otimes \overline{χι^{j_{x+1}}}\otimes…\otimes \overline{χι^{j_n}}\\ &- \overline{χι^{j_1}}\otimes …\otimes\overline{χι^{j_{x-1}}}\otimes m'_2(\overline{ι^{j_{x}}}\otimes \overline{χι^{j_{x+1}}})\otimes \overline{χι^{j_{x+2}}}\otimes…\otimes \overline{χι^{j_n}})\\
=\,& (-1)^{n-x}f_{n-1}(\overline{χι^{j_1}}\otimes …\otimes\overline{χι^{j_{x-2}}}\otimes \overline{χι^{j_{x-1}+j_x}}\otimes \overline{χι^{j_{x+1}}}\otimes…\otimes \overline{χι^{j_n}}\\ &- \overline{χι^{j_1}}\otimes …\otimes\overline{χι^{j_{x-1}}}\otimes \overline{χι^{j_{x}+j_{x+1}}}\otimes \overline{χι^{j_{x+2}}}\otimes…\otimes \overline{χι^{j_n}})\\
=\,& (-1)^{n-x}((-1)^{n-2}γ_{n-1}ι^{j_1+…+j_n} - (-1)^{n-2}γ_{n-1}ι^{j_1+…+j_n}) = 0 \end{align*} To show $Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$, we show $m_2(f_i\otimes f_{n-i})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$ for $i∈[1,n-1]$: The element $χ^{a_x}ι^{j_x}$ is a tensor factor of the argument of $f_i$ or of $f_{n-i}$. We write $y=i$ or $y=n-i$ accordingly. Then $y\geq 2$ since $x\notin\{1,n\}$, so $f_y(…) = 0$ and thus $m_2(f_i\otimes f_{n-i})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$.\\* So $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) = 0=Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})$.
\item[Case 2b] Only $a_1=0$, all other $a_x$ equal $1$.\\* We have $f_{n-1}(1^{\otimes r}\otimes m'_2\otimes 1^{\otimes n-r-2})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$ unless $r=0$. So \begin{align*} Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =\,&(-1)^nf_{n-1}(1^{\otimes 0}\otimes m'_2\otimes 1^{\otimes n-2})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\ =\,&(-1)^n f_{n-1}(m'_2(\overline{ι^{j_1}}\otimes \overline{χι^{j_2}})\otimes \overline{χι^{j_3}}\otimes …\otimes \overline{χι^{j_n}})\\ =\,&(-1)^n f_{n-1}(\overline{χι^{j_1+j_2}}\otimes \overline{χι^{j_3}}\otimes …\otimes \overline{χι^{j_n}})\\
=\,& \begin{cases} γ_{n-1}ι^{j_1+…+j_n} & 3\leq n\leq p\\ 0 & n\geq p+1 \end{cases} \end{align*} We have $(f_i\otimes f_{n-1})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})=0$ if $i\geq 2$. So \begin{align*} Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =\,& (-1)^{1\cdot n}m_2(f_1\otimes f_{n-1})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\
\ovs{\eqref{koszulraw}}& (-1)^nm_2\left( (-1)^{n\cdot |ι^{j_1}|}f_1(\overline{ι^{j_1}})\otimes f_{n-1}(\overline{χι^{j_2}}\otimes…\otimes\overline{χι^{j_n}})\right)\\ =\,& (-1)^n m_2\left(ι^{j_1}\otimes f_{n-1}(\overline{χι^{j_2}}\otimes…\otimes\overline{χι^{j_n}})\right)\\% = s_{n-1}γ_{n-1}ι^{j_1+…+j_n} =\,& \begin{cases} (-1)^nm_2(ι^{j_1}\otimes (-1)^{n-2}γ_{n-1}ι^{j_2+…+j_n}) & 3\leq n\leq p\\ 0 & n\geq p+1 \end{cases}\\ =\,& \begin{cases} γ_{n-1}ι^{j_1+…+j_n} & 3\leq n\leq p\\ 0 & n\geq p+1 \end{cases}
\end{align*} So $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})$.
\item[Case 2c] Only $a_n=0$, all other $a_x$ equal $1$.\\* Argumentation analogous to case 2b gives \begin{align*} Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =\,&(-1)^{2}f_{n-1}(1^{\otimes n-2}\otimes m'_2\otimes 1^{\otimes 0})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\
\ovs{|m'_2|=0}&f_{n-1}(\overline{χι^{j_1}}\otimes …\otimes \overline{χι^{j_{n-2}}}\otimes m'_2(\overline{χι^{j_{n-1}}}\otimes \overline{ι^{j_n}}))\\ =\,& \begin{cases} (-1)^{n-2}γ_{n-1}ι^{j_1+…+j_n} & 3\leq n\leq p\\ 0 & n\geq p+1 \end{cases} \end{align*} and \begin{align*} Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =\,& (-1)^{n(n-1)}m_2(f_{n-1}\otimes f_{1})(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})\\
\ovs{|f_1|=0}& m_2\left(f_{n-1}(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_{n-1}}})\otimes f_{1}(\overline{ι^{j_n}})\right)\\
=\,& \begin{cases} (-1)^{n-2}γ_{n-1}ι^{j_1+…+j_n} & 3\leq n\leq p \\ 0& n\geq p+1 \end{cases} \end{align*} So $Φ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}}) =Ξ_n(\overline{χ^{a_1}ι^{j_1}}\otimes…\otimes\overline{χ^{a_n}ι^{j_n}})$. \end{description} \end{proof}
Now we examine the cases where $a_1=…=a_n=1$: \begin{lemma} \label[lemma]{lem:phichi} For $n\geq 3$, we have $Φ_n(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})=0$ for $\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}∈\mathfrak{B}^{\otimes n}=\{\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}} ∈ ({\operatorname{H}}^*A)^{\otimes n} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,n] \}$. \end{lemma} \begin{proof} We have $Φ_n(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})=0$ since $Φ_n=\sum_{r=0}^{n-2} (-1)^{n-r} f_{n-1}(1^{\otimes r}\otimes m'_2\otimes 1^{\otimes n-r-2})$ and the argument of $m'_2$ is always of the form $\overline{χι^x}\otimes \overline{χι^y}$, whence its result is zero. \end{proof}
\begin{lemma} \label[lemma]{lem:f4} Condition \eqref{finfrel}$[n]$ holds for $n∈[3,p-1]$ and arguments $\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}∈\mathfrak{B}^{\otimes n}=\{\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}} ∈ ({\operatorname{H}}^*A)^{\otimes n} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,n] \}$. \end{lemma} \begin{proof}
For computing $Ξ_n$, we first show that $m_2(f_k\otimes f_{n-k})(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})=0$ for $k∈[2,n-2]$. We will need the following congruence. \begin{align} \label{gradesmismatch} \underbrace{(k(l-1)+l(i+x))}_{\equiv_{p-1} k(l-1)+(p-1)+l(i+x)} - \underbrace{(n-k-1+li')}_{\equiv_{p-1} n-k-1+(p-1)+li'} &\equiv_{p-1} -k+k-n+1 = -(n-1) \nonumber \\& \not\equiv_{p-1} 0 \end{align} The last statement results from $2\leq n\leq p-1$. We set "$\pm$" as a symbol for the (a posteriori irrelevant) signs in the following calculation. For $k∈[2,n-2]$, we have \begin{align*} \hphantom{XXX}m_2&(f_k\otimes f_{n-k})(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})\\ =\,& \pm m_2((-1)^{k-1} γ_kι^{j_1+…+j_k}\otimes (-1)^{n-k-1}γ_{n-k}ι^{j_{k+1}+…+j_n})\\ \ovs{\substack {j_1+…+j_k=:x,\\j_{k+1}+…+j_n=:y}}& \pm γ_kι^x\circ γ_{n-k}ι^{y}\\ =\,& \pm \left(\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{k(l-1)+l(i+x)}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + l(i+x)}^{k-1+(p-1)+li}\right)\\ &\circ \left(\sum\nolimits_{i'\geq 0} \lfloor e_{n-k} \rfloor_{(n-k)(l-1)+l(i'+y)}^{n-k-1+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+k} \rfloor_{(n-k)(l-1) + (p-1) + l(i'+y)}^{n-k-1+(p-1)+li'}\right)\\ \ovs{\eqref{gradesmismatch}} 0. \end{align*} So \begin{align*} Ξ_n(\overline{χι^{j_1}}\otimes…&\otimes\overline{χι^{j_n}})\\
=\,& m_2((-1)^nf_1\otimes f_{n-1}+(-1)^{n(n-1)}f_{n-1}\otimes f_1)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})\\
=\,& m_2((-1)^{n+n|\overline{χι^{j_1}}|}f_1(\overline{χι^{j_{1}}})\otimes f_{n-1}(\overline{χι^{j_2}}\otimes…\otimes\overline{χι^{j_n}})\\
&+ f_{n-1}(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_{n-1}}})\otimes f_1(\overline{χι^{j_n}})) \\ =\,& m_2(χι^{j_1}\otimes (-1)^{n-2}γ_{n-1}ι^{j_2+…+j_n} + (-1)^{n-2}γ_{n-1}ι^{j_1+…+j_{n-1}}\otimes χι^{j_n})\\ =\,& (-1)^n (χι^{j_1}\circ γ_{n-1}ι^{j_2+…+j_n} + γ_{n-1}ι^{j_1+…+j_{n-1}}\circ χι^{j_n})\\ \ovs{\text{P.}\ref{pp:iota}(e),\text{L.}\ref{lem:gamma}}& (-1)^n (χ\circ γ_{n-1} + γ_{n-1}\circ χ)\circ ι^{j_1+…+j_n} \end{align*} \begin{align*} χ\circ γ_{n-1} =\,& \Big(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{(i+1)l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{(i+1)l-1+k}^{il + k}\right\rgroup\right.\\ & \left. + \lfloor e_{p-1}\rfloor_{(i+1)l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{(i+1)l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big) \\ &\circ\Big(\sum\nolimits_{i'\geq 0} \lfloor e_{n-1} \rfloor_{(n-1)(l-1)+li'}^{n-2+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+1} \rfloor_{(n-1)(l-1) + (p-1) + li'}^{n-2+(p-1)+li'}\Big)\\ \overset{\hphantom{\displaystyle{=}}\mathllap{3\leq n\leq p-1}}{\underset{\hphantom{\displaystyle{=}}\mathllap{\natop{k\leadsto n-1}{i'\leadsto i+1}}}{=}}\,& \sum\nolimits_{i\geq 0} \lfloor e_{n,n-1}\circ e_{n-1} \rfloor_{(n-1)(l-1)+l(i+1)}^{il+n-1}\\ &+ \sum\nolimits_{i\geq 0} \lfloor e_{p-n,p-n+1}\circ e_{p-n+1} \rfloor_{(n-1)(l-1) + (p-1) + l(i+1)}^{il+p-1+n-1} \\ =\,&\sum\nolimits_{i\geq 0}\left(\lfloor e_{n,n-1} \rfloor_{n(l-1) + 1 +li}^{il+n-1}+ \lfloor e_{p-n,p-n+1}\rfloor_{n(l-1)+p+li}^{il+p-1+n-1}\right)\\
γ_{n-1}\circ χ =\,&\Big(\sum\nolimits_{i'\geq 0} \lfloor e_{n-1} \rfloor_{(n-1+i'-1)l+2(p-1)-(n-1)}^{n-2+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+1} \rfloor_{(n-1+i')l-n + p}^{n-2+(p-1)+li'}\Big) \\ &\circ\Big(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{(i+1)l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{(i+1)l-1+k}^{il + k}\right\rgroup\right.\\ & \left. + \lfloor e_{p-1}\rfloor_{(i+1)l-1 +(p-1)}^{il+(p-1)}
+ \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{(i+1)l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big)\\ \ovs{\substack{k\leadsto p-n}}& \sum\nolimits_{i'\geq 0} \lfloor e_{n-1} \circ e_{n-1,n} \rfloor_{(n-1+i')l-1+(p-1)+(p-n)}^{n-2+li'} \\ &+ \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+1}\circ e_{p-n+1,p-n} \rfloor_{(n+i')l-1+p-n}^{n-2+(p-1)+li'}\\ =\,& \sum\nolimits_{i'\geq 0} \lfloor e_{n-1,n} \rfloor_{n(l-1)+ i'l}^{n-2+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+1,p-n} \rfloor_{n(l-1) +(p-1)+i'l }^{n-2+(p-1)+li'} \end{align*} So $χ\circ γ_{n-1} + γ_{n-1}\circ χ = m_1(γ_n)$ by \cref{lem:gamma}. Therefore \begin{align*} Ξ_n(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}) =\,& (-1)^n m_1(γ_n)\circ ι^{j_1+…+j_n} \overset{\text{P.}\ref{pp:iota}(c)}{=} (-1)^n m_1(γ_n ι^{j_1+…+j_n})\\ =\,& - m_1((-1)^{n-1}γ_n ι^{j_1+…+j_n})\\ =\,& - m_1\circ f_n(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}). \end{align*} We conclude using \cref{lem:finfrelmod} by \begin{align*} (f_1\circ m'_n + Φ_n)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}) \overset{\text{L.}\ref{lem:phichi},\text{D.}\ref{defall}}{=} 0 = (m_1\circ f_n + Ξ_n)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}). \end{align*}
\end{proof} \begin{lemma} \label[lemma]{lem:f5} Condition \eqref{finfrel}$[p]$ holds for arguments $\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_p}}∈\mathfrak{B}^{\otimes p}=\{\mbox{$\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_p}ι^{j_p}}$} ∈ ({\operatorname{H}}^*A)^{\otimes p} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,p] \}$. \end{lemma} \begin{proof}
Recall that $|ι|=l=2(p-1)$ is even, $|χ|=l-1$ is odd and $|f_i| = 1-i$ by \cref{lem:degree}. We have \begin{align*} Ξ_p(\overline{χι^{j_1}}\otimes…&\otimes\overline{χι^{j_p}}) = \sum\nolimits_{i=1}^{p-1}(-1)^{pi}m_2(f_i\otimes f_{p-i}) (\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_p}})\\ =\,& \sum\nolimits_{i=1}^{p-1}(-1)^{pi+i(1-(p-i))}m_2(f_i(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_i}})\otimes f_{p-i}(\overline{χι^{j_{i+1}}}\otimes…\otimes\overline{χι^{j_p}}))\\ =\,& \sum\nolimits_{i=1}^{p-1}f_i(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_i}})\circ f_{p-i}(\overline{χι^{j_{i+1}}}\otimes…\otimes\overline{χι^{j_p}})\\
\ovs{p\geq 3\vphantom{I_{I_I}}}& χι^{j_1}\circ (-1)^{p-2}γ_{p-1}ι^{j_2+…+j_p} + (-1)^{p-2}γ_{p-1}ι^{j_1+…+j_{p-1}}\circ χι^{j_p}\\ & + \sum\nolimits_{i=2}^{p-2} (-1)^{i-1}γ_iι^{j_1+…+j_i}\circ (-1)^{p-i-1}γ_{p-i}ι^{j_{i+1}+…+j_p}\\ \ovs{\text{P.}\ref{pp:iota}(b)}& (-1)^p\left(χ\circ γ_{p-1} + γ_{p-1}\circ χ + \sum\nolimits_{k=2}^{p-2} γ_k\circ γ_{p-k}\right)\circ ι^{j_1+…+j_p} \end{align*} \begin{align*} χ\circγ_{p-1}=\,& \Big(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{(i+1)l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{(i+1)l-1+k}^{il + k}\right\rgroup\right.\\ &\left. + \lfloor e_{p-1}\rfloor_{(i+1)l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{(i+1)l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big)\\ &\circ\left(\sum\nolimits_{i'\geq 0} \lfloor e_{p-1} \rfloor_{(p-1)(l-1)+li'}^{(p-1)-1+li'}+ \sum\nolimits_{i'\geq 0} \lfloor e_{1} \rfloor_{(p-1)(l-1) + (p-1) + li'}^{-1+2(p-1)+li'}\right)\\ =\,& \sum\nolimits_{i\geq 0} \lfloor e_{p-1} \rfloor_{(p-1)(l-1)+l(i+1)}^{il+(p-1)}+ \sum\nolimits_{i\geq 0} \lfloor e_{1} \rfloor_{(p-1)(l-1) + (p-1) + li}^{il}\\ =\,& \sum\nolimits_{i\geq 0} \lfloor e_{p-1} \rfloor_{(p+i-1)l+(p-1)}^{il+(p-1)}+ \sum\nolimits_{i\geq 0} \lfloor e_{1} \rfloor_{(p+i-1)l}^{il} \end{align*} \begin{align*} γ_{p-1}\circ χ=\,& \left(\sum\nolimits_{i'\geq 0} \lfloor e_{p-1} \rfloor_{(p+i'-2)l+(p-1)}^{(p-1)-1+li'}+ \sum\nolimits_{i'\geq 0} \lfloor e_{1} \rfloor_{(p+i'-1)l}^{-1+2(p-1)+li'}\right) \\ &\circ\Big(\sum\nolimits_{i\geq 0}\left(\lfloor e_1 \rfloor_{(i+1)l-1}^{il} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{k+1,k}\rfloor_{(i+1)l-1+k}^{il + k}\right\rgroup\right.\\ &\left. + \lfloor e_{p-1}\rfloor_{(i+1)l-1 +(p-1)}^{il+(p-1)} + \left\lgroup\sum\nolimits_{k=1}^{p-2}\lfloor e_{p -k-1,p-k}\rfloor_{(i+1)l-1+(p-1)+k}^{il +(p-1)+ k}\right\rgroup\right)\Big)\\ =\,& \sum\nolimits_{i'\geq 0} \lfloor e_{p-1} \rfloor_{(p+i'-1)l-1+(p-1)}^{(p-1)-1+li'}+ \sum\nolimits_{i'\geq 0} \lfloor e_{1} \rfloor_{(p+i')l-1}^{-1+2(p-1)+li'}\\ =\,& \sum\nolimits_{i'\geq 0} \lfloor e_{p-1} \rfloor_{(p+i'-1)l+p-2}^{p-2+i'l}+ \sum\nolimits_{i'\geq 0} \lfloor e_{1} \rfloor_{(p+i'-1)l+l-1}^{i'l+l-1} \end{align*} \begin{align*} γ_k\circ γ_{p-k}=\,& \left(\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{(i+k-1)l+l-k}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{(i+k)l+ (p-1)-k}^{k-1+(p-1)+li}\right)\\ &\circ\left(\sum\nolimits_{i'\geq 0} \lfloor e_{p-k} \rfloor_{(p-k)(l-1)+li'}^{p-k-1+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{k} \rfloor_{(p-k)(l-1) + (p-1) + li'}^{-k+2(p-1)+li'}\right)\\ =\,&\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{(p-k)(l-1)+(p-1)+l(i+k-1)}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{(p-k)(l-1)+l(i+k)}^{k-1+(p-1)+li}\\ =\,&\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{(p-k+i+k-1)l -(p-k)+(p-1)}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{(p-k+i+k)l-(p-k)}^{k-1+(p-1)+li}\\ =\,&\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{(p+i-1)l + k-1}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{(p+i-1)l+ k-1 + (p-1)}^{k-1+(p-1)+li}\,. 
\end{align*} Thus \begin{align*} χ\circ γ_{p-1} +& γ_{p-1}\circ χ + \sum\nolimits_{k=2}^{p-2} γ_k\circ γ_{p-k}\\ =\,&\sum\nolimits_{i\geq 0} \sum\nolimits_{k=0}^{p-2}\left(\lfloor e_{k+1} \rfloor_{(p+i-1)l + k}^{k+li} + \lfloor e_{p-k-1} \rfloor_{(p+i-1)l+ k + (p-1)}^{k+(p-1)+li}\right)\\ =\,& \sum\nolimits_{i\geq 0} \sum\nolimits_{k'=0}^{l-1} \lfloor e_{ω(k')}\rfloor_{(p-1+i)l+k'}^{k'+li} \overset{\text{P.}\ref{pp:iota}(a)}{=} ι^{p-1} \end{align*} and \begin{align*} Ξ_p(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_p}})=(-1)^pι^{p-1+j_1+…+j_p}\,. \end{align*} So we conclude using \cref{lem:finfrelmod} by \begin{align*} \begin{array}{rcl} (f_1\circ m'_p + Φ_p)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_p}}) &\overset{\text{L.}\ref{lem:phichi},\text{D.}\ref{defall}}{=} &(-1)^p ι^{p-1+j_1+…+j_p} \\ &\overset{\text{D.}\ref{defall}}{=}& (m_1\circ f_p + Ξ_p)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_p}}). \end{array} \end{align*} \end{proof} \begin{lemma} \label[lemma]{lem:f6} Condition \eqref{finfrel}$[n]$ holds for $n∈[p+1,2(p-1)]$ and arguments\\* $\mbox{$\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}$} ∈\mathfrak{B}^{\otimes n}=\{\overline{χ^{a_1}ι^{j_1}}\otimes … \otimes \overline{χ^{a_n}ι^{j_n}} ∈ ({\operatorname{H}}^*A)^{\otimes n} \mid a_i∈\{0,1\} \text{ and } j_i∈\mathbb{Z}_{\geq 0} \text{ for all }i∈[1,n] \}$. \end{lemma} \begin{proof} As $f_k=0$ for $k\geq p$, we have \begin{align*} Ξ_n&(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}) = \sum\nolimits_{k=n-p+1}^{p-1} (-1)^{nk}m_2(f_k\otimes f_{n-k}) (\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}) \end{align*} The right-hand side is a linear combination of terms of the form $γ_k\circ γ_{n-k}$ for $k∈[n-p+1,p-1]$. We have
\begin{align*} γ_k\circ γ_{n-k} =\,& \left(\sum\nolimits_{i\geq 0} \lfloor e_k \rfloor_{k(l-1)+li}^{k-1+li} + \sum\nolimits_{i\geq 0} \lfloor e_{p-k} \rfloor_{k(l-1) + (p-1) + li}^{k-1+(p-1)+li}\right)\\ &\circ\left(\sum\nolimits_{i'\geq 0} \lfloor e_{n-k} \rfloor_{(n-k)(l-1)+li'}^{n-k-1+li'} + \sum\nolimits_{i'\geq 0} \lfloor e_{p-n+k} \rfloor_{(n-k)(l-1) + (p-1) + li'}^{n-k-1+(p-1)+li'}\right) \end{align*} A necessary condition for that term to be non-zero is $k(l-1)\equiv_{p-1}n-k-1$ as $l=2(p-1)$. We have \begin{align*} k(l-1)-(n-k-1) \equiv_{p-1}\,& -k-n+k+1 = 1-n \not\equiv_{p-1} 0, \end{align*} since $p\leq n-1\leq 2(p-1)-1$. So $γ_k\circ γ_{n-k}= 0$ and $Ξ_n(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})=0$. We conclude using \cref{lem:finfrelmod} by \begin{align*} (f_1\circ m'_n + Φ_n)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}) \overset{\text{L.}\ref{lem:phichi}, \text{D.}\ref{defall}}{=} 0 \overset{\text{D.}\ref{defall}}{=} (m_1\circ f_n + Ξ_n)(\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}}). \end{align*} \end{proof} One could formulate a lemma similar to \cref{lem:f6} for the case $n> 2(p-1)$ as then the sum $ \sum_{k=n-p+1}^{p-1} (-1)^{nk} m_2(f_k\otimes f_{n-k}) (\overline{χι^{j_1}}\otimes…\otimes\overline{χι^{j_n}})$ is in fact empty. Instead we use \cref{lem:finffinite} to prove \eqref{finfrel}[$n$] for $n>2p-2$: \begin{proof}[Proof of \cref{tm:statement}] \cref{lem:f1,lem:f2,lem:f3,lem:f4,lem:f5,lem:f6} ensure that \eqref{finfrel}[$n$] holds for \mbox{$n∈[1,2p-2]$}. Then \cref{lem:finffinite} with $k=p$ proves that \eqref{finfrel}[$n$] holds for all $n∈[1,∞]$, cf. also \cref{defall}. By \cref{lem:f1inj}, $f_1$ is injective. By \cref{lem:degree}, the degrees are as required in \cref{lem:aaut}. \cref{lem:aaut} proves that $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ is an ${\rm A}_∞$-algebra and $(f_n)_{n\geq 1}$ is an ${\rm A}_∞$-morphism from $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ to $(A, (m_n)_{n\geq 1})$. By \cref{lem:f1}, we have $m'_1=0$. Thus $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ is a minimal ${\rm A}_∞$-algebra. By \cref{lem:f1}, the complex morphism $f_1:({\operatorname{H}}^* A,m'_1)→(A,m_1)$ is a quasi-isomorphism which induces the identity in homology. So the ${\rm A}_∞$-morphism $(f_n)_{n\geq 1}: ({\operatorname{H}}^* A, (m'_n)_{n\geq 1}) → (A, (m_n)_{n\geq 1})$ is a quasi-isomorphism and the proof of \cref{tm:statement} is complete. \end{proof}
\subsection{At the prime \texorpdfstring{$2$}{2}} \label{prime2} We examine the case at the prime $2$.
We use a direct approach. Note that $\Sy_2$ is a cyclic group so the theory of cyclic groups applies as well.
We have $ {\mathbb{F}_{\!2}} \Sy_2= \{0, (\text{id}),(1,2),(\text{id})+(1,2)\}$. We have maps given by \begin{align*} \begin{array}{rrcl} ε:& {\mathbb{F}_{\!2}} \Sy_2 & \longrightarrow & {\mathbb{F}_{\!2}} \\ & a(\text{id})+b(1,2) &\longmapsto & a+b \\
D:& {\mathbb{F}_{\!2}} \Sy_2 & \longrightarrow & {\mathbb{F}_{\!2}} \Sy_2\\ & a(\text{id})+b(1,2) &\longmapsto & (a+b)\left((\text{id})+(1,2)\right). \end{array} \end{align*} We see that $ε$ is surjective and $\ker ε = \ker D = \im D = \{0, (\text{id})+(1,2)\}$. The maps $ε$ and $D$ are $ {\mathbb{F}_{\!2}} \Sy_2$-linear, where $ {\mathbb{F}_{\!2}} $ is the $ {\mathbb{F}_{\!2}} \Sy_2$-module that corresponds to the trivial representation of $\Sy_2$. So we have a projective resolution of $ {\mathbb{F}_{\!2}} $ with augmentation $ε$ by \begin{align*} \pres {\mathbb{F}_{\!2}} := (\cdots \xrightarrow{D} \underbrace{ {\mathbb{F}_{\!2}} \Sy_2}_{1} \xrightarrow{D} \underbrace{ {\mathbb{F}_{\!2}} \Sy_2}_{0} \rightarrow \underbrace{0}_{-1} \rightarrow \cdots), \end{align*} where the degrees are written below.
We set $e_1$ to be the identity on $ {\mathbb{F}_{\!2}} \Sy_2$.
Let $A:=\Hom^*_{ {\mathbb{F}_{\!2}} \Sy_2}(\pres {\mathbb{F}_{\!2}} ,\pres {\mathbb{F}_{\!2}} )$ and let the ${\rm A}_∞$-structure on $A$ be $(m_n)_{n\geq 1}$ (cf. \cref{lem:cai}). Recall the conventions concerning $\Hom_B^k(C,C')$ for complexes $C,C'$ and $k∈\mathbb{Z}$. \begin{lemma} An $ {\mathbb{F}_{\!2}} $-basis of ${\operatorname{H}}^*A$ is given by $\{\overline{ξ^j} \mid j\geq 0\}$ where \begin{align*} ξ := \sum\nolimits_{i\geq 0} \lfloor e_1\rfloor_{i+1}^i∈A. \end{align*} \end{lemma} \begin{proof} Straightforward induction yields, for $j\geq 0$, \begin{align*} ξ^j = \sum\nolimits_{i\geq 0} \lfloor e_1 \rfloor_{i+j}^i\,. \end{align*} We have \begin{align*} m_1(ξ^j) =\,& d \circ ξ^j - (-1)^j ξ^j \circ d = d \circ ξ^j + ξ^j \circ d\\ =\,& \left(\sum\nolimits_{i\geq 0} \lfloor D\rfloor_{i+1}^i\right)\circ \left(\sum\nolimits_{i\geq 0} \lfloor e_1 \rfloor_{i+j}^i\right) + \left(\sum\nolimits_{i\geq 0} \lfloor e_1 \rfloor_{i+j}^i\right)\circ \left(\sum\nolimits_{i\geq 0} \lfloor D\rfloor_{i+1}^i\right)\\ =\,& \sum\nolimits_{i\geq 0} \lfloor D\rfloor_{i+j+1}^i + \sum\nolimits_{i\geq 0} \lfloor D\rfloor_{i+j+1}^i = 0, \end{align*} so $ξ^j$ is a cycle. As $\Hom_{ {\mathbb{F}_{\!2}} \Sy_2}( {\mathbb{F}_{\!2}} \Sy_2, {\mathbb{F}_{\!2}} ) = \{0,ε\}$ and $ε\circ D=0$, the differentials of $\Hom^*(\pres {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$ (cf.\ \cref{prpr-prm}) are all zero. So $\{ε\}$ is an $ {\mathbb{F}_{\!2}} $-basis of ${\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} )$ for $k\geq 0$. Since, in the notation of \cref{prpr-prm}, $\bar{Ψ}_k(\overline{ξ^k}) = ε$, the set $\{\overline{ξ^k}\}$ is an $ {\mathbb{F}_{\!2}} $-basis of ${\operatorname{H}}^k\Hom^*(\pres {\mathbb{F}_{\!2}} ,\pres {\mathbb{F}_{\!2}} )$ for $k\geq 0$. For $k<0$ we have ${\operatorname{H}}^k\Hom^*(\pres {\mathbb{F}_{\!2}} ,\pres {\mathbb{F}_{\!2}} ) \cong {\operatorname{H}}^k \Hom^*(\pres {\mathbb{F}_{\!2}} , {\mathbb{F}_{\!2}} ) = 0$. So $\{\overline{ξ^j} \mid j\geq 0\}$ is an $ {\mathbb{F}_{\!2}} $-basis of ${\operatorname{H}}^* A$. \end{proof} We define families of maps $(f_n:({\operatorname{H}}^*A)^{\otimes n}→A)_{n\geq 1}$ and $(m'_n:({\operatorname{H}}^*A)^{\otimes n} → {\operatorname{H}}^* A)_{n\geq 1}$ as follows. $f_1$ and $m'_2$ are given on a basis by \begin{align*} f_1(\overline{ξ^j}) :=\,& ξ^j &&\text{for $j\geq 0$}\\ m'_2(\overline{ξ^j}\otimes \overline{ξ^k}) :=\,& \overline{ξ^{j+k}} &&\text{for $j,k\geq 0$}. \end{align*}
All other maps are set to zero.
It is straightforward to check that $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ is a pre-${\rm A}_∞$-algebra and $(f_n)_{n\geq 1}$ is a pre-$A_∞$-morphism from ${\operatorname{H}}^*A$ to $A$. As $m'_2$ is associative, $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ is a dg-algebra, so in particular an ${\rm A}_∞$-algebra. As $f_k=0$ for $k\neq 1$, \eqref{finfrel}[$n$] simplifies to \begin{align*} f_1\circ m'_n =\,& m_n\circ (\underbrace{f_1\otimes \cdots \otimes f_1}_{n\text{ factors}}). \end{align*} As $m'_n=0$ and $m_n=0$ for $n\geq 3$, \eqref{finfrel}[$n$] is satisfied for all $n\geq 3$. For $n∈\{1,2\}$, we have \begin{align*} f_1\circ m'_1 =\,& m_1\circ f_1\\ f_1\circ m'_2 =\,& m_2(f_1\otimes f_1). \end{align*} The second equation follows immediately from the definition of $m'_2$ and $f_1$. The first equation holds as $m'_1=0$ and the images of $f_1$ are all cycles. So \eqref{finfrel}[$n$] holds for all $n$ and $(f_n)_{n\geq 1}$ is an ${\rm A}_∞$-morphism from $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ to $(A, (m_n)_{n\geq 1})$. By the construction of $f_1$, it induces the identity on homology. Thus $({\operatorname{H}}^* A, (m'_n)_{n\geq 1})$ is a minimal model of $(A, (m_n)_{n\geq 1})$.
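In particular, the minimal model obtained here is simply the polynomial algebra $ {\mathbb{F}_{\!2}} [\overline{ξ}]$ on the degree-one generator $\overline{ξ}$, with $m'_2$ given by multiplication and $m'_n = 0$ for all $n \neq 2$.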
\begin{bem}[Comparison with primes $p\geq 3$] \label[bem]{bem:comp} At a prime $p \geq 3$, we have constructed a projective resolution with period length $l=2(p-1)$ in \eqref{presform1}.
If one constructs a projective resolution of $\mathbb{Z}_{(2)}$ analogously to the case $p\geq 3$, one obtains a sequence of the form \[\cdots \rightarrow \mathbb{Z}_{(2)}\Sy_2 \xrightarrow{\hat e_{2,2}^*} \mathbb{Z}_{(2)}\Sy_2\xrightarrow{\hat e_{2,2}}\mathbb{Z}_{(2)}\Sy_2 \xrightarrow{\hat e_{2,2}^*} \mathbb{Z}_{(2)}\Sy_2\xrightarrow{\hat e_{2,2}}\mathbb{Z}_{(2)}\Sy_2 \rightarrow 0 \rightarrow \cdots\] with a period length of $2$, where \begin{align*} \hat e_{2,2}\colon&\,(\text{id})\,\longmapsto\, (\text{id})-(1,2)\\ \hat e_{2,2}^*\colon&\,(\text{id})\,\longmapsto\, (\text{id})+(1,2). \end{align*} However, modulo $2$ the differentials $\hat e_{2,2}$ and $\hat e_{2,2}^*$ reduce to the same map $D: {\mathbb{F}_{\!2}} \Sy_2\rightarrow {\mathbb{F}_{\!2}} \Sy_2$, so we obtain a period length of $1$.
The maps $ι$ resp.\ $χ$ from \cref{pp:iota} may be identified with $ξ^2$ resp.\ $ξ$. This way, the definition of $m'_2$ at the prime $2$ is readily compatible with \cref{defall}. \end{bem}
\kommentar{\appendix \nsection{On the bar construction}
\subsection{Applications. Kadeishvili's algorithm and the minimality theorem.} In this subsection we will discuss the construction of minimal models of ${\rm A}_∞$-algebras. Firstly, \cref{lem:aaut} states that certain pre-${\rm A}_n$-structures and pre-${\rm A}_n$-morphisms that arise in the construction of minimal models are actually ${\rm A}_n$-structures and ${\rm A}_n$-morphisms. Secondly, we give a proof of \cref{tm:kadeishvili}. We will review Kadeishvili's original proof of \cite{Ka82} as it gives an algorithm for constructing minimal models which can be used for the direct calculation of examples. Note that Lefèvre-Hasegawa has given a generalization of the minimality theorem, see \cite[Théorème 1.4.1.1]{Le03}, which we will not cover.
}
\end{document} | arXiv |
Trapped surface
Closed trapped surfaces are a concept used in black hole solutions of general relativity[1] to describe the region inside an event horizon. Roger Penrose defined the notion of closed trapped surfaces in 1965.[2] A trapped surface is one on which even outward-directed light fails to move away from the black hole. The boundary of the union of all trapped surfaces around a black hole is called an apparent horizon.
A related term, trapped null surface, is often used interchangeably. However, when discussing causal horizons, trapped null surfaces are defined using only null vector fields, which give rise to null surfaces, whereas marginally trapped surfaces may be spacelike, timelike or null.[3]
Definition
They are spacelike surfaces of bounded extent (topological spheres, tubes, etc.) whose area tends to decrease locally along any possible future direction, with a dual definition with respect to the past. A trapped surface is a spacelike surface of co-dimension 2 in a Lorentzian spacetime. It follows[4] that any normal vector can be expressed as a linear combination of two future directed null vectors, normalised by:
k+ · k− = −2
The k+ vector is directed “outwards” and k− “inwards”. The set of all such vectors engenders one outgoing and one ingoing null congruence. The surface is designated trapped if the cross sections of both congruences decrease in area as they exit the surface; and this is apparent in the mean curvature vector, which is:
H^α = −θ+ (k−)^α − θ− (k+)^α
The surface is trapped if both the null expansions θ± are negative, signifying that the mean curvature vector is timelike and future directed. The surface is marginally trapped if the outer expansion θ+ = 0 and the inner expansion θ− ≤ 0.
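A standard example: in the Schwarzschild spacetime, every sphere of constant radius r with r < 2M (in geometric units, inside the event horizon) is a trapped surface, since both of its null expansions are negative there, while cross sections of the horizon r = 2M are marginally trapped, with θ+ = 0 and θ− < 0.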
Trapped null surface
A trapped null surface is a set of points defined in the context of general relativity as a closed surface on which outward-pointing light rays are actually converging (moving inwards).
Trapped null surfaces are used in the definition of the apparent horizon which typically surrounds a black hole.
Definition
We take a (compact, orientable, spacelike) surface, and find its outward pointing normal vectors. The basic picture to think of here is a ball with pins sticking out of it; the pins are the normal vectors.
Now we look at light rays that are directed outward, along these normal vectors. The rays will either be diverging (the usual case one would expect) or converging. Intuitively, if the light rays are converging, this means that the light is moving backwards inside of the ball. If all the rays around the entire surface are converging, we say that there is a trapped null surface.
More formally, if every null congruence orthogonal to a spacelike two-surface has negative expansion, then the surface is said to be trapped.
See also
• Null hypersurface
• Raychaudhuri equation
References
1. Senovilla, Jose M. M. (September 15, 2011). "Trapped Surfaces". International Journal of Modern Physics D. 20 (11): 2139–2168. arXiv:1107.1344. Bibcode:2011IJMPD..20.2139S. doi:10.1142/S0218271811020354. S2CID 119249809.
2. Penrose, Roger (January 1965). "Gravitational collapse and space-time singularities". Phys. Rev. Lett. 14 (3): 57–59. Bibcode:1965PhRvL..14...57P. doi:10.1103/PhysRevLett.14.57.
3. Nielsen, Alex B. (February 10, 2014). "Revisiting Vaidya Horizons". Galaxies. 2 (1): 62–71. Bibcode:2014Galax...2...62N. doi:10.3390/galaxies2010062.
4. Bengtsson, Ingemar (December 22, 2011). "Some Examples of Trapped Surfaces". arXiv:1112.5318 [gr-qc].
• S. W. Hawking & G. F. R. Ellis (1975). The large scale structure of space-time. Cambridge University Press. This is the gold-standard reference on black holes because of its place in history. It is also quite thorough.
• Robert M. Wald (1984). General Relativity. University of Chicago Press. ISBN 9780226870335. This book is somewhat more up-to-date.
Roger Penrose
Books
• The Emperor's New Mind (1989)
• Shadows of the Mind (1994)
• The Road to Reality (2004)
• Cycles of Time (2010)
• Fashion, Faith, and Fantasy in the New Physics of the Universe (2016)
Coauthored books
• The Nature of Space and Time (with Stephen Hawking) (1996)
• The Large, the Small and the Human Mind (with Abner Shimony, Nancy Cartwright and Stephen Hawking) (1997)
• White Mars or, The Mind Set Free (with Brian W. Aldiss) (1999)
Academic works
• Techniques of Differential Topology in Relativity (1972)
• Spinors and Space-Time: Volume 1, Two-Spinor Calculus and Relativistic Fields (with Wolfgang Rindler) (1987)
• Spinors and Space-Time: Volume 2, Spinor and Twistor Methods in Space-Time Geometry (with Wolfgang Rindler) (1988)
Concepts
• Twistor theory
• Spin network
• Abstract index notation
• Black hole bomb
• Geometry of spacetime
• Cosmic censorship
• Weyl curvature hypothesis
• Penrose inequalities
• Penrose interpretation of quantum mechanics
• Moore–Penrose inverse
• Newman–Penrose formalism
• Penrose diagram
• Penrose–Hawking singularity theorems
• Penrose inequality
• Penrose process
• Penrose tiling
• Penrose triangle
• Penrose stairs
• Penrose graphical notation
• Penrose transform
• Penrose–Terrell effect
• Orchestrated objective reduction/Penrose–Lucas argument
• FELIX experiment
• Trapped surface
• Andromeda paradox
• Conformal cyclic cosmology
Related
• Lionel Penrose (father)
• Oliver Penrose (brother)
• Jonathan Penrose (brother)
• Shirley Hodgson (sister)
• John Beresford Leathes (grandfather)
• Illumination problem
• Quantum mind
| Wikipedia |
\begin{document}
\title{Stability of ideal lattices from quadratic number fields} \author{Lenny Fukshansky}\thanks{The author was partially supported by the NSA Young Investigator Grant \#1210223 and a collaboration grant from the Simons Foundation (\#208969 to Lenny Fukshansky).}
\address{Department of Mathematics, 850 Columbia Avenue, Claremont McKenna College, Claremont, CA 91711} \email{[email protected]}
\subjclass[2010]{11H06, 11R11, 11E16, 11H55} \keywords{semi-stable lattices, ideal lattices, quadratic number fields}
\begin{abstract} We study semi-stable ideal lattices coming from real quadratic number fields. Specifically, we demonstrate infinite families of semi-stable and unstable ideal lattices of trace type, establishing explicit conditions on the canonical basis of an ideal that ensure stability; in particular, our result implies that an ideal lattice of trace type coming from a real quadratic field is semi-stable with positive probability. We also briefly discuss the connection between stability and well-roundedness of Euclidean lattices. \end{abstract}
\maketitle
\def{\mathcal A}{{\mathcal A}} \def{\mathfrak A}{{\mathfrak A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathfrak E}{{\mathfrak E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathfrak I}{{\mathfrak I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathfrak K}{{\mathfrak K}} \def{\mathcal L}{{\mathcal L}} \def{\mathfrak L}{{\mathfrak L}} \def{\mathcal M}{{\mathcal M}} \def{\mathfrak m}{{\mathfrak m}} \def{\mathfrak M}{{\mathfrak M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathfrak O}{{\mathfrak O}} \def{\mathfrak P}{{\mathfrak P}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal P_N({\mathbb R})}{{\mathcal P_N({\mathbb R})}} \def{\mathcal P^M_N({\mathbb R})}{{\mathcal P^M_N({\mathbb R})}} \def{\mathcal P^d_N({\mathbb R})}{{\mathcal P^d_N({\mathbb R})}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \def{\mathcal H}{{\mathcal H}} \def{\mathbb C}{{\mathbb C}} \def{\mathbb N}{{\mathbb N}} \def{\mathbb P}{{\mathbb P}} \def{\mathbb Q}{{\mathbb Q}} \def{\mathbb Q}{{\mathbb Q}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb A}{{\mathbb A}} \def{\mathbb F}{{\mathbb F}} \def{\it \Delta}{{\it \Delta}} \def{\mathfrak K}{{\mathfrak K}} \def{\overline{\mathbb Q}}{{\overline{\mathbb Q}}} \def{\overline{K}}{{\overline{K}}} \def{\overline{Y}}{{\overline{Y}}} \def{\overline{\mathfrak K}}{{\overline{\mathfrak K}}} \def{\overline{U}}{{\overline{U}}} \def{\varepsilon}{{\varepsilon}} \def{\hat \alpha}{{\hat \alpha}} \def{\hat \beta}{{\hat \beta}} \def{\tilde \gamma}{{\tilde \gamma}} \def{\tfrac12}{{\tfrac12}} \def{\boldsymbol e}{{\boldsymbol e}} \def{\boldsymbol e_i}{{\boldsymbol e_i}} \def{\boldsymbol c}{{\boldsymbol c}} \def{\boldsymbol m}{{\boldsymbol m}} \def{\boldsymbol k}{{\boldsymbol k}} \def{\boldsymbol i}{{\boldsymbol i}} \def{\boldsymbol l}{{\boldsymbol l}} \def{\boldsymbol q}{{\boldsymbol q}} \def{\boldsymbol u}{{\boldsymbol u}} \def{\boldsymbol t}{{\boldsymbol t}} \def{\boldsymbol s}{{\boldsymbol s}} \def{\boldsymbol v}{{\boldsymbol v}} \def{\boldsymbol w}{{\boldsymbol w}} \def{\boldsymbol x}{{\boldsymbol x}} \def{\boldsymbol X}{{\boldsymbol X}} \def{\boldsymbol z}{{\boldsymbol z}} \def{\boldsymbol y}{{\boldsymbol y}} \def{\boldsymbol Y}{{\boldsymbol Y}} \def{\boldsymbol L}{{\boldsymbol L}} \def{\boldsymbol a}{{\boldsymbol a}} \def{\boldsymbol b}{{\boldsymbol b}} \def{\boldsymbol\eta}{{\boldsymbol\eta}} \def{\boldsymbol\xi}{{\boldsymbol\xi}} \def{\boldsymbol 0}{{\boldsymbol 0}} \def{\boldsymbol 1}{{\boldsymbol 1}} \def{\boldsymbol 1}_L{{\boldsymbol 1}_L} \def\varepsilon{\varepsilon} \def\boldsymbol\varphi{\boldsymbol\varphi} \def\boldsymbol\psi{\boldsymbol\psi} \def\operatorname{rank}{\operatorname{rank}} \def\operatorname{Aut}{\operatorname{Aut}} \def\operatorname{lcm}{\operatorname{lcm}} \def\operatorname{sgn}{\operatorname{sgn}} \def\operatorname{span}{\operatorname{span}} \def\operatorname{mod}{\operatorname{mod}} \def\operatorname{Norm}{\operatorname{Norm}} \def\operatorname{dim}{\operatorname{dim}} \def\operatorname{det}{\operatorname{det}} \def\operatorname{Vol}{\operatorname{Vol}} \def\operatorname{rk}{\operatorname{rk}} \def\operatorname{ord}{\operatorname{ord}} \def\operatorname{ker}{\operatorname{ker}} \def\operatorname{div}{\operatorname{div}} 
\def\operatorname{Gal}{\operatorname{Gal}} \def\operatorname{GL}{\operatorname{GL}} \def\operatorname{SNR}{\operatorname{SNR}} \def\operatorname{WR}{\operatorname{WR}} \def\operatorname{IWR}{\operatorname{IWR}} \def\operatorname{\left< \Gamma \right>}{\operatorname{\left< \Gamma \right>}} \def\operatorname{Sim_{WR}(\Lambda_h)}{\operatorname{Sim_{WR}(\Lambda_h)}} \def\operatorname{C_h}{\operatorname{C_h}} \def\operatorname{C_h(\theta)}{\operatorname{C_h(\theta)}} \def\operatorname{\left< \Gamma_{\theta} \right>}{\operatorname{\left< \Gamma_{\theta} \right>}} \def\operatorname{\left< \Gamma_{m,n} \right>}{\operatorname{\left< \Gamma_{m,n} \right>}} \def\operatorname{\Omega_{\theta}}{\operatorname{\Omega_{\theta}}} \def\operatorname{mn}{\operatorname{mn}} \def\operatorname{disc}{\operatorname{disc}} \def\operatorname{\tau_{\rho'}}{\operatorname{\tau_{\rho'}}}
\section{Introduction and statement of results} \label{intro}
Let $\Lambda \subset {\mathbb R}^n$ be a lattice of rank $n \geq 2$. For each $1 \leq i \leq n$, the $i$-th successive minimum of $\Lambda$ is defined as $$\lambda_i = \min \left\{ \lambda \in {\mathbb R}_{>0} : \operatorname{dim} \left( \operatorname{span}_{{\mathbb R}} \left\{ \Lambda \cap B_n(\lambda) \right\} \right) \geq i \right\},$$ where $B_n(\lambda)$ is a closed ball of radius $\lambda$ centered at the origin in~${\mathbb R}^n$. Then clearly \begin{equation} \label{s-min} \lambda_1 \leq \dots \leq \lambda_n, \end{equation} and we say that $\Lambda$ is well-rounded (abbreviated WR) if there is equality throughout in~\eqref{s-min}. Two lattices $\Lambda$ and $\Omega$ are said to be similar, written $\Lambda \sim \Omega$, if there exists a positive real number $\gamma$ and an $n \times n$ real orthogonal matrix~$U$ such that~$\Lambda = \gamma U \Omega$. It is easy to see that ratios of successive minima, and hence well-roundedness, are preserved under similarity.
On the other hand, the lattice $\Lambda$ is called semi-stable if for each sublattice $\Omega \subseteq \Lambda$, \begin{equation} \label{semi_def} \operatorname{det}(\Lambda)^{1/\operatorname{rk}(\Lambda)} \leq \operatorname{det}(\Omega)^{1/\operatorname{rk}(\Omega)}. \end{equation} For instance, when $\operatorname{rk}(\Lambda)=2$ the defining inequality~\eqref{semi_def} can be restated as \begin{equation} \label{semi_def-2} \lambda_1 \geq \operatorname{det}(\Lambda)^{1/2}, \end{equation}
since for each sublattice $\Omega = \operatorname{span}_{{\mathbb Z}} \left\{ {\boldsymbol z} \right\} \subset \Lambda$ of rank~1, $\operatorname{det}(\Omega) = \|{\boldsymbol z}\| \geq \lambda_1$. Semi-stability, the same as well-roundedness, is preserved under similarity. If a lattice is not semi-stable, we will say that it is unstable.
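For instance, ${\mathbb Z}^2$ is semi-stable, since $\lambda_1 = 1 = \operatorname{det}({\mathbb Z}^2)^{1/2}$; on the other hand, for any $t > 1$ the lattice spanned by $(1,0)$ and $(0,t)$ is unstable, since its first successive minimum equals $1 < t^{1/2}$, so~\eqref{semi_def-2} fails.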
The notion of semi-stability was originally introduced by Stuhler~\cite{stuhler} in the context of reduction theory and later used by Grayson~\cite{grayson} in the study of arithmetic subgroups of semi-simple algebraic groups (see also~\cite{casselman} for an excellent survey of Stuhler's and Grayson's work). As indicated in~\cite{andre}, semi-stability heuristically means that the successive minima are not far from each other (see~\cite{borek} for a detailed investigation of this connection), i.e., inequality~\eqref{s-min} is not far from equality. As a first observation, however, we note that the converse is not true; in other words, successive minima being close to each other does not necessarily imply stability. Specifically, we prove the following lemma.
\begin{lem} \label{wr_stable} All WR full-rank lattices in~${\mathbb R}^2$ are semi-stable. On the other hand, for each $n \geq 3$ there exist infinitely many similarity classes of unstable WR lattices of rank~$n$ in ${\mathbb R}^n$. \end{lem}
\proof First suppose that $\Lambda \subset {\mathbb R}^2$ is WR. Then there exists a basis ${\boldsymbol x}_1,{\boldsymbol x}_2$ for $\Lambda$ consisting of vectors corresponding to successive minima, i.e.
$$\lambda_1 = \|{\boldsymbol x}_1\| = \|{\boldsymbol x}_2\| = \lambda_2.$$ Let $\theta$ be the angle between these vectors, then
$$\operatorname{det}(\Lambda) = \|{\boldsymbol x}_1\| \|{\boldsymbol x}_2\| \sin \theta = \lambda_1^2 \sin \theta \leq \lambda_1^2,$$ and so $\Lambda$ is semi-stable by~\eqref{semi_def-2}. This shows that all WR lattices in~${\mathbb R}^2$ are semi-stable.
Next suppose $n \geq3$ and let ${\boldsymbol e}_1,\dots,{\boldsymbol e}_n$ be the standard basis vectors in~${\mathbb R}^n$. We construct a family of examples of WR lattices of rank $n$ in ${\mathbb R}^n$, which are unstable. From our simple construction, it becomes immediately clear that many other such examples are possible. Let $\theta \in [\pi/3,\pi/2)$, and let $${\boldsymbol x}_{\theta} = \cos \theta {\boldsymbol e}_1 + \sin \theta {\boldsymbol e}_2,$$ and define $$\Lambda_{\theta} = \operatorname{span}_{{\mathbb Z}} \left\{ {\boldsymbol e}_1,{\boldsymbol x}_{\theta},{\boldsymbol e}_3,\dots,{\boldsymbol e}_n \right\}.$$ It is easy to see that $\Lambda_{\theta}$ is WR with $$\lambda_1= \dots = \lambda_n = 1,$$ where ${\boldsymbol e}_1,{\boldsymbol x}_{\theta},{\boldsymbol e}_3,\dots,{\boldsymbol e}_n$ are the vectors corresponding to successive minima. Consider a sublattice $\Omega_{\theta} = \operatorname{span}_{{\mathbb Z}} \left\{ {\boldsymbol e}_1,{\boldsymbol x}_{\theta} \right\} \subset \Lambda$ of rank 2, and notice that $$\operatorname{det}(\Lambda_{\theta})^{1/n} = \left( \sin \theta \right)^{1/n} > \left( \sin \theta \right)^{1/2} = \operatorname{det}(\Omega_{\theta})^{1/2},$$ since $\sqrt{3}/2 \leq \sin \theta < 1$. Hence $\Lambda_{\theta}$ is unstable, and two such lattices $\Lambda_{\theta_1}$ and $\Lambda_{\theta_2}$ are similar if and only if $\theta_1 = \theta_2$. \endproof
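For a concrete instance of this construction, take $n = 3$ and $\theta = \pi/3$: then $\operatorname{det}(\Lambda_{\theta})^{1/3} = \left( \sqrt{3}/2 \right)^{1/3} \approx 0.953$, while the rank-2 sublattice $\Omega_{\theta}$ has $\operatorname{det}(\Omega_{\theta})^{1/2} = \left( \sqrt{3}/2 \right)^{1/2} \approx 0.931$, so inequality~\eqref{semi_def} already fails for $\Omega_{\theta}$.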
\begin{rem} \label{kim} A particularly important subclass of WR lattices are perfect lattices, which figure prominently as potential candidates for extremum points of the sphere packing density function on the space of lattices, as well as in other related optimization problems. Y. Kim recently showed~\cite{ykim} that, while all perfect lattices in dimensions $\leq 7$ are semi-stable, there exists one 8-dimensional perfect lattice which is not semi-stable. \end{rem}
In~\cite{andre}, the author remarks that, while semi-stable lattices have been investigated in several arithmetic and geometric contexts, they have not yet been seriously studied in the scope of classical lattice theory. A goal of this note is to partially remedy this situation. One important construction widely used in lattice theory is that of ideal lattices coming from number fields. Ideal lattices have been extensively studied in a series of papers by Eva Bayer-Fluckiger and her co-authors in the 1990's and 2000's (see, for instance, \cite{bayer1}, \cite{bayer2}, \cite{bayer_nebe}). Here we consider a restricted notion of ideal lattices coming from quadratic number fields, called ideal lattices of trace type. Let $K$ be a quadratic number field, and let us write ${\mathcal O}_K$ for its ring of integers. Then $K={\mathbb Q}(\sqrt{D})$ (real quadratic) or $K={\mathbb Q}(\sqrt{-D})$ (imaginary quadratic), where $D$ is a positive squarefree integer. The embeddings $\sigma_1, \sigma_2 : K \to {\mathbb C}$ can be used to define the standard Minkowski embedding $\sigma_K$ of $K$ into ${\mathbb R}^2$: if $K={\mathbb Q}(\sqrt{D})$, then $\sigma_K : K \to {\mathbb R}^2$ is given by $\sigma_K = (\sigma_1,\sigma_2)$; if $K={\mathbb Q}(\sqrt{-D})$, then $\sigma_2=\overline{\sigma_1}$, and $\sigma_K=(\Re(\sigma_1), \Im(\sigma_1))$, where $\Re$ and $\Im$ stand for real and imaginary parts, respectively. Each nonzero ideal $I \subseteq {\mathcal O}_K$ becomes a lattice of full rank in ${\mathbb R}^2$ under this embedding, which we will denote by $\Lambda_K(I) := \sigma_K(I)$. These are the ideal lattices we consider.
WR ideal lattices were studied in~\cite{lf:petersen} and~\cite{wr_ideal-2}, where in particular it was shown that a positive proportion of quadratic number fields contain ideals giving rise to WR lattices. In view of Lemma~\ref{wr_stable}, it is interesting to understand which ideal lattices coming from quadratic number fields are semi-stable. An inequality connecting successive minima of an ideal lattice and the norm of its corresponding ideal~$I$ in the ring of integers of a fixed number field~$K$ follows from Lemma~3.2 of~\cite{lf:petersen}: \begin{equation} \label{s_min-to-norm} \lambda_1(\Lambda_K(I))^2 \geq (r_1+r_2) {\mathbb N}(I)^{\frac{1}{r_1+r_2}}. \end{equation} Here $r_1$ is the number of real embeddings and~$r_2$ is the number of pairs of complex conjugate embeddings of~$K$; ${\mathbb N}(I)$ stands for the norm of the ideal~$I$ in~${\mathcal O}_K$. A direct adaptation of Lemma~2 on p.115 of~\cite{lang} implies that \begin{equation} \label{det_norm}
\operatorname{det}(\Lambda_K(I)) = 2^{-r_2} |\Delta_K|^{\frac{1}{2}} {\mathbb N}(I), \end{equation} where $\Delta_K$ is the discriminant of $K$.
In this note, we discuss the case of real quadratic fields. When $K$ is a real quadratic number field, $r_1=2$ and $r_2=0$, and so combining~\eqref{s_min-to-norm} with~\eqref{det_norm}, we only obtain
$$\lambda_1(\Lambda_K(I)) \geq \frac{\sqrt{2}}{|\Delta_K|^{1/8}}\ \operatorname{det}(\Lambda_K(I))^{1/4}.$$ Hence the situation is more complicated and requires more detailed analysis and additional notation. Let~$D > 1$ be a squarefree integer and let $K={\mathbb Q}(\sqrt{D})$. We have ${\mathcal O}_K={\mathbb Z}[\delta]$, where \begin{equation} \label{delta} \delta = \left\{ \begin{array}{ll} - \sqrt{D} & \mbox{if $K={\mathbb Q}(\sqrt{D})$, $D \not\equiv 1 (\operatorname{mod} 4)$} \\ \frac{1-\sqrt{D}}{2} & \mbox{if $K={\mathbb Q}(\sqrt{D})$, $D \equiv 1 (\operatorname{mod} 4)$.} \end{array} \right. \end{equation} Now $I \subseteq {\mathcal O}_K$ is an ideal if and only if \begin{equation} \label{I_abg} I = I(a,b,g) := \{ ax + (b+g\delta)y : x,y \in {\mathbb Z} \}, \end{equation} for some $a,b,g \in {\mathbb Z}_{\geq 0}$ such that \begin{equation} \label{abg} b < a,\ g \mid a,b,\text{ and } ag \mid {\mathbb N}(b+g\delta). \end{equation} Such integral basis $a,b+g\delta$ is unique for each ideal $I$ and is called the canonical basis for~$I$ (see Section~6.3 of~\cite{buell} for details). In Section~\ref{real} we prove the following result.
\begin{thm} \label{real_quad} Let $K = {\mathbb Q}(\sqrt{D})$ be a real quadratic number field. Then there exist infinitely many ideals $I \subseteq {\mathcal O}_K$ for which the corresponding ideal lattice~$\Lambda_K(I)$ is semi-stable, as well as infinitely many such ideals with the corresponding lattice unstable. Specifically, let $\gamma \in {\mathbb R}_{>0}$ and define the functions $$u_{\gamma}(b) = \left\{ \begin{array}{ll} \frac{\gamma(2b+1)}{2} & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$} \\ \gamma b & \mbox{if $D \not\equiv 1 (\operatorname{mod} 4)$,} \end{array} \right.$$ $$v(b) = \left\{ \begin{array}{ll} \frac{(2b+1)^2+D}{2\sqrt{D}} & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$} \\ \frac{b^2+D}{\sqrt{D}} & \mbox{if $D \not\equiv 1 (\operatorname{mod} 4)$,} \end{array} \right.$$ $$h(b) = \left\{ \begin{array}{ll} \frac{(2b+1)^2-D}{2} & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$} \\ b^2-D & \mbox{if $D \not\equiv 1 (\operatorname{mod} 4)$.} \end{array} \right.$$ Then there exists an absolute constant $\gamma > 1$ such that if \begin{equation} \label{stable-1} u_{\gamma}(b) \leq a \leq v(b), \end{equation} then the lattice $\Lambda_K(I(a,b,g))$ is semi-stable for every triple $a,b,g$ satisfying~\eqref{abg}. On the other hand, if \begin{equation} \label{nonstable-1} v(b) < a \leq h(b), \end{equation} then the lattice $\Lambda_K(I(a,b,g))$ is unstable for every triple $a,b,g$ satisfying~\eqref{abg}. \end{thm}
\noindent In fact, Remark~\ref{proportion} below shows that the probability of an arbitrary ideal lattice~$\Lambda_{{\mathbb Q}(\sqrt{D})}(I(a,b,g))$ being semi-stable is positive (specifically, the probability is at least $1/\gamma$ as $b \to \infty$).
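As a concrete illustration of the unstable branch, let $D = 2$, so that $\delta = -\sqrt{2}$, and take $(a,b,g) = (14,4,1)$: conditions~\eqref{abg} hold since $ag = 14$ divides ${\mathbb N}(4 - \sqrt{2}) = 14$, and $v(4) = 18/\sqrt{2} \approx 12.7 < 14 = h(4)$, so by Theorem~\ref{real_quad} the lattice $\Lambda_{{\mathbb Q}(\sqrt{2})}(I(14,4,1))$ is unstable. Indeed, the vector $\sigma_{{\mathbb Q}(\sqrt{2})}(4-\sqrt{2})$ has squared length $(4-\sqrt{2})^2 + (4+\sqrt{2})^2 = 36$, which is smaller than the determinant $28\sqrt{2} \approx 39.6$ of this lattice given by~\eqref{det_norm}.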
In Section~\ref{auxil} we prove a technical lemma on distribution of divisors of integers of the form $x^2-D$, which is useful to us later in our main argument. In Section~\ref{real_lemmas} we establish Proposition~\ref{ab_stable}, which is the core of our argument. Finally, we use this proposition in Section~\ref{real} to prove Theorem~\ref{real_quad}. We are now ready to proceed.
\section{A divisor lemma} \label{auxil}
In this section we make an observation on the finiteness of the set of integers of the form $x^2 \pm D$ which have divisors in small intervals around their square root. This result is later used in the proof of Theorem~\ref{real_quad}. The proof of this lemma was suggested to me by Florian Luca.
\begin{lem} \label{fin_div} Let $|D| > 1$ be a squarefree integer and $0 < {\varepsilon} < 1/2$ a real number. Then the set $$\left\{ x \in {\mathbb Z}_{>0} : \exists\ b \mid x^2-D \text{ such that } x < b \leq x+x^{1/2-{\varepsilon}} \right\}$$ is finite. \end{lem}
\proof Since there are only finitely many positive integers less than any fixed constant, we can assume without loss of generality that
$$x > \max \left\{ |D|, 2^{1/{\varepsilon}} \right\}.$$ Let us write $x^2-D = bd$, where $b \in (x, x+x^{1/2-{\varepsilon}}]$, then $d \in [x-x^{1/2-{\varepsilon}},x)$. Notice that $b=d+a$, where $a \in [0,2x^{1/2-{\varepsilon}}]$. Therefore $$x^2-D = d(d+a) = d^2 + 2d \frac{a}{2} + \left( \frac{a}{2} \right)^2 - \left( \frac{a}{2} \right)^2 = \left( d+\frac{a}{2} \right)^2 - \left( \frac{a}{2} \right)^2,$$ and therefore $$(2x)^2 = (2d+a)^2 + (4D- a^2),$$ meaning that $$(2x-(2d+a))(2x+(2d+a)) = 4D - a^2.$$
Taking absolute values, we see that the left hand side cannot be equal to zero; since $|D| > 1$, the assumption that $4D - a^2 = 0$ would imply that $D = (a/2)^2 > 1$, which would contradict $D$ being squarefree. Since $2x-(2d+a)$ is an integer, $|2x-(2d+a)| \geq 1$, which means that
$$|(2x-(2d+a))(2x+(2d+a))| \geq 2x+(2d+a) > 2x.$$ On the other hand,
$$|4D - a^2| \leq 4|D|+a^2 < 4|D| + 4x^{1-2{\varepsilon}},$$ and so we have
$$2x < 4x^{1-2{\varepsilon}} + 4|D|.$$ Therefore, since $x > 2^{1/{\varepsilon}}$,
$$x < 2x(1 - 2x^{-2{\varepsilon}}) < 4|D|,$$
meaning that there are at most $4|D|$ such integers $x$. \endproof
\section{Lemmas on stability of some planar lattices} \label{real_lemmas}
Our goal here is to develop a collection of lemmas that will allow us to treat ideal lattices coming from any real quadratic number field simultaneously. Throughout this section, let~$D > 1$ be fixed a squarefree integer. For each pair of integers $(a,b)$ such that \begin{equation} \label{ab1} 0 < b < a,\ a \mid b^2-D, \end{equation} define the lattice \begin{equation} \label{Lab} \Lambda(a,b) = \begin{pmatrix} a & b-\sqrt{D} \\ a & b+\sqrt{D} \end{pmatrix} {\mathbb Z}^2. \end{equation} We want to understand for which pairs $(a,b)$ satisfying~\eqref{ab1} the corresponding lattice $\Lambda(a,b)$ is semi-stable. Let $$S(D) = \left\{ (a,b) \in {\mathbb Z}^2 : (a,b) \text{ satisfies \eqref{ab1}} \right\}.$$ We prove the following result.
\begin{prop} \label{ab_stable} For infinitely many pairs $(a,b) \in S(D)$, the corresponding lattice $\Lambda(a,b)$ is semi-stable, and for infinitely many pairs it is unstable. Specifically, there exists an absolute constant $\gamma > 1$ such that if \begin{equation} \label{fm7} \gamma b \leq a \leq \frac{b^2+D}{\sqrt{D}}, \end{equation} then the lattice $\Lambda(a,b)$ is semi-stable. On the other hand, if \begin{equation} \label{fm8} \frac{b^2+D}{\sqrt{D}} < a \leq b^2-D, \end{equation} then the lattice $\Lambda(a,b)$ is unstable. \end{prop}
To establish Proposition~\ref{ab_stable}, notice that for each $(a,b) \in S(D)$, $\operatorname{det}(\Lambda(a,b)) = 2a\sqrt{D}$, and so $\Lambda(a,b)$ is semi-stable if and only if $$\lambda_1(\Lambda(a,b))^2 \geq 2a \sqrt{D}.$$ The norm form of $\Lambda(a,b)$ corresponding to the choice of basis as in~\eqref{Lab} is $$Q(x,y) = Q_{(a,b)}(x,y) := 2(x a + y b)^2 + 2y^2 D,$$ then $$\lambda_1^2 = \min \left\{ Q(x,y) : (x,y) \in {\mathbb Z}^2 \setminus \{ (0,0) \} \right\}.$$ Let $(\alpha,\beta) \in {\mathbb Z}^2$ be a point at which this minimum is achieved, i.e., $$Q(\alpha,\beta) = 2 \min \left\{ (x a + y b)^2 + y^2 D : (x,y) \in {\mathbb Z}^2 \setminus \{ (0,0) \} \right\},$$ then $\gcd(\alpha,\beta)=1$, and semi-stability is equivalent to the inequality \begin{equation} \label{form_min} Q(\alpha,\beta) \geq 2 a \sqrt{D}. \end{equation}
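As a quick illustration, and to see that the constant $\gamma$ in Proposition~\ref{ab_stable} must indeed be taken larger than $1$, let $D = 2$ and $(a,b) = (7,4) \in S(2)$: then $Q(1,-2) = 2(7 - 8)^2 + 2 \cdot (-2)^2 \cdot 2 = 18 < 14\sqrt{2} = 2a\sqrt{D}$, so $\Lambda(7,4)$ is unstable even though $a \leq (b^2+D)/\sqrt{D}$; in particular, the constant $\gamma$ must satisfy $\gamma > 7/4$.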
\begin{lem} \label{123} The point $(\alpha,\beta)$ at which the minimum of $Q(x,y)$ is attained falls into one of the following three categories: \begin{enumerate} \item[(I)] $(\alpha,\beta) = (1,0)$, \item[(II)] $(\alpha,\beta) = (0,1)$,
\item[(III)] $0 < \alpha \leq b,\ \alpha \leq |\beta| \leq a,\ \beta < 0$. \end{enumerate} \end{lem}
\proof Assume (I) and (II) do not hold, which means that $\alpha \beta \neq 0$. Then $\alpha \beta < 0$, since otherwise $$Q(\alpha,\beta) \geq 2 (a+b)^2 + 2D > 2(b^2+D) = Q(0,1).$$
Hence we can assume without loss of generality that $\beta < 0$, since $Q(\alpha,\beta) = Q(-\alpha,-\beta)$. If $|\beta| > a$, then $$Q(\alpha,\beta) > 2a^2D > 2a^2 = Q(1,0).$$ Now consider
$$f(\alpha) = Q(\alpha,\beta) = 2(\alpha a - |\beta| b)^2 + 2\beta^2 D$$
as a function of $\alpha$. Notice that it is increasing when $\alpha > |\beta| b/a$. Since $|\beta| \leq a$, $\alpha > |\beta| b/a$ when $\alpha > b$, meaning that $Q(\alpha,\beta)$ cannot achieve its minimum for such values of~$\alpha$. Finally, assume that $\alpha > |\beta|$ and recall that $a > b$. Then
$$Q(\alpha,\beta) = 2(\alpha a - |\beta| b)^2 + 2\beta^2 D \geq 2(2a-b)^2 + 2D = 8(a^2-ab) + 2(b^2+D) > Q(0,1).$$ Hence we established that the inequalities (III) hold, which proves the lemma. \endproof
Let us define three sets of pairs $(a,b) \in S(D)$, corresponding to each of the three cases above: $$S_1 = S_1(D) := \left\{ (a,b) \in S(D) : (\alpha,\beta) \text{ is as in (I)} \right\},$$ $$S_2 = S_2(D) := \left\{ (a,b) \in S(D) : (\alpha,\beta) \text{ is as in (II)} \right\},$$ $$S_3 = S_3(D) := \left\{ (a,b) \in S(D) : (\alpha,\beta) \text{ is as in (III)} \right\}.$$
\noindent We can write $a = C(b^2-D)$ for some $C \in {\mathbb R}_{>0}$, $b/(b^2-D) < C \leq 1$. Then $\Lambda(a,b)$ is semi-stable if and only if $Q(\alpha,\beta) \geq 2 C(b^2-D) \sqrt{D}$, which is equivalent to \begin{equation} \label{fm1} C \left( \alpha^2 C(b^2-D) + 2\alpha \beta b - \sqrt{D} \right) \geq - \frac{\beta^2(b^2+D)}{b^2-D}. \end{equation} The right hand side of~\eqref{fm1} is always non-positive and $C > 0$.
\begin{lem} \label{lem-S1} The set $S_1$ is finite, and the lattice $\Lambda(a,b)$ is semi-stable for every pair $(a,b) \in S_1$ with $b > \sqrt{D}$. \end{lem}
\proof Let $(a,b) \in S_1$ with $b > \sqrt{D}$, then $\beta = 0$, $\alpha = 1$ and~\eqref{fm1} holds for all values of~$C$. Hence the lattice $\Lambda(a,b)$ is semi-stable.
Now we show that $S_1$ is finite. Notice that for each $(a,b) \in S_1$, $$\frac{1}{2} Q(1,0) = C^2 (b^2-D)^2 \leq \frac{1}{2} Q(0,1) = b^2+D,$$ and so $C \leq \frac{\sqrt{b^2+D}}{b^2-D}$, which means that $$b < a \leq \sqrt{b^2+D} < b + \sqrt{D}.$$ Therefore $b$ is an integer such that $b^2-D$ has a divisor $a \in (b, b + \sqrt{D})$, and clearly $\sqrt{D} < b^{1/2-{\varepsilon}}$ for any ${\varepsilon} > 0$ for all but finitely many $b$. There are only finitely many such integers $b$ by Lemma~\ref{fin_div}, and so the set of such pairs $(a,b)$ is finite, since $a$ is bounded by~$\sqrt{b^2+D}$. \endproof
\begin{lem} \label{lem-S2} Let $(a,b) \in S_2$ and $a = C(b^2-D)$ as above. Then $\Lambda(a,b)$ is semi-stable if and only if $C \leq \frac{b^2+D}{(b^2-D)\sqrt{D}}$. \end{lem}
\proof Suppose $\alpha = 0$; then $\beta = 1$, and~\eqref{fm1} holds if and only if \begin{equation} \label{fm2} C \leq \frac{b^2+D}{(b^2-D)\sqrt{D}}. \end{equation} \endproof
\begin{lem} \label{lem-S3} Let $(a,b) \in S_3$ and $a = C(b^2-D)$ as above. There exists an absolute real constant $\gamma > 1$ such that if $C \geq \frac{\gamma b}{b^2-D}$, then $\Lambda(a,b)$ is semi-stable. \end{lem}
\proof If the set $S_3$ is finite, there is nothing to prove, so assume it is infinite. Let $$S_3' = \left\{ b \in {\mathbb Z}_{>0} : \exists\ a \in {\mathbb Z}_{>0} \text{ such that } (a,b) \in S_3 \right\}.$$ In the asymptotic argument below, when we consider $b$ getting large or tending to infinity, we always mean that $b$ stays in $S_3'$ and $a = C(b^2-D)$ is such that $(a,b) \in S_3$.
For each $(a,b) \in S_3$, the corresponding $\alpha,\beta \neq 0$ are such that $\beta < 0 < \alpha \leq |\beta|$. The inequality~\eqref{fm1} certainly holds when \begin{equation} \label{fm3} \alpha^2 C(b^2-D) + 2\alpha \beta b - \sqrt{D} \geq 0, \end{equation} which is true whenever \begin{equation} \label{fm4}
C \geq \frac{2 \alpha |\beta| b + \sqrt{D}}{\alpha^2 (b^2-D)} = \left( \frac{|\beta|}{\alpha} \right) \left( \frac{2 b}{b^2-D} \right) + \frac{\sqrt{D}}{\alpha^2 (b^2-D)}. \end{equation}
\begin{claim} \label{clm1} There exists an absolute constant $\rho$ so that $1 \leq \frac{|\beta|}{\alpha} \leq \rho$ for all~$(a,b) \in S_3$. \end{claim}
\proof Suppose not, then there exists some monotone increasing unbounded real-valued function $f(b)$ such that \begin{equation} \label{rho}
\liminf_{b \to \infty} \frac{|\beta|}{\alpha f(b)} = 1. \end{equation} Hence we can assume that there exists an infinite subsequence of positive integers $b$ for which $\beta \sim - \alpha f(b)$ as $b \to \infty$. Then for all sufficiently large $b$, \begin{eqnarray} \label{fm5} & & \frac{1}{2} \min \left\{ Q(x,y) : (x,y) \in {\mathbb Z}^2 \setminus {(0,0)} \right\} \nonumber \\ & = & \frac{1}{2} Q(\alpha,\beta) \sim \left( \alpha C (b^2-D) - \alpha b f(b) \right)^2 + \alpha^2 f(b)^2 D \nonumber \\ & = & \alpha^2 f(b)^2 \left(b^2 \left( \frac{C(b^2-D)}{b f(b)} - 1 \right)^2 + D \right) > b^2+D = \frac{1}{2} Q(0,1), \end{eqnarray} unless $\frac{C(b^2-D)}{b f(b)} \to 1$ as $b \to \infty$. Suppose this is the case, then
$$\frac{C(b^2-D)}{b f(b)} = \left( \frac{|\beta|}{\alpha f(b)} \right) \frac{\alpha C(b^2-D)}{ |\beta| b} \to 1,$$
and since $|\beta|/\alpha f(b) \to 1$ and $a = C(b^2-D)$, we have $\frac{a}{b} \times \frac{\alpha}{|\beta|} \to 1$ as $b \to \infty$. Since $a,b,\alpha,\beta$ are integers, we must have \begin{equation} \label{aabb}
\frac{a}{b} = \frac{|\beta|}{\alpha} \end{equation} for all sufficiently large $b$. Since $\alpha$ and $\beta$ are relatively prime, we must have $\alpha=b/d$, $\beta=-a/d$, where $d=\gcd(a,b) \mid D$ by~\eqref{ab1}. Then $$\frac{1}{2} Q(\alpha,\beta) = \frac{a^2 D}{d^2} \leq b^2+D = \frac{1}{2} Q(0,1),$$ and so \begin{equation} \label{aabb-1} \frac{a}{b} \leq d \sqrt{ \frac{1}{D} + \frac{1}{b^2} } < d \sqrt{2} \leq D\sqrt{2}. \end{equation}
Now \eqref{aabb-1} combined with \eqref{aabb} implies that $|\beta|/\alpha \leq D\sqrt{2}$. This completes the proof. \endproof
Thus we conclude that $|\beta|/\alpha \leq \rho$ for all~$b \in S_3'$. Then~\eqref{fm4} implies that for all ~$b \in S_3'$, if \begin{equation} \label{fm6} C \geq \frac{2 \rho b}{b^2-D} + \frac{\sqrt{D}}{\alpha^2 (b^2-D)}, \end{equation} then the lattice $\Lambda(a,b)$ is semi-stable. In other words, there exists some real constant $\gamma > \rho \geq 1$ such that whenever $a=C(b^2-D)$ for $C \in [\gamma b /(b^2-D), 1]$ so that $(a,b) \in S_3$, the lattice $\Lambda(a,b)$ is semi-stable. \endproof
\proof[Proof of Proposition~\ref{ab_stable}] Let $\gamma$ be the constant as in the statement of Lemma~\ref{lem-S3}. First, let $(a,b) \in S(D)$ as above with $b > \sqrt{D}$, and assume that~\eqref{fm7} is satisfied. Notice that $(a,b)$ is either in $S_1$, $S_2$, or $S_3$. Then the result follows by combining Lemmas~\ref{lem-S1}, \ref{lem-S2}, and~\ref{lem-S3}.
Next, assume that~\eqref{fm8} holds, then $$\operatorname{det}(\Lambda(a,b)) = 2a \sqrt{D} > 2(b^2+D) = Q(0,1) \geq \lambda_1(\Lambda(a,b))^2,$$ and so $\Lambda(a,b)$ is unstable.
To construct an infinite family of pairs $(a,b) \in S(D)$ giving rise to unstable lattices, simply take $a=b^2-D$ for each integer $b > \sqrt{\frac{D+D\sqrt{D}}{\sqrt{D}-1}}$; the resulting lattice is unstable since~\eqref{fm8} is satisfied.
On the other hand, for each $m \in {\mathbb Z}_{>0}$ let $b=mD$ and take $a = \frac{b^2-D}{D} = m^2D-1$. Let $\gamma$ be the constant as in the statement of Lemma~\ref{lem-S3}. For each $m \geq \frac{\gamma D + \sqrt{\gamma^2 D^2 + 4D}}{2D}$, the inequality~\eqref{fm7} is satisfied, and hence the resulting lattice is semi-stable by the argument above. \endproof
\begin{rem} \label{small_a} In the argument above, we constructed a family of unstable lattices $\Lambda(a,b)$ with $a$ large comparing to $b$. On the other hand, there also exist unstable lattices $\Lambda(a,b)$ with $a$ close to $b$. For instance, let $D=13$ and consider the pair $(a,b)=(276,259) \in S(D)$. Then $$\lambda_1^2 \leq Q_{(a,b)}(1,-1) = 604 < 2a\sqrt{D} = 552\sqrt{13} = \operatorname{det}(\Lambda(a,b)),$$ and so the lattice $\Lambda(276,259)$ is unstable. \end{rem}
\section{The case of real quadratic number fields} \label{real}
In this section we prove Theorem~\ref{real_quad}. Let $D>1$ be a squarefree integer, $K={\mathbb Q}(\sqrt{D})$, integers $a,b,g \geq 0$ satisfying~\eqref{abg}, and the ideal $I=I(a,b,g) \subseteq {\mathcal O}_K$ as in~\eqref{I_abg}. Then \begin{equation} \label{B1} \Lambda_K(I) = \begin{pmatrix} a & b-g\sqrt{D} \\ a & b+g\sqrt{D} \end{pmatrix} {\mathbb Z}^2, \end{equation} if $D \not\equiv 1 (\operatorname{mod} 4)$, and \begin{equation} \label{B2} \Lambda_K(I) = \begin{pmatrix} a & \frac{2b+g}{2} - \frac{g\sqrt{D}}{2} \\ a &\frac{2b+g}{2} + \frac{g\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2, \end{equation} if $D \equiv 1 (\operatorname{mod} 4)$. Notice that $I=gI'$, where $I'$ has canonical basis $\frac{a}{g}, \frac{b}{g}+\delta$ and $\Lambda_K(I) \sim \Lambda_K(I')$. Hence we can assume without loss of generality that $g=1$.
First assume that $D \not\equiv 1 (\operatorname{mod} 4)$, then $$I = \{ ax + (b-\sqrt{D})y : x,y \in {\mathbb Z} \} \subseteq {\mathcal O}_K.$$ Here the pair $(a,b)$ satisfies the conditions of~\eqref{ab1} and $\Lambda_K(I) = \Lambda(a,b)$. The statement of Theorem~\ref{real_quad} in this case readily follows from Proposition~\ref{ab_stable}.
Now assume that $D \equiv 1 (\operatorname{mod} 4)$, then $$I = \left\{ ax + \left( \frac{2b + 1-\sqrt{D}}{2} \right) y : x,y \in {\mathbb Z} \right\} \subseteq {\mathcal O}_K,$$ where \begin{equation} \label{abg-2} b < a,\ a \mid \frac{1}{4} \left( (2b+1)^2 - D \right), \end{equation} and \begin{equation} \label{basis-2} \Lambda_K(I) = \begin{pmatrix} a & \frac{2b+1}{2} - \frac{\sqrt{D}}{2} \\ a &\frac{2b+1}{2} + \frac{\sqrt{D}}{2} \end{pmatrix} {\mathbb Z}^2. \end{equation} Let $a_1=2a$, $b_1=2b+1$, and notice that the pair $(a_1,b_1)$ satisfies the conditions of~\eqref{ab1} and $\Lambda_K(I) = \frac{1}{2} \Lambda(a_1,b_1)$. Observe that $\Lambda_K(I)$ is semi-stable if and only if $\Lambda(a_1,b_1)$ is semi-stable, and hence the statement of Theorem~\ref{real_quad} in this case again follows from Proposition~\ref{ab_stable}.
\begin{rem} \label{proportion} In fact, Theorem~\ref{real_quad} implies that an arbitrary ideal lattice~$\Lambda_{{\mathbb Q}(\sqrt{D})}(I)$ is semi-stable with positive probability.
Indeed, for $x > 0$ let \begin{equation} M(D,x) = \left\{ q \in {\mathbb Z}_{>0} : q < x, D \text{ is a quadratic residue} \operatorname{mod} q \right\}, \end{equation} then $q < x$ is in $M(D,x)$ if and only if $D$ is a quadratic residue modulo every prime dividing~$q$. Professor Gang Yu pointed out to me that an argument essentially identical to the proof of the main result of~\cite{james} shows that there exists a positive real constant $C(D)$ such that \begin{equation} \label{MDx}
|M(D,x)| \sim C(D) \left( \frac{x}{\sqrt{\log x}} \right) \end{equation} as $x \to \infty$ (the set $M(D,x)$ can also be compared to the set $S(x)=M(-1,x)$ in the definition of Landau-Ramanujan constant~\cite{landau}, where the same classical asymptotic emerges). Then~\eqref{MDx} implies that for $0 < k_1 < k_2 < 1$,
$$\left| M(D,x) \cap [k_1x,k_2x] \right| \sim C(D) (k_2-k_1) \left( \frac{x}{\sqrt{\log x}} \right),$$ and so
$$\lim_{x \to \infty} \frac{\left| M(D,x) \cap [k_1x,k_2x] \right|}{\left| M(D,x) \right|} = k_2-k_1,$$ which means that elements of $M(D,x)$ are equidistributed in subintervals of $[1,x]$. In other words, as $x \to \infty$, every subinterval $[k_1x,k_2x]$ with $0 < k_1 < k_2 < 1$ will contain a $(k_2-k_1)$-proportion of integers $q$ such that $D$ is quadratic residue modulo~$q$. This implies that probability of such a modulus $q$ to be in the interval $[k_1x,k_2x]$ tends to $k_2-k_1$ as $x \to \infty$.
Now let $K = {\mathbb Q}(\sqrt{D})$, let $I = I(a,b,1) \subseteq {\mathcal O}_K$ be an ideal, and let $$a_1 = \left\{ \begin{array}{ll} a & \mbox{if $D \not\equiv 1 (\operatorname{mod} 4)$} \\ 2a & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$,} \end{array} \right.$$ and $$b_1 = \left\{ \begin{array}{ll} b & \mbox{if $D \not\equiv 1 (\operatorname{mod} 4)$} \\ 2b+1 & \mbox{if $D \equiv 1 (\operatorname{mod} 4)$.} \end{array} \right.$$ Then $a_1 \mid b_1^2 - D$ and $b_1 < a_1 \leq b_1^2 -D$. Let $d_1 = (b_1^2-D)/a_1$, so $d_1 \mid b_1^2 - D$ and $1 \leq d_1 < b_1$. Theorem~\ref{real_quad} implies that if \begin{equation} \label{d_bnd} \sqrt{D} \left( \frac{b_1^2-D}{b_1^2+D} \right) \leq d_1 \leq \frac{1}{\gamma} b_1 - \frac{D}{\gamma b_1}, \end{equation} then the lattice $\Lambda_K(I)$ is semi-stable. In other words,~\eqref{d_bnd} implies that for each ${\varepsilon} > 0$ there exists $B \in {\mathbb R}_{>0}$ such that for all $b_1 > B$, if \begin{equation} \label{d_bnd-1} d_1 \in \left[ \frac{\sqrt{D}}{b_1} b_1, \frac{1}{\gamma} b_1 - {\varepsilon} \right], \end{equation} then $\Lambda_K(I)$ is semi-stable. Since $d_1 \in M(D,b_1)$, our argument above suggests that the probability of~\eqref{d_bnd-1} holding tends to $\frac{1}{\gamma}$ as $b_1 \to \infty$. \end{rem}
{\bf Acknowledgment.} I would like to thank Professor Florian Luca for suggesting the proof of Lemma~\ref{fin_div}, as indicated above. I also thank Professors Gang Yu and David Speyer, whose comments were instrumental to the formulation of Remark~\ref{proportion}. Finally, I thank the referee for many useful suggestions which improved the quality of the paper.
\end{document} | arXiv |
\begin{document}
\title{Almost complete and equable heteroclinic networks}
\author{Peter Ashwin\thanks{Centre for Systems, Dynamics and Control, Department of Mathematics, University of Exeter, Exeter EX4 4QF, UK}, Sofia B.S.D. Castro\thanks{Faculdade de Economia and Centro de Matem\'atica, Universidade do Porto, Rua Dr.\ Roberto Frias, 4200-464 Porto, Portugal} \& Alexander Lohse\thanks{Fachbereich Mathematik, Universit\"at Hamburg, Bundesstra{\ss}e 55, 20146 Hamburg, Germany}~\thanks{Corresponding author, mailto:[email protected]}}
\maketitle
\begin{abstract} Heteroclinic connections are trajectories that link invariant sets for an autonomous dynamical flow: these connections can robustly form networks between equilibria, for systems with flow-invariant spaces. In this paper we examine the relation between the heteroclinic network as a flow-invariant set and directed graphs of possible connections between nodes. We consider realizations of a large class of transitive digraphs as robust heteroclinic networks and show that although robust realizations are typically not complete (i.e.\ not all unstable manifolds of nodes are part of the network), they can be almost complete (i.e.\ complete up to a set of zero measure within the unstable manifold) and equable (i.e.\ all sets of connections from a node have the same dimension). We show there are almost complete and equable realizations that can be closed by adding a number of extra nodes and connections. We discuss some examples and describe a sense in which an equable almost complete network embedding is an optimal description of stochastically perturbed motion on the network. \end{abstract}
\noindent {\em Keywords:} heteroclinic cycle, heteroclinic network, directed graph
\noindent {\em AMS classification:} 34C37, 34D45, 37C29, 37C75
\section{Introduction}
Heteroclinic cycles and networks appear in a range of dynamical models posed as ordinary differential equations (ODEs) that try to capture ``intermittent'' behaviour, for example, in the onset of fluid turbulence, encoding of neural states or species competition in ecosystems: see for example Krupa \cite{Kru97}. They manifest as attracting dynamics where the state remains close to saddle equilibria for long periods of time, interspersed with rapid switches between equilibria. This behaviour can remain robust to perturbations that preserve some symmetries or other structures of the system: see for example Weinberger and Ashwin \cite{Weinberger2018} for a recent review.
In most cases, heteroclinic networks have been found and studied from analysis of a given system of ODEs. However, in an attempt to understand general properties of heteroclinic networks, Ashwin and Postlethwaite \cite{AshPos13} suggest that the converse problem of {\em designing a system of ODEs that realizes (i.e.\ embeds) a given directed graph as a heteroclinic network} is of interest. It is also of potential interest in applications such as design of computational systems that permit only certain transitions. Several recent papers, Ashwin and Postlethwaite \cite{AshPos13,AshPos16a}, Bick \cite{Bic18} and Field \cite{Fie15, Fie17}, have considered a range of approaches to the design of systems that have specific heteroclinic networks. These approaches to the realization of a graph as a heteroclinic network typically result in networks that are not asymptotically stable, or that do not even contain the unstable manifolds of all saddles. This is discussed in \cite{Fie17} where a heteroclinic network is called {\em clean} if it is compact and equal to the union of the unstable manifolds of its equilibria. In the present article, we consider networks that are typically not compact -- we call them {\em complete} if they contain all unstable manifolds of their equilibria. Thus, a network is clean if and only if it is compact and complete. The notion of completeness is related to whether the network can be visible as an attractor: indeed it is necessary for a network to be clean/complete for it to be asymptotically stable \cite[Remark 1.4]{Fie17}.
This paper introduces some concepts, results and examples that aim to clarify the structure and dynamics of heteroclinic networks by showing that although we typically cannot realize arbitrary directed graphs (from a large class) as clean heteroclinic networks, we can achieve {\em almost completeness} (the network contains almost all of the unstable manifolds) and in addition ensure {\em equability} (a property of a node meaning that all outgoing connections from that node have the same dimension) of all nodes in the network.
To introduce this more precisely, we consider a system of ordinary differential equations \begin{equation} \dot{x}=f(x) \label{eq:ode} \end{equation} in $\mathbb{R}^n$ ($n<\infty$) with smooth $f$ and a bounded globally attracting open set: we will write $\varphi(t,x_0)$ to denote the flow generated by solutions $x(t)$ of (\ref{eq:ode}) starting at $x_0$. Clearly a variety of invariant sets may appear and be of importance for the asymptotic behavior of typical initial conditions. These not only organize the autonomous dynamics but also allow one to understand how the dynamics behave under small perturbations of various types.
If (\ref{eq:ode}) is equivariant under the action of a compact Lie group $\Gamma$ acting orthogonally on $\mathbb{R}^n$, then there is an extensive literature considering many heteroclinic networks with the remarkable property that they are persistent or robust under perturbations of $f$ that respect the symmetries $\Gamma$: see for example the work of Krupa and Melbourne \cite{Kru97,KruMel95a,KruMel04}.
There are several possible ways to understand a heteroclinic network in a graph-theoretic manner. Note that in many cases the directed graph (digraph) is referred to simply as a graph. Our approach gives the minimal possible graph one could naturally associate with a heteroclinic network: we say there is an edge between vertices if there is at least one connection between the corresponding nodes. Another approach is to have an edge for every connection between nodes: this is appropriate for many cases investigated in the literature (e.g.\ \cite{KruMel95a}), but typically results in infinite graphs for the networks we consider here. Yet another choice could be to have an edge for each connected component of the set of connections between nodes. In some instances this may also result in a more complicated graph.
Many papers in the literature consider heteroclinic networks as unions of heteroclinic cycles (e.g.\ Hoyle \cite{Hoy06}) and in cases where there are one-dimensional unstable manifolds this is highly appropriate. In this paper we take a different view however -- we consider the heteroclinic network as the fundamental definition and show in Lemma~\ref{lem:cycles} that a network is a union of cycles; equivalently, cycles are cyclic subsets of a network.
This paper is structured as follows: in Section \ref{sec:as-stab} we discuss the relation between directed graphs and heteroclinic cycles and networks, introducing the properties of complete, almost complete, equable and exclusive nodes and networks and give examples of these. We also recall the definition of a clean network. Section \ref{sec:completion} shows in Theorem~\ref{thm:completion} that the simplex method of \cite{AshPos13} allows one to construct realizations of a large class of directed graphs as an almost complete and equable heteroclinic network that is part of a larger closed network. We conjecture, in Section~\ref{sec:discussion}, that this result can be strengthened by (a) widening the class of directed graphs and (b) providing a stronger result -- that the embedding network is not just closed but clean.
In Section~\ref{sec:example} we discuss a number of examples that clarify and illustrate these results and concepts. Section~\ref{sec:application} presents a simple stochastic model of randomly perturbed dynamics on a heteroclinic network. For this model, typical trajectories will only explore an almost complete and equable subnetwork. In this sense, the almost complete and equable subnetwork can be seen as an optimal realization of the network with added noise.
Section~\ref{sec:discussion} concludes with a discussion.
\section{Heteroclinic networks and directed graphs}\label{sec:as-stab}
Given the close relation between heteroclinic networks and directed graphs (see for instance \cite{AshPos13} or \cite{Fie17}), we begin with a section that establishes terminology and notation that allows for an easy transition between the two. A substantial part of this section is not original. We include it because we believe that it is useful for most readers to have the relevant concepts framed in a convenient way.
We denote by $\alpha(x)$ (resp. $\omega(x)$) the usual limit set of the trajectory through $x$ as $t\rightarrow -\infty$ (resp. $\infty$). For a heteroclinic network, there is a natural graph structure between nodes representing the equilibria, such that edges in the graph correspond to connections between equilibria in the network. However, the correspondence is more subtle than one might suppose as the set of connections may consist of many, or even a continuum of trajectories \cite{AshCho98,AshFie99}.
We define the unstable and stable sets of an equilibrium $\xi$ as usual $$ W^u(\xi)=\{x\in\mathbb{R}^n~:~\alpha(x)=\xi\}, ~~~W^s(\xi)=\{x\in\mathbb{R}^n~:~\omega(x)=\xi\} $$ and note that for hyperbolic equilibria $\xi$ these are flow-invariant manifolds with dimension
corresponding to the dimensions of unstable and stable eigen\-spaces of $\xi$. Suppose we have a finite collection of hyperbolic equilibria $$ N=\{\xi_1, \hdots, \xi_m\} $$ for (\ref{eq:ode}). We define the {\em full set of connections from $\xi_i$ to $\xi_j$} ($\xi_i, \xi_j \in N$) as $$ C_{ij}=W^u(\xi_i)\cap W^s(\xi_j). $$ This is a flow-invariant (possibly empty) set: if $i\neq j$ we refer to each trajectory in $C_{ij}$ as a \emph{connection from $\xi_i$ to $\xi_j$}. We include cases where $C_{ij}$ is a continuum of connections \cite{AshCho98}. In the case $i=j$ we call a connection {\em homoclinic}, otherwise we say it is {\em heteroclinic}.\footnote{In equivariant systems, if $\xi_i\neq\xi_j$ but they are in the same group orbit, then some authors consider the connection homoclinic. We do not make this distinction until Section \ref{sec:example}.}
The {\em full set of connections between equilibria in $N$} is defined $$ C(N)=\bigcup_{i\neq j} C_{ij}. $$ In what follows we use the notation $C(.)$ to describe the connections associated with the object in brackets.
{\bf We make a standing assumption} that there are no homoclinic connections, i.e.\ we assume that $C_{ii}=\{\xi_i\}$ for all $i$.
Many references in the literature use the following definitions: A {\em heteroclinic cycle} is a union of finitely many hyperbolic equilibria connected by trajectories in a cyclic way. A {\em heteroclinic network} is a connected union of finitely many heteroclinic cycles.
When studying heteroclinic networks from directed graphs, another definition may be convenient. The relation between the two is clarified in Lemma~\ref{lem:cycles}. Recall that an invariant set $\Sigma$ is {\em indecomposable} (cf \cite{AshFie99}) for the dynamics of (\ref{eq:ode}) if for every $\epsilon>0$ and pair of points $a,b\in \Sigma$ there is a directed $\epsilon$-chain from $a$ to $b$ within $\Sigma$, where an $\epsilon$-chain is a sequence of points $\{x_k\}_{k=1}^n$ in $\Sigma$ and times $\{t_k>1\}_{k=1}^{n-1}$ such that $x_1=a$, $x_n=b$ and $|\varphi(t_k,x_k)-x_{k+1}|<\epsilon$ for $k=1,\ldots,n-1$. By $N(\Sigma)$ we denote the set of equilibria in $\Sigma$ which we will assume is finite. The following definition is a special case of \cite[Definition 2.26]{AshFie99} where the nodal set is $N(\Sigma)$ and the ``depth'' \cite[Definition 2.22]{AshFie99} is one because all trajectories are either in the nodal set, or limit to the nodal set.\footnote{More general heteroclinic networks, in the sense of \cite{AshFie99}, can have higher depth connections in that they can contain trajectories that limit to connections.}
\begin{definition}\label{def:alternative} We say $\Sigma$ is a {\em heteroclinic network} between equilibria $N(\Sigma)$ if it is an indecomposable flow-invariant set such that $$ N(\Sigma)\subset \Sigma\subset N(\Sigma) \cup C(N(\Sigma)). $$ \end{definition}
We refer to the equilibria $N(\Sigma)=\{\xi_1,\ldots,\xi_k\}$ as the {\em nodes} of the network and define $$ C_{ij}(\Sigma)=C_{ij}\cap \Sigma $$ as the {\em connection from $\xi_i$ to $\xi_j$ within the network} $\Sigma$. Note that there may be many connecting trajectories between $\xi_i$ and $\xi_j$ in $\Sigma$ and also some that we do not include in a particular $\Sigma$. Note the decomposition $$ \Sigma=N(\Sigma)\cup C(\Sigma) $$ is a disjoint union, where $C(\Sigma):=C(N(\Sigma))\cap \Sigma$ denotes the {\em connections within the network}.
To structure our discussion of graphs associated with heteroclinic networks, we
use the following notation to go between these concepts: \begin{itemize} \item $G(\Sigma)$ to denote the graph related to a given heteroclinic network $\Sigma$; \item $\Sigma_G$ to denote a heteroclinic network associated to a given graph $G$ (this may not be unique). \end{itemize} We start with graphs.
Associated with any heteroclinic network $\Sigma$ there is a digraph $G(\Sigma)=(V,E)$ with vertices $V=\{v_1,\ldots,v_k\}$, where $v_j$ corresponds to node $\xi_j\in N(\Sigma)$, and the set $E$ of {\em directed edges}, where $[v_i\to v_j] \in E$ corresponds to $C_{ij}(\Sigma) \neq \emptyset$ with $i\neq j$. For a given $G(\Sigma)$ we write $N(v)$ to denote the equilibrium corresponding to vertex $v$. Note that $C_{ij}(\Sigma)$ is the full set of connections from $\xi_i$ to $\xi_j$ in $\Sigma$ corresponding to the edge $[v_i\to v_j]$.
As usual\footnote{There are several good references for graph theory, we refer the reader to for example \cite{Die05,Fou92}.}, we say that a {\em cycle} is a sequence of vertices and edges $\{v_1,[v_1\to v_2], v_2,[v_2\to v_3], v_3,\ldots,[v_{m-1}\to v_m],v_m\}$ such that $v_1=v_m$ and all other vertices are distinct. A cycle with $m$ edges is called an {\em $m$-cycle}. A $3$-cycle is also called a {\em triangle}. In the context of digraphs, we reserve the term $m$-cycle for those that are transitive, that is, an oriented circuit through all the vertices; we use triangle for both the transitive and the non-transitive case. Recall that $G$ is {\em transitive} if for any two distinct vertices $v_i$, $v_j$ there is a directed path from $v_i$ to $v_j$ within $G$.
\begin{definition} Suppose that $G=(V,E)$ is a digraph. \begin{itemize}
\item $G$ is a {\em $\Delta$-clique} if it is a triangle that is not transitive (see Figure~\ref{fig:graph_triangles}). \item Let $V'=\{w,v_1,\ldots,v_k\}$, where $v_1,\ldots,v_k$ are all the distinct vertices of $V$ that $w$ connects to. If the only edges of the graph induced on $V'$ have the form $[w\to v_j]$ for $j=1,\ldots,k$, then we say $w$ is a \emph{splitting vertex of order $k\geq 2$}.
\end{itemize} \end{definition}
\begin{figure}
\caption{Two triangles: a $3$-cycle (left) and a $\Delta$-clique (right).}
\label{fig:graph_triangles}
\end{figure}
The use of $\Delta$ in $\Delta$-clique should not be mistaken for the maximum degree, usually denoted by this symbol in graph theory. The symbol $\Delta$ in the present context has a visual association with the dynamics involved.
Note that $w$ being a splitting vertex of order $k$ is a somewhat stronger assumption than simply saying $w$ has out-degree $k:=\#\{j~:~[w \to v_j]\in E\}$ since it also makes assumptions on nearby edges. More precisely:
\begin{lemma} Suppose $G=(V,E)$ is a transitive digraph. Consider a vertex $w$ and all $v_1,\ldots,v_k$ such that there are edges $[w\to v_j]\in E$. Then $w$ is a splitting vertex for $G=(V,E)$ if and only if the digraph induced on $V'=\{w,v_1,\ldots,v_k\}$ has no $\Delta$-clique or $2$-cycle. \end{lemma}
\proof Suppose that $w$ and $V'$ are as above and there is at least one additional edge in the graph $G'=(V',E')$ induced on $V'$ to those required for a splitting vertex. Then either there is an edge $[v_j\to w]$ and hence there is a $2$-cycle, or there is an edge $[v_j\to v_l]$ and hence there is a $\Delta$-clique on $\{w,v_j,v_l\}$. \qed
We say $\xi$ is a {\em splitting node} for $\Sigma$ if the corresponding vertex is a splitting vertex for $G(\Sigma)$.
\bigbreak
Now suppose $G(\Sigma)=(V,E)$ and consider a subset $N'\subset N(\Sigma)$ of nodes. The induced subgraph $G'=(V',E')$ consists of all edges in $G(\Sigma)$ between vertices in $N'$. This can be used to construct an invariant set $$ \Sigma_{G'}=\left(\bigcup_{[v_i\to v_j]\in E'} C_{ij}(\Sigma)\right) \cup\left(\bigcup_{v\in V'} N(v)\right); $$ however, there is no guarantee that $\Sigma_{G'}$ is transitive or even connected. As an illustration, consider the Kirk and Silber graph (see Figure~\ref{Fig:KScomplete} (left)). If $N'=\{v_3,v_4\}$, then $E'=\emptyset$ and $\Sigma_{G'}$ is not connected. If $N'=\{v_2,v_3,v_4\}$, then $E'=\{[v_2 \to v_3],[v_2 \to v_4]\}$ and $\Sigma_{G'}$ is connected but not transitive.
We say $\Sigma$ is a {\em heteroclinic cycle} if $G(\Sigma)$ is a $k$-cycle for some $k\geq 2$. Note that in such a case, the invariant set $\Sigma$ is not necessarily a topological circle because it may contain multiple connections between two nodes and still be a cycle in our definition. Even worse, it is possible that connections may accumulate on each other away from the equilibria (if this is not the case, then the nodes are exclusive in a sense we define later). We now give a lemma that characterizes the relation between heteroclinic networks and digraphs.
\begin{lemma} For any heteroclinic network $\Sigma$ on $N$, the graph $G(\Sigma)$ is a transitive digraph between vertices $N(\Sigma)$. For any transitive subgraph $H=(V_H,E_H)\subset G(\Sigma)$ there is a heteroclinic network $\Sigma_H\subset \Sigma$ such that $G(\Sigma_H)=H$. \end{lemma}
\proof Transitivity of the graph $G(\Sigma)$ follows from $\Sigma$ being indecomposable. The network $\Sigma_H\subset \Sigma$ can be found by taking the union of equilibria in $V_H$ and connections corresponding to $E_H$: $$ \Sigma_H=\left(\bigcup_{[v_i\to v_j]\in E_H} C_{ij}(\Sigma)\right) \cup\left(\bigcup_{v\in V_H} N(v)\right). $$ \qed
This can be used to show the following result, which -- as mentioned above -- is often used as part of the definition of a heteroclinic network.
\begin{lemma} A heteroclinic network (according to Definition~\ref{def:alternative}) is a connected union of heteroclinic cycles. \label{lem:cycles} \end{lemma}
\proof Consider a decomposition of a transitive graph $G(\Sigma)$ into a finite union of cycles $G_1,\ldots,G_k$. Each of the $\Sigma_{G_j}\subset \Sigma$ is a heteroclinic cycle but the union of cycles contains $\Sigma$, that is, $$ \Sigma\subset \bigcup_{j=1}^{k} \Sigma_{G_j} $$ as it contains all connections and nodes within $\Sigma$. Hence $\Sigma$ is precisely this union. \qed
The minimum length cycles in $\Sigma$ are of interest: we say that the heteroclinic network $\Sigma$ {\em contains a $k$-cycle} for some $k> 1$ if $G(\Sigma)$ contains a $k$-cycle. Note that the decomposition of a heteroclinic network into cycles is usually not unique. The proof of Lemma~\ref{lem:cycles} implicitly uses a decomposition into cycles that have length equal to the minimum length cycle that returns to any edge but equally there may be a decomposition using longer cycles. Our standing assumption means that $G(\Sigma)$ only contains $k$-cycles for $k\geq 2$.
\subsection{Properties of nodes of heteroclinic networks}
We now consider some properties of connections from nodes within heteroclinic networks.
\begin{definition}\label{def:nodes} Suppose that $\Sigma$ is a heteroclinic network and $\xi_i$ a node in that network. We define the following: \begin{itemize} \item $\xi_i$ is \emph{complete in $\Sigma$} if $W^u(\xi_i) \subset \Sigma$ (see Figure~\ref{complete-node-figure}). \item $\xi_i$ is \emph{almost complete in $\Sigma$} if $W^u(\xi_i) \setminus \Sigma$ is of measure zero (with respect to Lebesgue measure for any volume form on $W^u(\xi_i)$). \item $\xi_i$ is \emph{equable in $\Sigma$} if $C_{ij}$ is a union of manifolds of the same dimension and this dimension $\dim(C_{ij}(\Sigma))$ is equal for all $j$ with $C_{ij}(\Sigma) \neq \emptyset$ (see Figure~\ref{splitting-node-figure}). \item $\xi_i$ is \emph{exclusive in $\Sigma$} (see Figure~\ref{splitting-node-figure})
if for all $j$ where $C_{ij}(\Sigma)$ is non-empty we have \begin{equation*} \overline{C_{ij}(\Sigma)}\cap N(\Sigma) = \{\xi_i,\xi_j\}. \end{equation*} \item $\Sigma$ is a {\em complete}/{\em almost complete}/{\em equable}/{\em exclusive} network if all nodes are respectively complete/ almost complete/ equable/exclusive. \item $\Sigma$ is called {\em clean} \cite[Definition 1.3]{Fie17} if it is compact and complete. \end{itemize} \end{definition}
We note that the graph of a complete network is not necessarily a complete graph (where every pair of vertices is directly connected by an edge). The network in Figure~\ref{Fig:KScomplete} (right) is complete but the corresponding graph is not ($\xi_3$ and $\xi_4$ are not directly connected, for instance).
If a node $\xi_i$ is not exclusive, then there exist connections in $C_{ij}$ that are arbitrarily close to a node in the network other than $\xi_i$ and $\xi_j$. Note also that an equable network may have connections of different dimensions. We comment on the effects of equability on the dynamics in Section \ref{sec:application}.
\begin{figure}
\caption{Let $\Sigma$ be the network with nodes $\xi_i$, $i=1,2,3,4$ and connections between them in $\mathbb{R}^3$. The node $\xi_2$ on the left is complete as the origin is unstable and the 1-dimensional unstable manifold of $\xi_2$ is contained in $\Sigma$. The node $\xi_2$ on the right is not complete as some points in $W^u(\xi_2)$ move away from $\Sigma$.}
\label{complete-node-figure}
\end{figure}
\begin{figure}\label{splitting-node-figure}
\end{figure}
The well-known network of Kirk and Silber \cite{KirSil94} provides an example of an equable network that is not complete; we comment on this in Subsection \ref{subsec:KS}. Other authors have implicitly noted the importance of graph structures such as $\Delta$-cliques for properties of clean heteroclinic networks \cite{Bic18,Fie17}. Note that a clean network need not be equable: we give an example of this in Subsection \ref{subsec:B3B3C4}.
For a given set of equilibria $N$ it is not necessary that $C(N) \cup N$ is complete or even closed -- this can fail for a variety of reasons. Although it may not be true that for a given $N$ there is $N'$ containing $N$ such that $N' \cup C(N')$ is complete, in Section~\ref{sec:completion}, we find constructions such that $N'\cup C(N')$ is at least closed.
The following result highlights that a splitting node $\xi_i$ in a complete and equable network is either very simple and the splitting is of order $2$, or it is not exclusive -- the closure of $C_{ij}$ contains a node $\xi_k$ that is neither $\xi_i$ nor $\xi_j$. If this is the case, then there will be some $\ell\not\in\{i,j\}$ such that $C_{i\ell}\neq\emptyset$.
\begin{lemma} \label{lem:completenode} Suppose that the node $\xi_i$ is complete in $\Sigma$. Then $\xi_i$ is almost complete in $\Sigma$. If in addition $\xi_i$ is exclusive, equable and a splitting node of order $k\geq 2$, then $k=2$ and $\dim(W^u(\xi_i))=1$. \end{lemma}
\proof Suppose $\dim(W^u(\xi_i))=d$. If $\xi_i$ is complete in $\Sigma$, then $\ell(W^u(\xi_i)\setminus \Sigma)=\ell(\emptyset)=0$ where $\ell$ denotes $d$-dimensional Lebesgue measure on $W^u(\xi_i)$ and so $\xi_i$ is almost complete in $\Sigma$.
For the second part, pick some small $\delta>0$ such that $S:=\{x\in W^u(\xi_i)~:~|x-\xi_i|=\delta\}$ is diffeomorphic to a $(d-1)$-sphere. Note that the $(d-1)$-sphere is connected for $d\geq 2$ and has two components for $d=1$. If $\xi_i$ is also an equable splitting node of order $k$, then all connections from $\xi_i$ must have the same dimension. If $\xi_i$ is exclusive then $\overline{C_{ij}}\cap S$ will not intersect $\overline{C_{im}}$ for $m \in \{1,\ldots,k\}$, $m\neq j$: hence there is a partition of $S$ into $k$ closed disjoint sets. This is only possible for $k=2$ and $d=1$. \qed \medbreak
If $\Sigma$ is a complete heteroclinic network, then by Lemma~\ref{lem:completenode} it is almost complete: moreover, in such a case it is maximal in the sense that $$ \Sigma=\bigcup_{\xi\in N(\Sigma)} W^u(\xi) \;\; \mbox{ and } \;\; C_{ij}(\Sigma)=C_{ij}. $$ There is also a partition of $W^u(\xi)$ into a union of connections from $\xi$. Moreover, from the proof it is easy to see that if $\xi_i$ is a splitting node and $W^u(\xi_i)$ is at least 2-dimensional, then $\xi_i$ is not complete. If $W^u(\xi_i)$ is 2-dimensional and $\xi_i$ is complete, then it is not a splitting node.
Note that asymptotic stability of a compact heteroclinic network implies that it is clean \cite{Fie17}. The opposite is not true: heteroclinic objects can lose asymptotic stability through resonance bifurcations. Take the well-known Guckenheimer-Holmes cycle in \cite{GucHol88} for instance, which is complete, even clean, but unstable when condition (c) in \cite[Lemma 3]{GucHol88} is broken. By contrast, a transverse bifurcation involves a sign change for some eigenvalue (or its real part) and thus affects the completeness of a cycle/network.
It follows from Definition \ref{def:nodes} that if a node $\xi_i$ is such that $\dim(W^u(\xi_i))=1$, then $\xi_i$ is equable; such a $\xi_i$ is also exclusive if the nodes are equilibria and the network is of depth one in the sense of \cite{AshFie99}.
If $G$ has a $\Delta$-clique, then it does not follow that there is a non-exclusive or a non-equable node in $\Sigma_G$. Take a $B_2^+$ cycle (see Subsection \ref{subsec:B2B2}) between equilibria $\xi_a$ and $\xi_b$ for example. Add a fourth space dimension with an equilibrium $\xi$ on the extra axis, such that $\dim(W^u(\xi))=1$ and there are connections from $\xi$ to $\xi_a$ and $\xi_b$. Then the three equilibria form a $\Delta$-clique, but $\xi$ is exclusive and equable. This can be embedded in a heteroclinic network in a higher dimensional space.
\section{Realization as almost complete equable heteroclinic networks}\label{sec:completion}
The problem of realizing abstract digraphs as heteroclinic networks was raised in \cite{AshPos13,AshPos16a} and \cite{Fie15}, and several methods have been proposed. Suppose $G=(V,E)$ is an arbitrary transitive digraph. We say the dynamics of (\ref{eq:ode}) {\em realizes} $G$ as the heteroclinic network $\Sigma$ if there is a choice of $f$ and $\Sigma$ such that $G(\Sigma)=G$. Without loss of generality we can choose $\Sigma$ to be the maximal choice, i.e. $$ \Sigma=\left(\bigcup_{[v_i\to v_j]\in E} C_{ij}(\Sigma) \right) \cup\left(\bigcup_{v\in V} N(v)\right). $$ In \cite{AshPos13}, two methods are presented to show that, under minimal assumptions, a digraph $G$ can be realized as a heteroclinic network. The simplex method embeds the graph in a simplex by placing the nodes on the coordinate axes. This method realizes the graph provided it has neither $1$- nor $2$-cycles. The cylinder method places the nodes along one coordinate axis and realizes any graph provided it has no $1$-cycles.
In this section we show that the simplex construction, for a certain choice of parameters, gives a realization that is an almost complete, equable subnetwork of a closed heteroclinic network, and this is robust under certain equivariant perturbations. We state and prove this as Theorem~\ref{thm:completion} and later, in Subsection \ref{subsec:KS} give an example that elucidates the result and method of proof.
According to \cite[Proposition 1]{AshPos13} any graph $G$ without 1- and 2-cycles can be realized as a heteroclinic network $\Sigma=C(N) \cup N$ on a set of equilibria $N$. The resulting vector field on $\mathbb{R}^n$, $n=\# N$, is $\mathbb{Z}_2^n$-equivariant and yields an equilibrium on each coordinate axis and connections in coordinate planes. Theorem \ref{thm:completion} shows that under the additional hypothesis that there are no $\Delta$-cliques and with an appropriate choice of parameters, this can be done in such a way that $\Sigma$ is an almost complete, equable subnetwork of a closed network $\Sigma'$. Although the vector field is as in \cite[Proposition 1]{AshPos13} our method of proof involves the construction of Lyapunov-type functions that use the additional hypotheses.
\begin{theorem}\label{thm:completion} Let $G$ be a transitive directed graph on $n$ vertices with no 1-cycles, 2-cycles or $\Delta$-cliques. Then there exists a $\mathbb{Z}_2^n$-equivariant vector field $f$ on $\mathbb{R}^n$ that realizes $G$ as a network $\Sigma(N)$ between nodes $N=\{\xi_1,\ldots,\xi_n\}$. This realization is robust to $\mathbb{Z}_2^n$-equivariant perturbations. The vector field can be chosen such that there is an additional set of nodes $N'$ and a closed heteroclinic network $\Sigma'$ between $N\cup N'$ such that $\Sigma$ is an almost complete, equable subnetwork of $\Sigma'$. \end{theorem}
\begin{proof} For $j=1,\ldots,n$ we define the smooth vector field on $\mathbb{R}^n$ \begin{equation} \dot{x}_j = f_j(x):= x_j F_j(x) \label{eq:system} \end{equation} where \begin{equation} F_j(x):=1+\sum_{i} [(\epsilon +\eta) A_{ij}-\eta(1-\delta_{ij})-1]x_i^2. \label{eq:defFj} \end{equation} We set $A_{ij}=1$ if $G$ prescribes a connection from $\xi_i$ to $\xi_j$, and $A_{ij}=0$ otherwise, while $\delta_{ij}$ is the Kronecker symbol and the constants $\epsilon, \eta$ satisfy $0< \epsilon <1$ and $\eta >0$.
Since the vector field \eqref{eq:system} satisfies the hypotheses of \cite{AshPos13}, only the last statement requires proof.
There are equilibria at $\xi_j$ corresponding to the unit basis for $\mathbb{R}^n$. Let $N=\{\xi_j\}$ denote these equilibria of (\ref{eq:system}) on the coordinate axes.
Lack of 1- and 2-cycles can be expressed as $A_{ii}=0$, $A_{ij}A_{ji}=0$ for all $i$ and $j$, while lack of $\Delta$-cliques means that $A_{ij}A_{jk}=1$ implies $A_{ik}=0$ for any $i,j,k$. We write $$ O_j:=\{k\in\{1,\ldots,n\}~:~A_{jk}=1\} $$ for the non-empty set of indices corresponding to the outgoing directions from $\xi_j$. The proof proceeds in the following steps. \paragraph{Step 1 -- existence of an absorbing region for the dynamics:}
We write $R:=|x|^2=\sum_j x_j^2$ and calculate \begin{eqnarray*}
\frac{\textnormal{d}}{\textnormal{d}t}R&=& \sum_j 2x^2_jF_j(x)\\
&=&\sum_j 2x_j^2 \left[1+\sum_i [(\epsilon+\eta) A_{ij}-\eta(1-\delta_{ij})-1]x_i^2 \right]\\
&=&2R-2R^2+2\sum_{i,j}\left[(\epsilon+\eta) A_{ij}-\eta(1-\delta_{ij})\right]x_i^2x_j^2. \end{eqnarray*}
But note that $$ -\eta\leq(\epsilon+\eta) A_{ij}-\eta(1-\delta_{ij})\leq \epsilon $$ and so $$ -\eta R^2\leq \sum_{i,j} [(\epsilon+\eta) A_{ij}-\eta(1-\delta_{ij})]x_i^2x_j^2\leq \epsilon R^2, $$ which implies $$ 2R(1-R-\eta R)\leq \frac{\textnormal{d}}{\textnormal{d}t}R \leq 2R(1-R+\epsilon R) .$$ This means that there is an absorbing region $R_0<R<R_1$ where $$ R_0:=\frac{1}{1+\eta} \leq R \leq R_1:=\frac{1}{1-\epsilon}. $$ Therefore, for any $\eta>0$ and $0<\epsilon<1$ there is an absorbing spherical annulus $$
S:=\left\{x~:~\frac{1}{1+\eta} \leq |x|^2 \leq \frac{1}{1-\epsilon}\right\}. $$ \bigbreak If we fix $j$ and define the invariant subspace $$ \Omega_j:=\{x~:~x_k=0 ~\mbox{ if }k\not \in O_j\} $$ then $\xi_j$ has an unstable manifold contained within the invariant subspace \begin{equation} Q_j:=\Omega_j \oplus \langle \xi_j\rangle. \label{eq:defQj} \end{equation}
\paragraph{Step 2 -- $\Omega_j$ attracts almost all initial conditions in $Q_j$:} In fact, every trajectory in $Q_j$ that is not in the one dimensional subspace spanned by $\xi_j$ limits to $\Omega_j$. We define a function $\Phi_j: Q_j \rightarrow \mathbb{R}$ by
$$ \tan \Phi_j:= \frac{x_j^2}{\sum_{i\in O_j} x_i^2} $$ and note that for any $x\in Q_j$ and $i\in O_j$ we have \begin{eqnarray*}
\dot{x}_j&=&x_j\left[1+\sum_{k\in O_j}[-\eta-1]x_k^2-x_j^2 \right]\\
\dot{x}_i&=&x_i\left[1+[\epsilon -1]x_j^2+\sum_{i\neq k\in O_j}[-\eta-1]x_k^2-x_i^2 \right]. \end{eqnarray*}
Note that $$ \frac{\textnormal{d}}{\textnormal{d}t}\left[ \tan \Phi_j\right] = \frac{\textnormal{d}}{\textnormal{d}t}\left[\frac{x_j^2}{\sum_{i\in O_j} x_i^2}\right] =(1+\tan^2 \Phi_j) \frac{\textnormal{d}}{\textnormal{d}t}\Phi_j, $$ so that $$ \frac{\textnormal{d}}{\textnormal{d}t}\Phi_j = \dfrac{1}{1+\tan^2 \Phi_j}\frac{\textnormal{d}}{\textnormal{d}t}\left[ \tan \Phi_j\right]. $$ Hence we have \begin{eqnarray*}
\frac{[\sum_{i\in O_j}x_i^2]^2+x_j^4}{[\sum_{i\in O_j}x_i^2]^2}\frac{\textnormal{d}}{\textnormal{d}t}\Phi_j &=& 2\frac{x_j\dot{x}_j[\sum_{i\in O_j}x_i^2]- x_j^2\sum_{i\in O_j}x_i\dot{x}_i }{[\sum_{i\in O_j}x_i^2]^2} \end{eqnarray*} for all $x\in Q_j\setminus \{0\}$. This means that \begin{eqnarray*}
\frac{\textnormal{d}}{\textnormal{d}t}\Phi_j &=& 2\frac{x_j\dot{x}_j[\sum_{i\in O_j}x_i^2]- x_j^2\sum_{i\in O_j}x_i\dot{x}_i }{[\sum_{i\in O_j}x_i^2]^2+x_j^4}\\
&=& 2\frac{-\eta x_j^2\sum_{k\in O_j}x_k^2[\sum_{i\in O_j}x_i^2]- x_j^2\sum_{i\in O_j}x_i^2[\epsilon x_j^2-\eta\sum_{i\neq k\in O_j}x_k^2]}{[\sum_{i\in O_j}x_i^2]^2+x_j^4}\\
&=& -2\frac{x_j^2\sum_{i\in O_j}x_i^2(\eta x_i^2+\epsilon x_j^2)}{[\sum_{i\in O_j}x_i^2]^2+x_j^4}. \end{eqnarray*} For any $\eta>0$ and $\epsilon>0$ this quantity is clearly finite as long as $x\neq 0$ and non-positive except when $x_i^2=0$ for all $i\in O_j$. Hence $\Phi_j$ decreases monotonically to $0$ for any initial conditions in $Q_j\cap S$ except when $x_i^2=0$ for all $i\in O_j$. This implies that all initial conditions except those on the $x_j$-axis converge to $\Omega_j$.
\paragraph{Step 3 -- the dynamics restricted to $\Omega_j$ is a gradient flow:} Suppose $x\in \Omega_j$ so that $R=\sum_{i\in O_j}x_i^2$. Let $$ V_j=-\frac{1}{2}R+\frac{1}{4}R^2+\frac{1}{4}\eta \sum_{k\in O_j} x_k^2\left [\sum_{l\in O_j,~l\neq k} x_l^2\right] $$ and note that for any $i\in O_j$ and $x\in \Omega_j$ we have $$ -\frac{\partial}{\partial x_i} V_j = x_i(1-R) - \eta x_i \sum_{k\neq i}x_k^2= x_i F_i(x), $$ where the last equality uses the fact that the absence of $1$-cycles and $\Delta$-cliques forces $A_{ki}=0$ for all $i,k\in O_j$, so that on $\Omega_j$ we have $F_i(x)=1-R-\eta\sum_{k\in O_j,\,k\neq i}x_k^2$. Hence the flow (\ref{eq:system}) is a gradient flow when restricted to any $\Omega_j$.
To conclude the proof, note that the only minima of $V_j$ on $\Omega_j$ correspond to stable equilibria of the vector field which are at $x=\xi_i$ for each $i\in O_j$. These equilibria are linearly stable, meaning they are quadratic minima for $V_j$ on $\Omega_j$. All other stationary points of $V_j$ are quadratically non-degenerate and correspond to saddles or repellers of (\ref{eq:system}) on $\Omega_j$. This means that the flow on $\Omega_j$ is Morse-Smale and robust to perturbations. We define the set of separating nodes $N'$ to be the union over $j$ of all additional stationary points of $V_j$, and we define the heteroclinic network $\Sigma'$ to be the union of the closures of the unstable manifolds of the $\xi_j$.
Note that $Q_j$ contains $W^u(\xi_j)$ and almost any trajectory in $Q_j$ limits to an equilibrium in $\Omega_j$. Hence by including all equilibria in $Q_j$ in $N'$ we ensure that all $W^u(\xi_j)$ consist of connections between equilibria in the network. More precisely, any initial condition in $Q_j$ that is in $$ T_i=\{x\in Q_j~:~ x_i^2>x_k^2 \mbox{ for all }k\in O_j\} $$ is asymptotic to $\xi_i$. Because $W^u(\xi_j)$ is transverse to the radial direction $\langle \xi_j\rangle$, almost all trajectories in $W^u(\xi_j)$ limit to one of the stable equilibria $\xi_i$, ensuring that $\xi_j$ is equable in $\Sigma$. \end{proof}
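Although not needed for the argument, the construction above is easy to experiment with numerically. The following Python sketch (a minimal illustration only; the function and variable names are ours and are not taken from any existing package) evaluates the right-hand side of (\ref{eq:system},\ref{eq:defFj}) for a given adjacency matrix $A$ and parameters $\epsilon$, $\eta$, here with the Kirk--Silber graph of Subsection~\ref{subsec:KS} as input.
\begin{verbatim}
import numpy as np

def simplex_vector_field(x, A, eps, eta):
    """Evaluate dx_j/dt = x_j * (1 + sum_i [(eps+eta)A_ij
    - eta(1-delta_ij) - 1] x_i^2), cf. (eq:system)-(eq:defFj)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x2 = x ** 2
    F = np.empty(n)
    for j in range(n):
        coeff = (eps + eta) * A[:, j] - eta * (1.0 - np.eye(n)[:, j]) - 1.0
        F[j] = 1.0 + coeff @ x2
    return x * F

# Kirk-Silber graph: edges 1->2, 2->3, 2->4, 3->1, 4->1 (0-indexed below).
A = np.zeros((4, 4))
for (i, j) in [(0, 1), (1, 2), (1, 3), (2, 0), (3, 0)]:
    A[i, j] = 1.0
print(simplex_vector_field([0.1, 0.9, 0.1, 0.1], A, eps=0.02, eta=0.05))
\end{verbatim}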
Note that the network $\Sigma'$ is not just closed but clean if all separating nodes $N'$ have unstable manifolds that are entirely contained within $Q_j$ for some $j$. We see in Subsection~\ref{subsec:KS} that this need not be the case, even for a simple but nontrivial network.
\section{Examples}\label{sec:example}
In this section we discuss several examples to illustrate what it means for a network/node to be (in)complete and/or equable. In an equivariant setting, simple heteroclinic cycles\footnote{Cycles are defined as simple in \cite{KruMel04} if the nodes are in different connected components of 1-dimensional fixed-point spaces and the connections are in 2-dimensional fixed-point spaces.} have been classified into types A, B or C by Krupa and Melbourne \cite{KruMel04}. We use their terminology here to indicate the type of a cycle (through the respective letter) and its number of equilibria (as a subscript). The superscript $+/-$ encodes information about the symmetry group that is not relevant for our discussion.
All of our examples are equivariant under the action of some symmetry group $\Gamma$. We identify objects in the same group orbit so that when the graph has vertices $\xi_i$ and $\xi_j$ and an edge $[\xi_i \to \xi_j]$, the network has connections between the corresponding elements in the group orbits $\Gamma.\xi_i$ and $\Gamma.\xi_j$. These connections are symmetric images of one another.
\subsection{The Kirk and Silber/ $(B_3^-,B_3^-)$ network}\label{subsec:KS}
The heteroclinic network of Kirk and Silber \cite{KirSil94} consists of two cycles of type $B_3^-$ with connections (typically viewed as one-dimensional) between equilibria $\xi_1, \xi_2, \xi_3, \xi_4 \in \mathbb{R}^4$. Note that aspects of this network were previously discussed in \cite[Examples 2.10]{Fie17} and \cite[Case I]{Kiretal12}.
It realizes the graph given in Figure~\ref{Fig:KScomplete} (left). The vector field realizing the network robustly has symmetry $\mathbb{Z}^4_2$ where the group acts as multiplication by $-1$ in each coordinate. The node $\xi_2$ is a splitting node and not complete.\footnote{When a connection $[\xi_i \to \xi_j]$ exists, the connection $[\xi_i \to -\xi_j]$ also exists. However, under the identification of objects in the same group orbit, only $\xi_2$ is a splitting node.} The closure of its unstable manifold contains a separating node $\zeta$ (in the plane containing $\xi_3$ and $\xi_4$) and connections $[\xi_2 \to \zeta]$, $[\zeta \to \xi_3]$ and $[\zeta \to \xi_4]$, see Figure~\ref{Fig:KScomplete} (right).
\begin{figure}
\caption{The $(B_3^-,B_3^-)$ network has an incomplete splitting node at $\xi_2$ (left). Adding a node $\zeta$ and connections $[\xi_2 \to \zeta]$, $[\zeta \to \xi_3]$ and $[\zeta \to \xi_4]$ to the network makes $\xi_2$ complete (right). On the left, $\xi_2$ is equable and exclusive whereas on the right it is not. The numbers in brackets correspond to the dimension of the connection when this is greater than one. }
\label{Fig:KScomplete}
\end{figure}
To better illustrate the construction in the proof of Theorem~\ref{thm:completion}, we apply it to this graph/network. With the given connection structure we obtain the system (\ref{eq:system},\ref{eq:defFj}) which can be written \begin{eqnarray}
\dot{x}_1&=& x_1[1-|x|^2+\epsilon (x_3^2+x_4^2)-\eta x_2^2]\nonumber\\
\dot{x}_2&=& x_2[1-|x|^2+\epsilon x_1^2 -\eta (x_3^2 + x_4^2)]\nonumber\\
\dot{x}_3&=& x_3[1-|x|^2+\epsilon x_2^2 -\eta(x_1^2+x_4^2)]\nonumber\\
\dot{x}_4&=& x_4[1-|x|^2+\epsilon x_2^2 -\eta(x_1^2+x_3^2)].\label{eq:KSrealise} \end{eqnarray} As required this system has four equilibria $\xi_i$ on the unit coordinate axes. We note that $$ \Omega_1=\{(0,a,0,0)\},~\Omega_2=\{(0,0,a,b)\},~\Omega_3=\Omega_4=\{(a,0,0,0)\}, $$ and (\ref{eq:defQj}) means that if $x\in Q_2$ then $x=(0,x_2,x_3,x_4)$. Hence, if $x\in Q_2$, we have \begin{eqnarray*}
\dot{x}_2&=& x_2[1-|x|^2-\eta(x_3^2+x_4^2)]\\
\dot{x}_3&=& x_3[1-|x|^2+\epsilon x_2^2 -\eta x_4^2]\\
\dot{x}_4&=& x_4[1-|x|^2+\epsilon x_2^2 -\eta x_3^2]. \end{eqnarray*} In fact the only attractors in $Q_2$ are $\xi_3$ and $\xi_4$: consider $$ \tan \Phi_2:= \frac{x_2^2}{x_3^2+x_4^2}, $$ then $$ \frac{\textnormal{d}}{\textnormal{d}t}\Phi_2= -2x_2^2\frac{\epsilon(x_3^2+x_4^2)x_2^2+\eta(x_3^4+x_4^4)}{x_2^4+(x_3^2+x_4^2)^2 } $$ which is non-positive on $Q_2$ and vanishes only when $x_2=0$ or $x_3^2+x_4^2=0$; hence $\Phi_2$ decreases monotonically to $0$ unless $x_3^2+x_4^2=0$. Finally, if we define $$ V_2(0,0,x_3,x_4):=-R/2+R^2/4+\eta x_3^2x_4^2/2 $$ then on $\Omega_2$ we have \begin{eqnarray*}
\dot{x}_3&=&-\frac{\partial V_2}{\partial x_3}\\
\dot{x}_4&=&-\frac{\partial V_2}{\partial x_4} \end{eqnarray*} giving a gradient flow in $\Omega_2$, with almost all trajectories converging to $\xi_3$ or $\xi_4$ (the minima of $V_2$).
Restricting the flow to $\Omega_2$ we find a separating node $\zeta = (0,0,x_3,x_4)$ such that $x_3^2=x_4^2=\frac{1}{2+\eta}$. As illustrated in the right panel of Figure~\ref{Fig:KScomplete}, $\zeta$ is a saddle in $\Omega_2$. However, its unstable space includes the direction of $\xi_1$, showing that the unstable manifold of $\zeta$ is not contained in $Q_2$. In this case a clean network can be obtained by also including $W^u(\zeta)$, which means having a two-dimensional connection $[\zeta \to \xi_1]$ in the network.
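For completeness, both claims about $\zeta$ can be checked directly from (\ref{eq:KSrealise}): on $\Omega_2$ an equilibrium with $x_3x_4\neq 0$ must satisfy $1-R-\eta x_4^2=1-R-\eta x_3^2=0$, so that
$$
x_3^2=x_4^2 \quad\mbox{ and }\quad 1-(2+\eta)x_3^2=0, \quad\mbox{ i.e. }\quad x_3^2=x_4^2=\frac{1}{2+\eta},
$$
while the eigenvalue of the linearization at $\zeta$ in the $x_1$-direction is
$$
1-|\zeta|^2+\epsilon(x_3^2+x_4^2)=\frac{\eta+2\epsilon}{2+\eta}>0,
$$
so that $W^u(\zeta)$ indeed leaves $Q_2$.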
\subsection{The $(B_3^-,B_3^-,C_4^-)$ network}\label{subsec:B3B3C4}
This network, along with other examples, is discussed in the context of clean networks in \cite[Examples 2.10]{Fie17}. It also appears in Brannath \cite{Bra94} and Castro and Lohse \cite{CasLoh16}. Its graph is given in Figure~\ref{B3B3C4complete} and the simplex method provides a vector field with symmetry $\mathbb{Z}_2^4$ as above that realizes it. The $(B_3^-,B_3^-,C_4^-)$ network has two $\Delta$-cliques: one involving the nodes $\xi_2$, $\xi_3$ and $\xi_4$; the other involving the nodes $\xi_3$, $\xi_1$ and $\xi_4$. Hence, it does not satisfy the hypotheses of Theorem~\ref{thm:completion}. However, the network is clean since the nodes with out-degree greater than 1, $\xi_2$ and $\xi_3$, are complete. Note that $\xi_2$ and $\xi_3$ are not equable: $\dim(\Sigma \cap C_{23})=1 \neq 2 =\dim(\Sigma \cap C_{24})$ and $\dim(\Sigma \cap C_{34})=1 \neq 2 =\dim(\Sigma \cap C_{31})$.
\begin{figure}
\caption{The $(B_3^-,B_3^-,C_4^-)$ network has no splitting nodes and is clean. There are two $\Delta$-cliques involving the non-equable, but complete, nodes $\xi_2$ and $\xi_3$. The numbers in brackets correspond to the dimension of the connection when this is greater than one.}
\label{B3B3C4complete}
\end{figure}
There are infinitely many instances of the $(B_3^-, B_3^-)$ network with one-dimensional connections as equable subnetworks. Their union forms an almost complete, but non-equable subnetwork with the same $(B_3^-, B_3^-)$ graph, but there is no subnetwork of the $(B_3^-, B_3^-, C_4^-)$ network that is both equable and almost complete.
\subsection{The $(B_2^+,B_2^+)$ network}\label{subsec:B2B2}
A network with two cycles of type $B_2^+$ is described in Castro and Lohse \cite{CasLoh14}. Note that even though this object is usually referred to as a heteroclinic network, our Definition~\ref{def:alternative} classifies it as a heteroclinic cycle. In this sense, our definition of heteroclinic cycle is less strict than many definitions in the literature. According to results in \cite{AshPos13}, the cylinder method can be used to provide a vector field in $\mathbb{R}^4$ realizing the corresponding graph. The network is clean and equable. It has no splitting nodes since all connections are between the same two equilibria.
The vector field supporting the $(B_2^+,B_2^+)$ network has symmetry $\mathbb{Z}_2^3$, where each $\mathbb{Z}_2$ factor acts by multiplication by $-1$ on one of the last three coordinates of $\mathbb{R}^4$. There is a one-dimensional connection $[\xi_a \to \xi_b]$. The full set of connections $C_{ba}$ consists of three types (distinguished by isotropy) of connections: a one-dimensional connection contained in the $(x_1,x_3)$-plane, another one-dimensional connection in the $(x_1,x_4)$-plane and a two-dimensional connection in the $(x_1,x_3,x_4)$-space. See Figure~\ref{B2B2complete}.
\begin{figure}
\caption{The $(B_2^+,B_2^+)$ network is clean, equable and exclusive. There are no splitting nodes; the connection from $\xi_a$ to $\xi_b$ is one-dimensional, while the connection back is two-dimensional. The shaded area shows a two-dimensional set of the connections in $C_{ba}$. The connection $C_{ab}$ (dashed) is contained in the $(x_1,x_2)$-plane $P_{12}$.}
\label{B2B2complete}
\end{figure}
Certainly many more examples can be found in the literature. For instance, Kirk et al.\ \cite{Kiretal10} discuss a non-simple network in $\mathbb{R}^4$ with six equilibria and $\mathbb{Z}_2^3$ symmetry that is clean, but not equable. It is obtained by neither the simplex nor the cylinder method and there are nodes where the linearization has complex eigenvalues.
\section{A Markov switching process and almost complete equable networks}
\label{sec:application}
To give more insight into the importance of almost complete and equable networks, we consider a heteroclinic network $\Sigma=C(N)\cup N$ and define the following idealized (but somewhat natural) discrete-time model of stochastic dynamics on a network.
For each node $\xi_i\in N$ we consider a probability measure $\rho_i(x)$ that is supported and absolutely continuous on $W^u(\xi_i)$ with respect to a Lebesgue measure, and whose density is non-zero in some neighbourhood of $\xi_i$. We define a one-step discrete-time {\em Markov switching process} $\Xi=\{\xi(n)\in N\cup \{e\}\}_{n\in\mathbb{Z}}$ on $\Sigma$ where $e$ represents an ``escaped" state. We define the switching probability from $\xi_j$ to $\xi_k$ by \begin{equation}
\mathcal{P}(\xi(n+1)=\xi_k|\xi(n)=\xi_j)=\rho_j(C_{jk}(\Sigma)). \label{eq:transitions} \end{equation} Note that if a node is not almost complete, then paths of the process can ``leak out" from that node: If we define \begin{equation*}
\mathcal{P}(\xi(n+1)=e|\xi(n)=\xi_j)=1-\sum_{k}\rho_j(C_{jk}(\Sigma)), \end{equation*}
then this may be non-zero. Finally, we assume $\mathcal{P}(\xi(n+1)=e|\xi(n)=e)=1$.
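Sample paths of this process are easy to generate once the switching probabilities $\rho_j(C_{jk}(\Sigma))$ are prescribed or estimated. The Python sketch below is purely illustrative: the routine and the transition matrix shown (chosen here to mimic the Kirk--Silber graph) are hypothetical and are not derived from any particular vector field.
\begin{verbatim}
import numpy as np

def simulate_switching(P, start, n_steps, rng=None):
    """Iterate the Markov switching process: P[j, k] is the probability
    of switching from node j to node k; any probability mass missing
    from row j is assigned to the absorbing escape state e."""
    rng = np.random.default_rng() if rng is None else rng
    m = P.shape[0]
    ESCAPE = m                      # index m plays the role of the state e
    path, state = [start], start
    for _ in range(n_steps):
        if state == ESCAPE:
            path.append(ESCAPE)     # e is absorbing
            continue
        row = P[state]
        probs = np.append(row, max(0.0, 1.0 - row.sum()))
        probs /= probs.sum()
        state = int(rng.choice(m + 1, p=probs))
        path.append(state)
    return path

# Hypothetical switching probabilities for the Kirk-Silber graph (nodes 0..3):
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])
print(simulate_switching(P, start=0, n_steps=20))
\end{verbatim}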
The following lemma shows that in cases where this process almost surely does not escape, it explores an almost complete equable subnetwork of $\Sigma$. This subnetwork is obtained by ignoring for each node all lower-dimensional connections (and corresponding nodes) that make it non-equable, e.g.\ $\zeta$ and the connections leading to and from it in Figure \ref{splitting-node-figure}. It is maximal in the sense that it contains all other equable, almost complete subnetworks.
\begin{proposition} Consider a heteroclinic network $\Sigma$ supporting a Markov switching process $\Xi$. If $\Xi$ starting at any point on $N(\Sigma)$ almost surely avoids escape, then $\Sigma$ is almost complete. Moreover, there is an equable almost complete subnetwork $\Sigma^*$ such that only transitions within $\Sigma^*$ are seen with positive probability. \label{prop:markov} \end{proposition}
\proof Note that by definition, $\Xi$ only gives positive probability to transitions that correspond to positive measure subsets of $W^u(\xi_j)$ for all $j$.
If $\mathcal{P}(\xi(n+1)=e|\xi(n)=\xi_j)=0$, then $\sum_k\rho_j(C_{jk}(\Sigma))=1$ for all $j$ and so $\Sigma$ is almost complete. Finally, note that (\ref{eq:transitions}) implies that the only connections with non-zero probability of appearing correspond to those with positive measure within $W^u(\xi_i)$ and hence those within some equable almost complete subnetwork. \qed
We note that Proposition~\ref{prop:markov} is not an equivalence. The converse, i.e.\ that for an equable, almost complete network the Markov process almost surely avoids escape and explores the entire network, will not hold if there is a connecting set $C_{ij}$ of dimension $d$ but zero $d$-dimensional measure.
Consider the stochastic differential equation (SDE) \begin{equation*} \textnormal{d}x=f(x)\textnormal{d}t+\alpha \textnormal{d}W_t, \end{equation*} where $f$ realizes a given graph as an attracting heteroclinic network for (\ref{eq:ode}), $\alpha>0$ is some small constant and $W_t$ is a standard $n$-dimensional Wiener process. As an example, Figure~\ref{FigKSnoise} shows a typical trajectory for the realization of the Kirk-Silber network (\ref{eq:KSrealise}) on $\mathbb{R}^4$ with added noise \cite{ArmStoKir03}. The figure shows a single trajectory exploring almost all directions of exit from the saddle $\xi_2=(0,\pm 1,0,0)$. For smaller noise levels $\alpha$, the excursions become more concentrated around the one-dimensional connections, but other regions of the unstable manifold are still visited with apparently non-zero probability.
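For illustration, the following Python sketch shows the kind of simulation that lies behind Figure~\ref{FigKSnoise}; it integrates the SDE for the realization (\ref{eq:KSrealise}) with a stochastic Heun scheme and records the visits to the region $x_1^2<0.1$. It is only a sketch of such an experiment (the parameter values and step size follow the figure caption, but this is not the code used to produce the figure).
\begin{verbatim}
import numpy as np

def f(x, eps=0.02, eta=0.05):
    """Right-hand side of the Kirk-Silber realization (eq:KSrealise)."""
    x1, x2, x3, x4 = x
    r2 = np.dot(x, x)
    return np.array([
        x1 * (1 - r2 + eps * (x3**2 + x4**2) - eta * x2**2),
        x2 * (1 - r2 + eps * x1**2 - eta * (x3**2 + x4**2)),
        x3 * (1 - r2 + eps * x2**2 - eta * (x1**2 + x4**2)),
        x4 * (1 - r2 + eps * x2**2 - eta * (x1**2 + x3**2))])

def heun_sde(x0, alpha, dt=0.2, n_steps=50000, rng=None):
    """Stochastic Heun scheme for dx = f(x) dt + alpha dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    out = np.empty((n_steps + 1, len(x)))
    out[0] = x
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(x))
        pred = x + f(x) * dt + alpha * dW              # Euler-Maruyama predictor
        x = x + 0.5 * (f(x) + f(pred)) * dt + alpha * dW
        out[n + 1] = x
    return out

traj = heun_sde(x0=[0.01, 1.0, 0.01, 0.01], alpha=1e-4)
near_W_u_xi2 = traj[traj[:, 0]**2 < 0.1]   # visits near W^u(xi_2)
print(near_W_u_xi2.shape)
\end{verbatim}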
Note that a noise-forced heteroclinic system need not behave as a Markov switching process on the last visited node, even in the low noise limit due to the effect of ``lift-off'' \cite{ArmStoKir03,Bak11}. Nonetheless we do expect the Markov switching process to be a reasonable model for the long-term behaviour of solutions of the SDE in the low noise case if all saddles are ``locally stable'', i.e.\ if the real parts of all expanding eigenvalues at the saddle are smaller in magnitude than the real part of the weakest contracting eigenvalue. This should be valid for the constructions in the proof of Theorem~\ref{thm:completion}, though for other choices of parameters it may no longer be the case.
\begin{figure}
\caption{Trajectories on the unstable manifold of $\xi_2=(0,\pm 1,0,0)$ for the realization (\ref{eq:KSrealise}) of the Kirk-Silber network with $\epsilon=0.02$, $\eta=0.05$ and increasing noise amplitude $\alpha$. Repeated visits of a single trajectory to the region
$x_1^2(t)<0.1$ are shown -- these apparently fill out the 2D unstable manifold of $\xi_2$, and visit arbitrarily closely to the additional node $\zeta$ on the diagonal, as shown in Figure~\ref{Fig:KScomplete}. This simulation uses a Heun integrator with timestep $0.2$. }
\label{FigKSnoise}
\end{figure}
\section{Discussion}\label{sec:discussion}
In summary, we highlight that not only is it possible to realize quite general directed graphs as heteroclinic networks, but also that these realizations can be chosen to be maximal in the sense of being almost complete and equable. In addition to the main result Theorem~\ref{thm:completion} and examples in Section~\ref{sec:example}, we present a Markov model and a sense in which almost complete and equable networks can be seen as optimal models of heteroclinic networks perturbed by noise.
While an assumption of no 1-cycles in $G$ is necessary for a robust realization of $G$ as a heteroclinic network, the lack of 2-cycles or $\Delta$-cliques assumed in Theorem~\ref{thm:completion} is presumably not necessary. Indeed, other realization methods \cite{AshPos13,AshPos16a,Fie15} give robust realizations for $G$ purely on an assumption of no 1-cycles. We conjecture there are parameter choices that give an equivalent result to that in Theorem~\ref{thm:completion} in this more general case. This suggests the following:
\begin{conj} The conclusion of Theorem~\ref{thm:completion} holds even for directed graphs $G$ that may contain $2$-cycles and $\Delta$-cliques. \label{conj:completion} \end{conj}
Explicit constructions are given by the cylinder method of \cite{AshPos13} and the two-layer network of \cite{AshPos16a}. These show the existence of networks $\Sigma$ that are equable subnetworks realizing $G$ as long as $G$ has no $1$-cycles, and these realizations can be made robust to certain symmetric perturbations. The problem remains to show that the network is almost complete. Note that the $(A_2^+,A_2^+)$- and $(B_2^+,B_2^+)$-networks are simple and may be created by the cylinder method. However, cycles or networks with more than two equilibria that are generated in this way are not simple, because all equilibria are on the same coordinate axis $L$ -- violating the condition that every connected component of $L \setminus \{0\}$ contains at most one equilibrium. Finally, the unstable manifolds for the cylinder construction are highly curved and it seems much harder to find suitable Lyapunov-type functions as used in the proof of Theorem~\ref{thm:completion}.
Finally, we remark that the construction in Theorem~\ref{thm:completion} (or a strengthened version Conjecture~\ref{conj:completion}) can presumably be strengthened in the following way: It should be possible to show that under the same (or weakened) hypotheses of Theorem~\ref{thm:completion}, an explicit realization can be chosen such that the embedding network $\Sigma'$ is clean. The main obstruction to showing this is explicitly making the separating nodes in $N'\cap Q_j$ transversely stable to $Q_j$. Although it is clear that this only requires a local change to the transverse stability at all separating nodes, it is still a challenge to explicitly give the construction.
\paragraph{Acknowledgements:}
We thank Chris Bick, Mike Field and Claire Postlethwaite for their insightful questions and comments. The second author was partly supported by CMUP (UID/MAT/00144/2013), funded by the Portuguese Government through the Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT) with national (MEC) and European structural funds through the programs FEDER, under the partnership agreement PT2020. The second and third authors benefitted from DAAD-CRUP funding through ``A\c{c}\~ao Integrada Luso-Alem\~a A10/17'', respectively DAAD-project 57338573 PPP Portugal 2017, sponsored by the Federal Ministry of Education and Research (BMBF). Partial support for a visit to Exeter is gratefully acknowledged from the Centre for Predictive Modelling in Healthcare (EPSRC grant number EP/N014391/1).
\end{document} | arXiv |
\begin{document}
\title{Primeness and Dynamics of Some Classes of Entire Functions}
\author[K. S. Charak]{Kuldeep Singh Charak} \address{ \begin{tabular}{lll} &Kuldeep Singh Charak\\ &Department of Mathematics\\ &University of Jammu\\ &Jammu-180 006\\ &India\\ \end{tabular}} \email{[email protected]}
\author[M. Kumar]{Manish Kumar} \address{ \begin{tabular}{lll} &Manish Kumar\\ &Department of Mathematics\\ &University of Jammu\\ &Jammu-180 006\\ &India\\ \end{tabular}} \email{[email protected]} \author[A. Singh]{Anil Singh} \address{ \begin{tabular}{lll} &Anil Singh\\ &Department of Mathematics\\ &University of Jammu\\ &Jammu-180 006\\ &India \end{tabular}} \email{[email protected] }
\begin{abstract}
In this paper we investigate the primeness of a class of entire functions and discuss the dynamics of a periodic member $f$ of this class with respect to a transcendental entire function $g$ that permutes with $f.$ In particular we show that the Julia sets of $f$ and $g$ are identical. \end{abstract} \renewcommand{\fnsymbol{footnote}}{\fnsymbol{footnote}} \footnotetext{2010 {\it Mathematics Subject Classification}. 30D15, 37F10 } \footnotetext{{\it Keywords and phrases}: Prime entire functions, Nevanlinna theory, Fatou and Julia sets. } \footnotetext{The work of first author is partially supported by Mathematical Research Impact Centric Support (MATRICS) grant, File No. MTR/2018/000446, by the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India. }
\maketitle
\section{Introduction}
A meromorphic function $F$ is said to be factorizable with $f$ and $g$ as left and right factors respectively, if it can be expressed as $F:=f\circ g$ where $f$ is meromorphic and $g$ is entire ($g$ may be meromorphic when $f$ is rational). $F$ is said to be prime (respectively left-prime, right-prime) if every factorization of $F$ of the above form implies that either $f$ is bilinear or $g$ is linear (respectively, that $f$ is bilinear whenever $g$ is transcendental, and that $g$ is linear whenever $f$ is transcendental). When the factors $f$ and $g$ of $F$ are restricted to entire functions, the factorization is said to be in the entire sense. Accordingly, we may say that $F$ is prime (left-prime, right-prime) in the entire sense. For factorization of meromorphic functions one may refer to Gross \cite{gross} and Chuang and Yang \cite{chuang}.
Suppose $H$ and $\alpha $ are entire functions satisfying \begin{enumerate} \item[(a)] $\alpha^{'}$ has at least one zero,\\ \item[(b)] $T(r,\alpha)=o(T(r,H)) \mbox{ as } r\rightarrow \infty $,\\ \item[(c)] $H^{'}$ and $\alpha^{'}$ have no common zeros. \end{enumerate} Now consider the class $$\mathcal{F}:= \{F_a(z)=H(z)+a\alpha (z): a\in \mathbb{C} \}.$$ $\mathcal{F}$ is a very general class containing both periodic and non-periodic prime entire functions. Urabe \cite{urabe1} proved that the periodic entire functions of the form \begin{equation} F_a(z)=H(z)+\frac{a}{2}\sin 2z, \label{eq1} \end{equation} where $$H(z)=\sin z h(\cos z)$$ in which $$ h(w)=\exp {\psi\left((2w^{2}-1)^{2}\right)} \mbox{ for some entire function } \psi,$$ are prime for most values of $a\in \mathbb{C}$. We shall denote by $\mathcal{U}$, the class of all functions of the form (\ref {eq1}).\\ Also, Qiao \cite{Qiao} proved that the periodic entire functions of the form \begin{equation} G_a(z):=\cos z\left(H(\sin z)+2a\right), \label{eq2} \end{equation} where $H$ is an odd transcendental entire function such that the order of $H(\sin z)$ is finite, are prime for most values of $a\in \mathbb{C}.$ Let us denote by $\mathcal{Q}$, the class of functions of the form (\ref {eq2}).
It is quite simple to note that the classes $\mathcal{U}$ and $\mathcal{Q}$ are subclasses of $\mathcal{F}$. Also $\mathcal{F}$ contains the class of non-periodic prime entire functions due to Singh and Charak \cite{apcharak}. Now it is natural to investigate the primeness of members of $\mathcal{F}$. The importance of this investigation lies in the fact that if $F_a\in\mathcal{F}$ happens to be prime in the entire sense and $g$ is any entire function permutable with $F_a$, then the Julia sets of $g$ and $F_a$ are identical for most values of $a$; for example see \cite{ng, noda, urabe}.
Let $f$ be an entire function. We denote by $f^n$ the $n$th iterate of $f$. By the Julia set $J(f)$ of an entire function $f$ we mean the complement $\mathbb{C}\setminus F(f)$ of the Fatou set $F(f)$ of $f$ defined as $$F(f):=\left\{z\in\mathbb{C}:\{f^n\}\mbox{ is normal in a neighborhood of }z\right\}.$$ In this paper, we shall identify further subclasses of $\mathcal{F}$ which are prime and study the dynamics of such subclasses with respect to any non-linear entire function permuting with a given member of these subclasses.
We shall adopt the following notations in our discussions throughout: \begin{enumerate} \item $\mathcal{H}(D): \mbox{ the class of all holomorphic functions on a domain } D\subset\mathbb{C}.$ \item $\mathbb{C}^{*}:= \mathbb{C}\setminus \left\{0\right\},$ the punctured plane. \item $\mathcal{E}_{T}: \mbox{ the class of all transcendental entire functions.}$ \end{enumerate}
\section{Factorization of some periodic subclasses of $\mathcal{F}$}
The subclasses $\mathcal{U}\mbox{ and }\mathcal{Q}$ of $\mathcal{F}$ consist of prime periodic entire functions. We shall prove the primeness of a more general subclass of $\mathcal{F}$ containing $\mathcal{U}.$ Also, we shall investigate some interesting properties of the functions in this general subclass as well as $\mathcal{Q}.$ Actually, the purpose of this investigation is to study the dynamics of these subclasses of $\mathcal{F}$ in the next section. \begin{lemma} \label{l1} Let $H$ and $\alpha$ be entire functions such that $H'$ and $\alpha'$ have no common zeros. Let $H_a(z):= H(z)+a\alpha (z)$, where $a\in \mathbb{C}$. Then there exists a countable set $E\subset \mathbb{C}$ such that for each $a\notin E$, each $c\in \mathbb{C}$ and any $z, w \in \{z\in \mathbb{C}: H_{a}(z)=c,\ H_{a}^{'}(z)=0\}$, we have $\alpha (z)=\alpha(w)$. \end{lemma} \begin{proof} For any critical point $z_0$ of $H_a$, we have
$$ a = -\frac{H^{'}(z_0)}{\alpha^{'}(z_0)}, $$ where $\alpha^{'}(z_0)\neq 0$ because $H^{'}$ and $\alpha^{'}$ have no common zeros. Define
\begin{flalign}
m(z):=-\frac{H^{'}(z)}{\alpha^{'}(z)}, ~~~~~ z\in \mathbb{C}.\label{eq:20} \end{flalign}
Set $A:= \mathbb{C} \setminus \{z\in \mathbb{C}: m'(z)=0 \mbox{ or } \infty\}.$ Then $A$ is an open subset of $\mathbb{C}$. Let $\left\{G_i: i\in \mathbb{N}\right\}$ be an open covering of $A$ such that $m|_{G_{i}} $ is univalent and each $D_{i}=m(G_{i})$ is an open disk. Now consider \begin{flalign}
M(z)&:=H(z)+m(z)\alpha(z)\label{eq:21}\\
x_{i}(w)&:=(m|_{G_{i}})^{-1}(w),~ w \in D_{i}~ (i=1,2,3,\cdots)\label{eq:22}\\ y_{i}(w)&:=M(x_{i}(w)),~ w \in D_{i} ~(i=1,2,3,\cdots) \label{eq:23}\\ I&:=\{(i,j)\in \mathbb{N}\times\mathbb{N}: D_{i}\cap D_{j}\neq \phi, y_{i}\not\equiv y_{j}\mbox{ on } D_{i}\cap D_{j}\}\label{eq:24}\\ S_{ij}&:= \{w\in D_{i}\cap D_{j}: y_{i}(w)=y_{j}(w)\}, ~ (i,j)\in I \label{eq:25}\\ E_{0}&:=\left(\bigcup_{i=1}^{\infty}D_{i}\right)\setminus\left(\left\{m(p): m^{'}(p)=0;p\in \mathbb{C} \right\}\cup \left(\cup_{(i,j)\in I}S_{ij}\right)\right).\label{eq:26} \end{flalign} \\ Then $E=\mathbb{C}\setminus E_{0}$ is an at most countable subset of $\mathbb{C}$.\\ Using $\left(\ref{eq:20}\right)$ in $\left(\ref{eq:21}\right)$, we get
\begin{flalign} M^{'}(z)=H^{'}(z)+m^{'}(z)\alpha(z)+m(z)\alpha^{'}(z)=m^{'}(z)\alpha(z).\label{eq:27} \end{flalign} By $\left(\ref{eq:23}\right)$ with the help of $\left(\ref{eq:21} \right)$ and $\left(\ref{eq:22}\right)$, we obtain \begin{flalign}
y_{i}(w)&=H(x_i(w))+m(x_i(w))\alpha(x_i(w)) \nonumber\\
&=H(x_i(w))+w\alpha(x_i(w))\label{eq:28} \end{flalign} Also by $\left(\ref{eq:23}\right)$ together with $\left(\ref{eq:22}\right)$ and $ \left(\ref{eq:27}\right)$, we have \begin{flalign} \label{eq:29} y_{i}^{'}(w) &= M^{'}(x_i(w))\cdot x_{i}^{'}(w) \nonumber\\
&=m^{'}(x_{i}(w))\cdot \alpha(x_i(w))\cdot x_{i}^{'}(w) \nonumber\\
&= \alpha(x_{i}(w)). \end{flalign} For $a\in E_0$, we have $\{z\in \mathbb{C}: H'_{a}(z)=0\}=\bigcup_{i=1}^{\infty}\left\{x_{i}(a)\right\}$. This can be verified as follows. By assumption, $H^{'}$ and $\alpha^{'}$ have no common zeros. Since by (\ref{eq:22}) $m(x_{i}(a))=a$ whenever $a\in D_{i}$, we have $\alpha^{'}(x_{i}(a))\neq0$ and $H^{'}(x_i(a))+a\alpha^{'}(x_i(a))=0$, hence $H^{'}_{a}(x_i(a))=0.$ Conversely, let $H^{'}_a(z_0)=0$. Then $H^{'}(z_0)+a\alpha^{'}(z_0)=0$, which implies that $m(z_0)=a$ (note that $\alpha^{'}(z_0)\neq 0$, since otherwise $H^{'}(z_0)=0$ as well). Since $a\in E_0$, $m^{'}(z_0)\neq 0$ and so $z_0=x_i(a)$ for some $i$ with $a\in D_{i}$.
Moreover, if $y_{i}(a)=y_{j}(a)$ for some $a\in E_0$, then by (\ref{eq:24}), (\ref{eq:25}) and (\ref{eq:26}), it follows that $y^{'}_{i}(a)=y^{'}_{j}(a)$. From (\ref{eq:29}), we have
\begin{flalign} \label{eq:30} \alpha(x_{i}(a))=\alpha(x_{j}(a)). \end{flalign} Hence there exists a countable set $E$ in $\mathbb{C}$ such that, for each $a\notin E$ and each $c\in \mathbb{C}$, any $z, w \in \{z\in \mathbb{C}: H_{a}(z)=c, H_{a}^{'}(z)=0\}$ satisfy $\alpha (z)=\alpha(w)$. \end{proof}
Now consider $H_a(z):=\cos z\cdot h(\sin{z})+a\sin{z}$, where $h\in \mathcal{E}_{T}$, in Lemma \ref{l1} (with $H(z)=\cos z\cdot h(\sin z)$ and $\alpha(z)=\sin z$), and redefine $E_0$ in (\ref{eq:26}) as follows: \begin{flalign} \label{eq:31} E_{0}=\left(\bigcup_{i=1}^{\infty}D_{i}\right)\setminus\left(\left\{m(p): m^{'}(p)=0;p\in \mathbb{C} \right\}\cup \left(\cup_{(i,j)\in I}S_{ij}\right)\cup\left(\cup_{i=1}^{\infty}\left\{p: h(\sin (x_i(p)))=0\right\}\right)\right). \end{flalign} By (\ref{eq:30}), we have $$\sin x_i(a)=\sin x_j(a), \mbox{ and hence } \cos x_i(a)h(\sin x_i(a))=\cos x_j(a)h(\sin x_j(a)),$$ the latter equality holding because $H_a(x_i(a))=H_a(x_j(a))$. Again by (\ref{eq:31}), we obtain $$\cos x_i(a)=\cos x_j(a),$$ and hence we obtain:
\begin{lemma} \label{l2} Let $h\in \mathcal{E}_{T}$ be such that $h(\pm 1)\neq 0.$ Put $H_a(z):=\cos z\cdot h(\sin{z})+a\sin{z}$, where $a\in \mathbb{C}$. Then there exists a countable set $E$ of complex numbers such that, for each $a\notin E$ and any constant $c\in \mathbb{C}$, any two roots $u$ and $v$ of the simultaneous equations $$H_a(z)=c,\ H^{'}_{a}(z)=0$$ satisfy $\cos u=\cos v$ and $\sin u=\sin v$. \end{lemma}
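We record, for the reader's convenience, an elementary consequence of Lemma \ref{l2} that will be used repeatedly below: if $\cos u=\cos v$ and $\sin u=\sin v$, then $$e^{iu}=\cos u+i\sin u=\cos v+i\sin v=e^{iv},$$ so that $u-v\in 2\pi\mathbb{Z}$; in particular, any two such roots differ by an integer multiple of $2\pi$ and therefore lie on a line parallel to the real axis.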
Let $f(z)$ be a periodic entire function of period $\lambda \neq 0$. For $z\in\mathbb{C}$, we shall denote by $[z]$ the set $\{z+n\lambda:n\in\mathbb{Z}\}$, and for $A\subset \mathbb{C}$, by $[A]$ the set $\{[z]:z\in A\}.$
\begin{theorem} \label{t1} Let $H\in \mathcal{E}_{T}$ such that $H(-z)=-H(z), H^{'}(0)\neq 0,$ and the order of $H(\sin z)$ is finite. Put $H_a(z):=\cos z(H(\sin z)+2a)$, where $a\in \mathbb{C}$. Then there exists a countable set $E\subset \mathbb{C}$ such that $H_a$ satisfies the following properties for each $a\notin E$. \begin{itemize} \item[(i)] $H_a$ is prime. \item[(ii)] $\#\left\{[z]: H^{'}_a(z)=0\right\}=\infty$ \item[(iii)] for any $z_1, z_2 \in \left\{z\in \mathbb{C}: H_a(z)=c, H^{'}_a(z)=0 \right\},$ $\cos z_1=\cos z_2$ for any $c\in \mathbb{C}$. \item[(iv)] $H^{'}_a$ has only simple zeros. \end{itemize} \end{theorem}
\begin{proof} $(i)$ and $(iii)$ follow from [\cite{Qiao}, Theorem 1] and Lemma \ref{l1} respectively.
Define $$k(w)=\frac{w^2+1}{2w}H\left(\frac{w^{2}-1}{2iw}\right).$$ Then $k\in \mathcal{H}(\mathbb{C}^{*})$ with essential singularities at $0$ and $\infty$, and $$H_a(z)=\left(k(w)+a\left(\frac{w^2+1}{w}\right)\right)\circ e^{iz}.$$ Put $$H_{a}(z)=k_{a}(e^{iz}),$$ where $k_a(w)=k(w)+a(\frac{w^2+1}{w})$. Since $H^{'}_{a}(z)=k^{'}_a(e^{iz})\cdot i e^{iz}$, we have \begin{equation}\label{eq:1a} H^{'}_a(z)=0 \Leftrightarrow k^{'}_a(e^{iz})=0. \end{equation}
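For the reader's convenience, we recall the elementary identities behind this substitution: with $w=e^{iz}$, $$\cos z=\frac{e^{iz}+e^{-iz}}{2}=\frac{w^{2}+1}{2w},\qquad \sin z=\frac{e^{iz}-e^{-iz}}{2i}=\frac{w^{2}-1}{2iw},$$ so that indeed $\cos z\left(H(\sin z)+2a\right)=k(w)+a\,\frac{w^{2}+1}{w}=k_a(w)$ with $w=e^{iz}$.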
Thus $k^{'}_a(w)=0$ if and only if $$\frac{w^{2}k^{'}(w)}{w^{2}-1}=-a.$$ By Picard's big theorem, we have \begin{equation}\label{eq:2a} \#\left\{w\in \mathbb{C}^{*}: k^{'}_{a}(w)=0\right\}=\infty \end{equation}
for every $a\in \mathbb{C}$ with at most two exceptions.
By (\ref{eq:1a}) and (\ref{eq:2a}), we get
$$\#\left\{[z]: H^{'}_a(z)=0\right\}=\infty,$$ for every $a\in \mathbb{C}$ with at most two exceptions. This proves $(ii).$
To establish $(iv)$, it is enough to prove that $k^{'}_a$ has only simple zeros (see (\ref{eq:1a})). Suppose that $k_{a}^{'}(w_0)=0$ and $ k_{a}^{''}(w_0)=0$ for some $w_0\in \mathbb{C}^{*}$. Then
$$w_0(w^{2}_0-1)k^{''}(w_0)-2k^{'}(w_0)=0,~~~ a=\frac{-w^{2}_0 k^{'}(w_0)}{w^{2}_0-1};$$ these relations follow from $k_{a}^{'}(w)=k^{'}(w)+a\,\frac{w^{2}-1}{w^{2}}$ and $k_{a}^{''}(w)=k^{''}(w)+\frac{2a}{w^{3}}$ by eliminating $a$.\\ {\it Claim}: $w(w^2-1)k^{''}(w)-2k^{'}(w)\not\equiv 0$ on $\mathbb{C}^{*}.$\\ Suppose that $w(w^2-1)k^{''}(w)-2k^{'}(w)\equiv 0$ on $\mathbb{C}^{*}$. Then $$k^{''}(w)=\frac{2}{w(w^2-1)}k^{'}(w),~~~w\in \mathbb{C}^{*}.$$
Since $k^{'}(\pm 1)\neq 0$, the right-hand side has poles at $w=\pm 1$, whereas $k^{''}$ is holomorphic on $\mathbb{C}^{*}$; this is a contradiction and establishes the claim.
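The hypothesis $H^{'}(0)\neq 0$ enters at this point, through the fact that $k^{'}(\pm 1)\neq 0$; we sketch the verification. Differentiating the identity $k(e^{iz})=\cos z\, H(\sin z)$ gives $$i e^{iz}\, k^{'}(e^{iz}) = -\sin z\, H(\sin z)+\cos^{2} z\, H^{'}(\sin z),$$ and evaluating at $z=0$ and at $z=\pi$ (so that $e^{iz}=1$ and $e^{iz}=-1$, respectively) yields $k^{'}(1)=-iH^{'}(0)\neq 0$ and $k^{'}(-1)=iH^{'}(0)\neq 0$.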
Set $$E=\left\{a=\frac{-w^2k^{'}(w)}{w^2-1}: k^{''}_a(w)=0, w\neq 0\right\}.$$ If $a\notin E$, then $\left\{w\in \mathbb{C}^{*}:k_{a}^{'}(w)=0, k_{a}^{''}(w)=0 \right\}=\phi.$ Therefore, $k^{'}_{a}$ has only simple zeros for $a\notin E$. This proves $(iv).$
Adjoining the exceptional values obtained in $(i), \ (ii)$ and $(iii)$ to $E$, the theorem holds for each $a\notin E$. \end{proof}
\begin{theorem} \label{t3}Let $h\in \mathcal{E}_{T}$ such that $h(\pm 1)\neq 0.$ Put $H_a(z):=\cos{z}\cdot h(\sin{z})+a\sin{z}$, where $a\in \mathbb{C}$. Then the set $\{a\in \mathbb{C}: H_{a} \mbox{ is not prime in entire sense }\}$ is at most countable. \end{theorem}
\begin{remark} Theorem \ref{t3} generalizes Theorem 1 of Urabe \cite{urabe}. \end{remark} \textbf{Proof of Theorem \ref{t3}}. Define $$h_a(w):=\left(\frac{w^2+1}{2w}\right)h\left(\frac{w^2-1}{2iw}\right)+a\left(\frac{w^2-1}{2iw}\right).$$ Then $h_a\in \mathcal{H}(\mathbb{C}^{*})$ with essential singularities at $0$ and $\infty$, and $H_a(z)=h_{a}(e^{iz})$. We can choose a countable subset $E$ of the complex plane for which the assertion of Lemma \ref{l2} holds with respect to $H_{a}$. We may assume $0\in E$.
By the second fundamental theorem of Nevanlinna [\cite{Hayman-1}, Theorem 2.3, p.43], we can choose $t\in (0,1)$ such that the inequalities \begin{equation}\label{eq:1} \overline{N}(r,0,H^{'}_{a})\geq tm(r,h^{'}_a(e^{iz})) \end{equation}
and \begin{equation}\label{eq:2} \overline{N}(r,c,H_{a})\geq tm(r,H_a(z)) \end{equation} hold on a set of $r$ of infinite measure for any $c\in\mathbb{C}$.
Suppose that $H_{a}(z)=f(g(z))$ for some entire functions $f$ and $g$. We consider the following cases one by one:
{\it Case (i):} When $f,\ g \in \mathcal{E}_{T}$.\\
Since $H_{a}^{'}(z)=f^{'}(g(z))g^{'}(z)$, by (\ref{eq:1}) we find that $f^{'}$ has infinitely many zeros $\{t_{k}\}$, say. Since every solution of $g(z)=t_k$ is also a solution of the simultaneous equations
$$H_{a}(z)=f(t_k),\ H_{a}^{'}(z)=0,$$
by Lemma \ref{l2} it follows that all the roots of $g(z)=t_{k}$ lie on a single straight line $l_k$. The set $\left\{l_{k}:k\in \mathbb{N}\right\}$ is infinite, for otherwise, by Edrei's theorem \cite{edrei}, $g$ would reduce to a polynomial (of degree 2), which is impossible since $g\in \mathcal{E}_{T}$. Therefore, by Theorem 3 of Kobayashi \cite{kobayashi}, we have
\begin{equation}\label{eq:4}
g(z)=P(e^{Az}), \end{equation} where $P$ is a quadratic polynomial and $A$ is a non-zero constant. Let $\left\{z_{k,j}\right\}_{j=1}^{\infty}$ be the roots of $g(z)=t_k$. Then $\left\{z_{k,j}\right\}_{j=1}^{\infty}$ are also the common roots of the simultaneous equations $$H_{a}(z)=f(t_{k}), \ H^{'}_{a}(z)=0.$$ By Lemma \ref{l2}, we have $$\cos z_{k,i}=\cos z_{k,j} \mbox{ and } \sin z_{k,i}=\sin z_{k,j}.$$ This implies that $$z_{k,i}=z_{k,j}+2\pi m_0,\mbox{ for some }m_0\in \mathbb{Z}.$$ By (\ref{eq:4}), $g$ is a periodic function with period $2\pi i/lA$, where $l=1 $ or $2$. Thus, we have $A=i/N$ for the integer $N=lm_0 $, and $$H_a(z)=h_a(e^{iz})=f(P(e^{iz/N})).$$ Put $w=e^{iz/N}$. Then \begin{equation}\label{eq:E1} h_a(w^{N})=f(P(w))\mbox{ for all } w\neq 0. \end{equation} Note that the left hand side of (\ref{eq:E1}) has an essential singularity at $0$ but the right hand side is holomorphic at $0.$ This contradiction shows that {\it Case(i)} can't occur.
{\it Case (ii):} When $f\in \mathcal{E}_{T}$ and $g$ is a polynomial of degree at least two.\\
By (\ref{eq:1}), $f^{'}$ has infinitely many zeros $\left\{t_k\right\}$, say. Let $p_k$ and $q_k$ be two roots of $g(z)=t_k$. Then $p_k$ and $q_k$ are also common roots of the simultaneous equations \begin{equation}\label{eq:3} H_a(z)=f(t_k), \ H^{'}_a(z)=0. \end{equation}
By Lemma \ref{l2}, it follows that \begin{equation}\label{eq:41} p_k-q_k=2m_k\pi, \end{equation}
for some integer $m_k$. Further, by Renyi's theorem \cite{renyi}, $g(z)=bz^2+cz+d$ for some $b\neq 0, c, d\in \mathbb{C}$ and hence \begin{equation}\label{eq:42} p_k+q_k=-\frac{c}{b}. \end{equation} Now by (\ref{eq:41}) and (\ref{eq:42}), it follows that all $p_k$ and $q_k$ lie on a single straight line (independent of $t_k$, $k\in \mathbb{N}$).
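For clarity, we record the elementary computation behind the last assertion: solving (\ref{eq:41}) and (\ref{eq:42}) for $p_k$ and $q_k$ gives $$p_k=-\frac{c}{2b}+m_k\pi,\qquad q_k=-\frac{c}{2b}-m_k\pi,$$ so that every $p_k$ and every $q_k$ lies on the line through $-c/(2b)$ parallel to the real axis.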
Since $H_{a}^{'}$ is periodic, we have $$N(r,0,H^{'}_{a})\leq N(r,0,f^{'}(g))+N(r,0,g^{'})$$ $$=N(r,0,f^{'}(g))+O(\log r)=o(m(r,h^{'}_{a}(e^{iz}))).$$ This contradicts (\ref{eq:1}), showing that {\it Case (ii)} is not possible.
{\it Case (iii):} When $f$ is a polynomial of degree $d$ $(\geq 2)$ and $g\in \mathcal{E}_{T}$.\\ By Renyi's Theorem \cite{renyi}, $g$ is periodic and therefore we can express $g$ as $$g(z)=G(e^{Bz}),$$ where $G\in \mathcal{H}(\mathbb{C}^{*})$ has an essential singularity at $0$ or $\infty$ and $B$ is a non-zero constant. Let $w_0$ be a zero of $f^{'}$. Then $G(z)=w_0$ has at most finitely many roots, and so by Picard's big theorem it follows that $f^{'}$ has exactly one zero, say $w_0.$ Thus we can express $f^{\prime}$ as
$$f^{'}(w)=b(w-w_0)^{d-1} \mbox{ and hence } f(w)=\alpha (w-w_0)^{d}+c$$ for some constants $\alpha (\neq 0)$ and $c$. Therefore, $$H_a(z)=\alpha \(g(z)-w_0\)^d+c.$$ Hence $N(r,c,H_a)=dN(r,w_0,g).$ Since $G(w)=w_0$ has at most finitely many roots, $N(r,w_0,g)=o(m(r,H_a(z)))$ which is contrary to (\ref{eq:2}) showing that {\it Case(iii)} also fails to occur.
Hence $H_a(z)$ is prime in entire sense. $\qed$
Using Theorem \ref{t3} and Lemma \ref{l2} and following the proofs of $(ii)$ and $(iv)$ in Theorem \ref{t1}, we obtain:
\begin{theorem} \label{t4} Let $h\in \mathcal{E}_{T}$ such that $h(\pm 1)\neq 0$. Put $H_a(z):=\cos{z} \cdot h(\sin{z})+a\sin{z}$, where $a\in \mathbb{C}$. Then there exists a countable set $E\subset \mathbb{C}$ such that $H_a$ possesses the following properties for each $a\notin E$: \begin{itemize} \item[(i)] $H_a$ is prime in entire sense; \item[(ii)] $\#\left\{[z]: H^{'}_a(z)=0\right\}=\infty$; \item[(iii)] for any $z_1, z_2 \in \left\{z\in \mathbb{C}: H_a(z)=c, H^{'}_a(z)=0 \right\}$, $~~\cos z_1=\cos z_2 \mbox{ and }~~\sin z_1=\sin z_2$ for any $c\in \mathbb{C}$; and \item[(iv)] $H^{'}_a$ has only simple zeros. \end{itemize} \end{theorem}
\section{Dynamics of non-linear entire functions permutable with members of subclasses of $\mathcal{F}$} The main result in this section is obtained by closely following the argument due to Y. Noda \cite{noda}, with certain modifications.
For $f, g \in \mathcal{H}(D)$, let $D_0:=D\setminus \left\{z: f^{'}(z)g^{'}(z)=0 \right\}$ and define a relation on $D_0$ with respect to $f$ and $g$, denoted by $\sim_{\left(f,g\right)}$, as follows:
Let $z, w \in D_0$. We write $z \sim_{\left(f,g\right)} w$ if and only if $f(z)=f(w)$, $g(z)=g(w)$ and there are neighborhoods $U_z$ and $U_w$ of $z$ and $w$, respectively, such that $f(U_z)=f(U_w)$, $g(U_z)=g(U_w)$ and $\left(f_{|U_w}\right)^{-1}\circ f_{|U_z}=\left(g_{|U_w}\right)^{-1}\circ g_{|U_z}$ in $U_z$. Then $\sim_{\left(f,g\right)} $ is an equivalence relation on $D_0.$
\indent
Y. Noda [\cite{noda}, Lemma 2.1] proved that for $f, \ g \in \mathcal{H}(D)$ and $z_0 \in D$, there exist a neighborhood $N_{z_0}$ of $z_0$, a function $h\in \mathcal{H}(N_{z_0})$ and $\phi, \ \psi \in \mathcal{H}(h(N_{z_0}))$ satisfying \begin{enumerate}
\item $f^{'}(z)\neq 0, g^{'}(z)\neq 0, h^{'}(z)\neq 0$ for $z\in N_{z_0}\setminus \{z_0\};$
\item $z\sim_{\left(f,g\right)}w$ if and only if $h(z)=h(w)$ for $z,w \in N_{z_0}\setminus \{z_0\};$ and
\item $f=\phi \circ h, g=\psi \circ h.$ \end{enumerate} This information led Y. Noda \cite{noda} to extend the above equivalence relation to $D$ as follows: \indent Let $z, w\in D$. We write $z\sim_{\left(f,g\right)}w$ if and only if $f(z)=f(w)$, $g(z)=g(w)$ and there exists a conformal map $\phi$ defined in a neighborhood of $h_1(z)$ such that $\phi(h_1(z))=h_2(w)$, $\phi_1=\phi_2\circ\phi$ and $\psi_1=\psi_2\circ\phi$, where $h_j, \phi_j, \psi_j$ $(j=1,2)$ satisfy the conclusions of Lemma $2.1$ of Noda \cite{noda} (mentioned in the preceding discussion) at $z$ and $w$, respectively.
Using this equivalence relation, Y. Noda \cite{noda} proved the existence of the greatest common right factor of entire functions: \begin{lemma}[\cite{noda}, p.5]\label{rightfactor} Let $f, \ g \in \mathcal{H}(\mathbb{C}).$ Then there exist $F \in \mathcal{H}(\mathbb{C})$ and $\phi, \ \psi \in \mathcal{H}(F(\mathbb{C}))$ such that $f=\phi \circ F$ and $g=\psi \circ F.$ Moreover, $F(z)=F(w)$ if and only if $z\sim_{\left(f,g\right)} w.$ \end{lemma} The entire function $F$ in Lemma \ref{rightfactor} is called the {\it greatest common right factor} of $f$ and $g$ (for a more general definition one may refer to [\cite{noda}, p.2]).
We also require the following key lemmas for proving our result in this section:
\begin{lemma}[\cite{baker}, Satz 6] \label{baker0} Let $f\in \mathcal{E}_{T} $ such that $f$ permutes with a polynomial $g$. Then $g(z)=\omega z+\beta$ $\left(\omega=\exp{2\pi ik / p}, \ k,p\in\mathbb{N}, (k,p)=1\right)$. Further, if $\omega\neq 1$, then $f(z)=c+(z-c)F_0\left((z-c)^p\right),$ where $c=\beta/{(1-\omega)}$ and $F_0$ is an entire function. \end{lemma} \begin{lemma}[\cite{baker}, Satz 7]\label{baker} Let $f$ and $g$ be permutable entire functions. Then there exist a positive integer $n$ and $R_0>0$ such that $M\left(r,g\right)<M\left(r,f^n\right)$ for all $r>R_0$. \end{lemma} \begin{lemma}[\cite{clunie},Theorem 1]\label{clunie1} Let $f,\ g\in \mathcal{E}_{T}$. Then $$\limsup_{r\to\infty}\frac{\log{M\left(r,f\circ g\right)}}{\log{M\left(r,g\right)}}=\infty .$$ \end{lemma} \begin{lemma}[\cite{noda}, Lemma 2.5]\label{noda} Suppose that $f, \ g\in \mathcal{H}(\mathbb{C})$ are permutable and $\left(F,S\right)$ be a greatest common right factor of $f$ and $g$. Let there be a subset $A\subset\mathbb{C}$ such that $\#f(A)=1$ and $\#g\left(A\right)=1$ and $r$ be the order of $f$ at some point of $g\(A\)$. Then there exists a subset $A'\subset A$ such that $\#F(A')=1$ and $\#A'\geq \# A/{r}$. \end{lemma} \begin{lemma}[\cite{noda}, Lemma 3.1]\label{urabe1} Let $f\in \mathcal{E}_{T}$ and $A$ be a discrete subset of $\mathbb{C}$ such that $\#f^{-1}\left(A\right)=\infty$. Then $\sup_{w\in A}\#\left(f^{-1}\left(\{w\}\right)\cap A^c\right)=\infty$. \end{lemma} \begin{lemma}[\cite{noda},Lemma 5.3]\label{noda0} Let $f$ be a periodic entire function and $A$ be a discrete subset of $\mathbb{C}$ such that $\#\left[f^{-1}\left(A\right)\right]=\infty$. Then $\sup_{w\in A}\#\left[f^{-1}\left(\{w\}\right)\cap A^c\right]=\infty$. \end{lemma} \begin{lemma} [\cite{noda}, Lemma 5.4] \label{l3} Let $h\in \mathcal{H}(\mathbb{C}^{*})$ satisfying that $\#\left\{w:h^{'}(w)=0\right\}=\infty$. Put $f(z)=h(e^{z})$. Let $g\in \mathcal{E}_{T}$ such that $g$ permutes with $f$. Then for each $N\in \mathbb{N}$, there exists $c$ such that $g^{'}(c)=0, \#\left\{[z]: f(z)=c, f^{'}(g(z))=0\right\}\geq N.$ \end{lemma} \begin{lemma}[\cite{yang}, Lemma 2.1]\label{baker1} Suppose that $f,\ g\in \mathcal{E}_{T}$ such that $g(z)=af(z)+b$, where $a,b\in\mathbb{C}$. If $g$ permutes with $f$, then $J(f)=J(g)$. \end{lemma}
Lemma 5.1 of \cite{noda} can be extended in a straightforward way to: \begin{lemma}\label{newperiod} Suppose that $f$ is a periodic entire function with a period $\lambda\neq 0$, that $g$ is an entire function permutable with $f$, and that $\left(F,S\right)$ is a greatest common right factor of $f$ and $\exp {(2\pi i/ \lambda) g}$. Let there be a subset $A\subset\mathbb{C}$ such that $\#f(A)=1$ and $\#\exp {(2\pi i/\lambda )g(A)}=1$, and let $r$ be the order of $f$ at some point of $g\(A\)$. Then there exists a subset $A'\subset A$ such that $\#F(A')=1$ and $\#A'\geq \# A/{r}$. \end{lemma}
Now we state and prove the main result of this section: \begin{theorem} \label{t2} Let $f$ be a periodic entire function satisfying the following properties: \begin{itemize} \item[(i)] $f$ is prime in entire sense; \item[(ii)] $\#\left\{[z]: f^{'}(z)=0\right\}=\infty ;$ \item[(iii)] the set $\left\{z\in \mathbb{C}: f(z)=c, f^{'}(z)=0 \right\}$ is distributed over a finite number of distinct straight lines for any $c\in \mathbb{C}$; and \item[(iv)] the multiplicities of zeros of $f^{'}$ are uniformly bounded. \end{itemize} If $g$ is any non-linear entire function permutable with $f$, then $J(g)=J(f)$. \end{theorem}
\begin{proof} Suppose that $\lambda\neq 0$ is the period of $f$. Since, by Lemma \ref{baker0}, a polynomial which permutes with $f$ is necessarily linear, we may assume that $g\in \mathcal{E}_{T}$ permutes with $f$. By $(ii)$ and Lemma \ref{l3}, for each positive integer $N$ there exists $c$ such that $$g^{'}(c)=0\mbox{ and }\# \left\{[z]: f(z)=c, f^{'}(g(z))=0\right\}\geq N.$$ Let $\mathcal{A}$ be a subset of $\mathbb{C}$ such that $[z]\neq [w]$ for $z,w\in \mathcal{A}, z\neq w$ and $$[\mathcal{A}]=\left\{[z]: f(z)=c, f^{'}(g(z))=0\right\}.$$ Since $f\circ g(\mathcal{A})=g\circ f(\mathcal{A})=\left\{g(c)\right\},$ we have \begin{flalign} \label{eq:a1} g(\mathcal{A})\subset \left\{z:f(z)=g(c), f^{'}(z)=0 \right\}. \end{flalign}
As in $(iii)$, let us assume that the solutions of the simultaneous equations $f(z)=g(c),\ f^{'}(z)=0$ are distributed over $t$ straight lines. Then there exists a subset $\mathcal{B}\subset \mathcal{A}$ such that $g(\mathcal{B})$ lies on a single straight line (which is parallel to the line passing through the origin and $\lambda$) with $\#\mathcal{B}\geq N/t$.
{\it Claim 1:} $[g(\mathcal{B})]$ is finite.\\
Suppose that $[g(\mathcal{B})]$ is infinite. Let $X$ be a set of distinct points such that $[X]=[g(\mathcal{B})]$ and $[z]\neq[w]$ for $z,w\in X$, $z\neq w.$ We can choose the set $X$ so that all points of $X$ lie on a single line segment. Since $X$ is infinite, $X$ has an accumulation point, which contradicts (\ref{eq:a1}). This establishes the claim.
Let $z_1, \cdots, z_p \in \mathbb{C}$ $(p\geq 1)$ be such that $$g(\mathcal{B})\subset \bigcup_{i=1}^{p}\left\{z_i+n\lambda: n\in \mathbb{Z}\right\}.$$ Therefore, there exist a subset $\mathcal{C}\subset \mathcal{B} $ and some $i\in \left\{1,\cdots,p\right\}$ such that $$g(\mathcal{C})\subset \left\{z_i+n\lambda: n\in \mathbb{Z}\right\}\mbox{ and }\#\mathcal{C}\geq N/(pt).$$ This implies that $\#\left(\exp{\left(2\pi i/\lambda\right) g(\mathcal{C})}\right)=1$.
On the other hand, $f(\mathcal{C})=\left\{c\right\}$ and thus $\#f(\mathcal{C})=1$. By Lemma \ref{rightfactor}, Lemma \ref{newperiod} and $(iv),$ there exist $F\in \mathcal{H}(\mathbb{C}), \ \phi, \psi \in \mathcal{H}(F(\mathbb{C}))$ and a subset $\mathcal{D}\subset \mathcal{C}$ such that $f=\phi \circ F, \ \exp{\left(2\pi i/\lambda\right) g}=\psi \circ F$, $\#F(\mathcal{D})=1$, $\# \mathcal{D}\geq N/\left((s+1)pt\right)$, where $s$ denotes the maximum multiplicity of the zeros of $f^{'}$. Since we can choose $N$ arbitrarily large, $F\in \mathcal{E}_{T}$.
We claim that $F(\mathbb{C})=\mathbb{C}.$ For this, it is enough to show that $F$ has no exceptional value on $\mathbb{C}$. Suppose on the contrary that there exists $c\in \mathbb{C}$ such that $F=c+e^{Q}$ for some entire function $Q$. Then $f(z)=\phi\left(c+e^{Q(z)}\right)$, so that $f$ is the composition of the entire function $w\mapsto\phi(c+e^{w})$ with $Q$. Since $f$ is prime in entire sense, $Q(z)=a_1 z+a_2$ $(a_1\neq 0).$ Thus $f$ has a period $2\pi i/a_1.$ Since $\lambda$ is the fundamental period of $f$, $2\pi i/a_1= \lambda p$ for some integer $p$. Therefore, $a_1= 2\pi i/{(\lambda p)}$ and $F(z)=c+\exp{\left((2\pi i/{(\lambda p)})z+a_2\right)}.$ Since $\#F(\mathcal{D})=1$, we have $(z-w)\in \lambda\mathbb{Z}$ for all $z,w \in \mathcal{D}$. Thus $[z]=[w]$ for all $z,w \in \mathcal{D}$, a contradiction, and therefore, $F(\mathbb{C})=\mathbb{C}.$ Hence, $\phi, \ \psi \in \mathcal{H}(\mathbb{C}).$
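We note, for the reader's convenience, the computation behind the step ``$\#F(\mathcal{D})=1$ implies $z-w\in\lambda\mathbb{Z}$'' above: with $F(z)=c+\exp{\left((2\pi i/{(\lambda p)})z+a_2\right)}$, $$F(z)=F(w)\ \Longleftrightarrow\ \exp\left(\frac{2\pi i}{\lambda p}(z-w)\right)=1\ \Longleftrightarrow\ z-w\in \lambda p\,\mathbb{Z}\subset\lambda\mathbb{Z}.$$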
Since $\exp{\left(2\pi i/\lambda\right) g}=\psi \circ F$, we have $\psi(z)\neq 0$ for $z\in \mathbb{C}.$ Therefore, $\psi=\exp{G}$ with $G\in \mathcal{H}(\mathbb{C})$ and so $\exp{\left(2\pi i/\lambda\right) g}=\exp G\circ F.$ Hence $$g=\left(\lambda/2\pi i\right)G\circ F+q\lambda$$
for some $q \in \mathbb{Z}.$
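The integer $q$ arises as follows: from $\exp{\left(2\pi i/\lambda\right) g}=\exp G\circ F$ we get that $$\frac{2\pi i}{\lambda}\,g-G\circ F$$ is a continuous function on $\mathbb{C}$ taking values in $2\pi i\,\mathbb{Z}$; hence it is a constant $2\pi i q$ with $q\in \mathbb{Z}$, and solving for $g$ gives the displayed formula.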
Put $K=\left(\lambda/2\pi i\right)G+q\lambda$. Then $g=K\circ F$.
Since $F$ is transcendental and $f$ is prime in entire sense, we see that $\phi$ is linear. Therefore, $g=K\circ \phi ^{-1}\circ f$. Put $g_1=K\circ \phi ^{-1}$. Then $g=g_1\circ f.$ Note that $f\circ g_1=g_1\circ f$. If $g_1$ is transcendental, then by the same argument there exists $g_2\in \mathcal{H}(\mathbb{C})$ such that $g=g_1\circ f=g_2\circ f^{2}$. Similarly we have $g=g_n\circ f^{n}$ $(n=1,2,\cdots)$, whenever all $g_n$ $(n=1,2,\cdots)$ are transcendental. This gives a contradiction by Lemma \ref{baker} and Lemma \ref{clunie1}. Thus, $g_n$ is a polynomial for some $n$. By Lemma \ref{baker0}, we find that $g_n(z)=lz+m$ $(l\neq 0)$ and so $g=lf^{n}+m$. Hence, by Lemma \ref{baker1}, $J(g)=J(f)$. \end{proof}
\begin{remark} \label{r1} Condition $(iii)$ in Theorem \ref{t1} and in Theorem \ref{t4} implies that the set $\left\{z\in \mathbb{C}: H_a(z)=c, H^{'}_a(z)=0 \right\}$ is distributed over at most two distinct straight lines and over a single straight line, respectively.
\end{remark}
\begin{remark} Let $f$ be a periodic entire function satisfying the conclusion of Theorem \ref{t1} (or Theorem \ref{t4}). Let $g\in \mathcal{E}_{T}$ such that $g$ permutes with $f$. Then by Theorem \ref{t2} and Remark \ref{r1}, $J(f)=J(g).$ \end{remark}
\section{A non-periodic subclass of $\mathcal{F}$} Let $H(z)=P(z)\cdot F(\alpha(z)),$ where $P(z)$ is a polynomial of degree $n$, and $F, \alpha \in \mathcal{H}(\mathbb{C})$ are such that $F_{a}(z):=H(z)-a \alpha (z)\in\mathcal{F}$ for any $a\in \mathbb{C}$.
Along lines similar to Lemma \ref{l2}, we get: \begin{lemma}\label{KSC} Let $H(z)=P(z)\cdot F(\alpha(z)),$ where $P(z)$ is a polynomial of degree $n$, and $F, \alpha \in \mathcal{H}(\mathbb{C}).$ Put $F_{a}(z)=H(z)-a\alpha(z)$, where $a\in \mathbb{C}$. Suppose that $H^{'}$ and $\alpha^{'}$ have no common zeros. Then there exists a countable set $E\subset\mathbb{C}$ such that $\# \{z\in \mathbb{C}:F_{a}(z)=c, F_{a}^{'}(z)=0\}\leq n$ for all $c \in \mathbb{C} $, provided that $a \notin E.$ \end{lemma}
\begin{lemma}\label{Noda}\cite{ozawa1} Let $F\in \mathcal{E}_{T}$ satisfy $N\(r,0,F^{'}\)> k T\( r,F^{'}\)$ on a set of $r$ of infinite linear measure for some $k>0$. Assume that the simultaneous equations $F(z)=c, F^{'}(z)=0$ have only finitely many solutions for any constant $c$. Then $F$ is left-prime in the entire sense. \end{lemma} \begin{theorem}\label{npc} Let $F_a(z)= H(z)-a\cdot \alpha (z)\in\mathcal{F}$, where $a\in \mathbb{C}, \ H(z)=P(z)\cdot F(\alpha(z))$ and $P(z)$ is a polynomial of degree $n$. Then there exists a countable set $E \subset \mathbb{C}$ such that $F_{a}$ satisfies the following properties for each $a\notin E$.\\
$(a)$ $F_{a}$ is left-prime in entire sense; moreover, $F_a$ is prime if $P(z)$ is a polynomial of degree 1.\\
$(b)$ $\#\{z\in \mathbb{C}: F_{a}(z)=c, F^{'}_{a}(z)=0\}\leq n$ for all $c\in \mathbb{C}$.\\
$(c)$ $F_{a}$ has infinitely many critical points and each is of multiplicity $1.$\\
\end{theorem}
\begin{remark}\label{r4} By (b) and the first half of (c) of Theorem \ref{npc}, it follows that for each $a\notin E$, $F_a$ is non-periodic. \end{remark}
From $(b)$ and the first half of $(c)$ in Theorem \ref{npc} we observe that $F_{a}$ cannot be of the form $f\circ q$ for any periodic entire function $f$ and polynomial $q$. Suppose on the contrary that \begin{equation}\label{eq:15} F_{a}(z)=f\circ q(z), \end{equation} for some periodic entire function $f$ with period $\lambda$ and for some polynomial $q$. Then
\begin{equation}\label{eq:16} F^{'}_{a}(z)=f^{'}(q(z))\cdot q^{'}(z). \end{equation}
By first half of (c) in Theorem \ref{npc}, $f^{'}$ has infinitely many zeros. Let $f^{'}(z_0)=0$ for some $z_0 \in \mathbb{C}$. Then $f(z_{0}+n\lambda)=f(z_0)$ and $f^{'}(z_{0}+n\lambda)=0$ for all $n\in \mathbb{Z}$. Let $w_{n}\in \mathbb{C}$ be such that $q(w_n)=z_{0}+n\lambda$ for $n\in \mathbb{Z}$. Then $F_{a}(w_n)=f(z_0)$ and $F_{a}^{'}(w_n)=0$ for all $n \in \mathbb{Z}$, which violates (b) of Theorem \ref{npc}.\\
In fact, an entire function satisfying $(b)$ and $(c)$ of Theorem \ref{npc} is not of the form $g\circ Q$, where $g$ is a periodic entire function and $Q$ is a polynomial. Thus, by Ng [\cite{ng}, Theorem 1], it follows that if $f\in \mathcal{E}_{T}$ satisfies $(a), \ (b),$ and $(c)$ of Theorem \ref{npc} and permutes with a non-linear entire function $g,$ then $J(f)=J(g)$; Theorem \ref{npc} provides an illustration of this conclusion.
\textbf{Proof of Theorem \ref{npc}} Let $E_0$ be a countable subset of $\mathbb{C}$ such that the assertions of Lemma \ref{KSC} hold for $F_{a}(z)=H(z)-a\alpha(z)$ whenever $a\in \mathbb{C}\setminus E_{0}$. Clearly, \begin{equation}\label{eq:11} N(r,0,F_{a}^{'})=N\(r,a,\frac{H^{'}}{\alpha^{'}}\). \end{equation} By the second fundamental theorem of Nevanlinna [\cite{Hayman-1}, Theorem 2.3, p.43], it follows that for any $k\in (0,1)$, \begin{equation}\label{eq:12} N\(r,a,\frac{H^{'}}{\alpha^{'}}\)> k T \(r,\frac{H^{'}}{\alpha^{'}}\) \end{equation} holds for every $r$ outside a set of finite linear measure and for every $a$ outside $E_1$, where $E_1$ is an at most countable subset of $\mathbb{C}$. Let $E_2=E_{0}\cup E_{1}$. Then $E_2$ is an at most countable subset of $\mathbb{C}$ and (\ref{eq:12}) holds for every $a\in \mathbb{C}\setminus E_2$, and thus from (\ref{eq:11}), we have \begin{equation}\label{eq:13} N(r,0,F_{a}^{'})>k T\(r, \frac{H^{'}}{\alpha^{'}}\). \end{equation} Since $T(r,\alpha)=o\(T(r,H)\)$, (\ref{eq:13}) gives $$N(r,0,F_{a}^{'})>k T\(r, H^{'}\),$$ and hence \begin{equation}\label{eq:14} N(r,0,F_{a}^{'})>k T\(r, F_{a}^{'}\) \end{equation} holds for all $r$ outside a set of finite linear measure and for all $a\in \mathbb{C}\setminus E_2$. Thus, combining Lemma \ref{KSC} with Lemma \ref{Noda}, it follows that $F_{a}(z)$ is left-prime in entire sense, for all $a\in \mathbb{C}\setminus E_2$.
We now show that $F_a$ is right-prime in entire sense when $P(z)$ is a linear polynomial. Let $F_a=g\circ h$, where $g\in \mathcal{E}_{T}$ and $h$ is a polynomial of degree at least two. Then, from (\ref{eq:14}), $g'$ has infinitely many zeros $\left\{z_n\right\}$. For all $n$ sufficiently large, $h(z)=z_n$ admits at least two distinct roots, which are solutions of the simultaneous equations
$$F_a(z)=g(z_n), F_a'(z)=0,$$
contradicting the fact that, by $(b)$ with $n=1$, these simultaneous equations have at most one solution. Thus $h$ is linear and hence $F_a$ is right-prime in entire sense. This shows that $F_a$ is prime in entire sense. By Remark \ref{r4} and Lemma 3.1 of \cite{chuang}, $F_a$ is prime.
(b) follows from Lemma \ref{KSC} whereas the first half of (c) follows from equation (\ref{eq:14}) .\\ To prove the second half of (c), suppose there is $z_0\in \mathbb{C}$ such that $F_{a}^{'}(z_0)=0, F_{a}^{''}(z_0)=0$. Then $$a= \frac{H^{'}(z_0)}{\alpha^{'}(z_0)},~~ \alpha^{'}(z_0)H^{''}(z_0)-\alpha^{''}(z_0)H^{'}(z_0)=0$$ {\it Claim 1 : $\alpha^{'}(z)H^{''}(z)-\alpha^{''}(z)H^{'}(z)\not\equiv0$}.\\
Suppose on the contrary that $\alpha^{'}(z) H^{''}(z)-\alpha^{''}(z)H^{'}(z)\equiv 0.$ Then $$H^{''}=\frac{\alpha^{''}}{\alpha^{'}}H^{'}.$$ Since $\alpha^{'} $ has at least one zero and $ \alpha^{'}$ and $H^{'}$ have no common zero, it follows that $H^{''}$ has a pole, a contradiction; this proves the claim. \\ Now it follows that the set $$E_{3}:=\left\{t=\frac{H^{'}(z)}{\alpha^{'}(z)}:F^{''}_{t}(z)=0 \right\}$$ is at most countable and, for any $a\notin E_3$, $\left\{z\in\mathbb{C}:F_{a}^{'}(z)=0,\ F_{a}^{''}(z)=0 \right\}=\phi$. Therefore, $F^{'}_{a}(z)$ has only simple zeros for $a\notin E_3$.\\
The set $E:=E_2\cup E_3$ is at most countable and the above conclusions hold for each $a \notin E$. $\qed$
\end{document} | arXiv |
Is the analysis as taught in universities in fact the analysis of definable numbers?
Ten years ago, when I studied in university, I had no idea about definable numbers, but I came to this concept myself. My thoughts were as follows:
All numbers are divided into two classes: those which can be unambiguously defined by a limited set of their properties (definable) and those such that for any limited set of their properties there is at least one other number which also satisfies all these properties (undefinable).
It is evident that since the number of properties is countable, the set of definable numbers is countable. So the set of undefinable numbers forms a continuum.
It is impossible to give an example of an undefinable number, and one researcher cannot communicate an undefinable number to another. Whatever number of properties he communicates, there is always another number which satisfies all these properties, so the researchers cannot be confident that they are speaking about the same number.
However, there are probability-based algorithms which give an undefinable number as a limit, for example, by throwing dice and writing consecutive digits after the decimal point.
But the main question that bothered me was that the analysis course we received heavily relied on constructs such as "let $a$ be a number such that...", "for each $s$ in the interval..." etc. These seemed to heavily exploit the properties of definable numbers and as such one can expect the theorems of analysis to be correct only on the set of definable numbers. Even the definitions of arithmetic operations over reals assumed the numbers are definable. Unfortunately one cannot take an undefinable number to bring a counter-example just because there is no example of undefinable number. How can we know that all of those theorems of analysis are true for the whole continuum and not just for a countable subset?
Anixx
I disagree with the continuing votes to close. The topic of definability is mathematically rich and forms the basis of huge parts of model theory, particularly where it connects with algebra and algebraic geometry, such as in the deep work of o-minimality. In the set-theoretic context, various technical meta-mathematical issues become prominent. The question is well-motivated, sincere and has mathematically interesting answers.
In particular, I can imagine further technical answers arguing the line that in a model of $V=HOD$, the definable objects indeed form an elementary substructure of the universe, fulfilling the OP's observation that statements of analysis can be viewed as ultimately about definable objects.
Similar issues with definability lay at the heart of this MO question: mathoverflow.net/questions/34710/…
The concept of definable real number, although seemingly easy to reason with at first, is actually laden with subtle metamathematical dangers to which both your question and the Wikipedia article to which you link fall prey. In particular, the Wikipedia article contains a number of fundamental errors and false claims about this concept. (Update, April 2018: The Wikipedia article, Definable real numbers, is now basically repaired and includes a link to this answer.)
The naive treatment of definability goes something like this: In many cases we can uniquely specify a real number, such as $e$ or $\pi$, by providing an exact description of that number, by providing a property that is satisfied by that number and only that number. More generally, we can uniquely specify a real number $r$ or other set-theoretic object by providing a description $\varphi$, in the formal language of set theory, say, such that $r$ is the only object satisfying $\varphi(r)$.
The naive account continues by saying that since there are only countably many such descriptions $\varphi$, but uncountably many reals, there must be reals that we cannot describe or define.
But this line of reasoning is flawed in a number of ways and ultimately incorrect. The basic problem is that the naive definition of definable number does not actually succeed as a definition. One can see the kind of problem that arises by considering ordinals, instead of reals. That is, let us suppose we have defined the concept of definable ordinal; following the same line of argument, we would seem to be led to the conclusion that there are only countably many definable ordinals, and that therefore some ordinals are not definable and thus there should be a least ordinal $\alpha$ that is not definable. But if the concept of definable ordinal were a valid set-theoretic concept, then this would constitute a definition of $\alpha$, making a contradiction. In short, the collection of definable ordinals either must exhaust all the ordinals, or else not itself be definable.
The point is that the concept of definability is a second-order concept, that only makes sense from an outside-the-universe perspective. Tarski's theorem on the non-definability of truth shows that there is no first-order definition that allows us a uniform treatment of saying that a particular particular formula $\varphi$ is true at a point $r$ and only at $r$. Thus, just knowing that there are only countably many formulas does not actually provide us with the function that maps a definition $\varphi$ to the object that it defines. Lacking such an enumeration of the definable objects, we cannot perform the diagonalization necessary to produce the non-definable object.
This way of thinking can be made completely rigorous in the following observations:
If ZFC is consistent, then there is a model of ZFC in which every real number and indeed every set-theoretic object is definable. This is true in the minimal transitive model of set theory, by observing that the collection of definable objects in that model is closed under the definable Skolem functions of $L$, and hence by Condensation collapses back to the same model, showing that in fact every object there was definable.
More generally, if $M$ is any model of ZFC+V=HOD, then the set $N$ of parameter-free definable objects of $M$ is an elementary substructure of $M$, since it is closed under the definable Skolem functions provided by the axiom V=HOD, and thus every object in $N$ is definable.
These models of set theory are pointwise definable, meaning that every object in them is definable in them by a formula. In particular, it is consistent with the axioms of set theory that EVERY real number is definable, and indeed, every set of reals, every topological space, every set-theoretic object at all is definable in these models.
The pointwise definable models of set theory are exactly the prime models of the models of ZFC+V=HOD, and they all arise exactly in the manner I described above, as the collection of definable elements in a model of V=HOD.
In recent work (soon to be submitted for publication), Jonas Reitz, David Linetsky and I have proved the following theorem:
Theorem. Every countable model of ZFC and indeed of GBC has a forcing extension in which every set and class is definable without parameters.
In these pointwise definable models, every object is uniquely specified as the unique object satisfying a certain property. Although this is true, the models also believe that the reals are uncountable and so on, since they satisfy ZFC and this theory proves that. The models are simply not able to assemble the definability function that maps each definition to the object it defines.
And therefore neither are you able to do this in general. The claims made in both in your question and the Wikipedia page on the existence of non-definable numbers and objects, are simply unwarranted. For all you know, our set-theoretic universe is pointwise definable, and every object is uniquely specified by a property.
Update. Since this question was recently bumped to the main page by an edit to the main question, I am taking this opportunity to add a link to my very recent paper "Pointwise Definable Models of Set Theory", J. D. Hamkins, D. Linetsky, J. Reitz, which explains some of these definability issues more fully. The paper contains a generally accessible introduction, before the more technical material begins.
Joel David Hamkins
@Anixx: No, this is not what Joel was saying. He did not say that it is consistent to "postulate in ZFC that undefinable numbers do not exist". What he was saying was that ZFC cannot even express the notion "is definable in ZFC". And no, this has absolutely nothing to do with constructivism (also please note that even in constructivism uncountable means "not countable", whereas you stated that it means "no practical enumeration" whatever that might mean).
– Andrej Bauer
Joel made a very fine answer, please study it carefully. Joel states that there are models of ZFC such that every element of the model is definable. This does not mean that inside the model the statement "every element is definable" is valid. The statement is valid externally, as a meta-statement about the model. Internally, inside the model, we cannot even express the statement.
This is off-topic, but: it makes no sense to claim that "constructivist continuum is countable in ZFC sense". What might be the case is that there is a model of constructive mathematics in ZFC such that the continuum is interpreted by a countable set. Indeed, we can find such a model, but we can also find a model in which this is not the case. Moreover, any model of ZFC is a model of constructive set theory. You see, constructive mathematics is more general than classical mathematics, and so in particular anything that is constructively valid is also classically valid.
A minor technical comment on the first bullet point in Joel's answer: To use the minimal transitive model, one needs to assume that ZFC has well-founded models, not just that it's consistent. The main claim there, that there is a pointwise definable model of ZFC, is nevertheless correct on the basis of mere consistency, essentially by the second bullet point plus the consistency of V=HOD relative to ZFC.
– Andreas Blass
Andrej, one answer that I have heard from both Harvey Friedman and John Steel (in different contexts) is that we study set theory, not "models of set theory". As such we are interested in sets rather than "artificial" constructs like minimal models. (Of course, the study of models of set theory is in itself very interesting, and it is not always so easy to separate one from the other.)
– Andrés E. Caicedo
You can also talk about arithmetically definable real numbers: those for which the Dedekind cut of rationals is of the form: $$\{m/n: \forall x_1 \exists x_2 \ldots \forall x_{k-1} \exists x_k\, p(m,n,x_1,\ldots,x_k)=0\},$$ where the $x$'s range over integers, and $p$ is a polynomial with integer coefficients.
Then on this definition of definability: $e$ and $\pi$ and all the familiar reals are definable. Only countably many numbers are definable. There must be other real numbers which are undefinable. And it all makes sense, and is even provably consistent, in ordinary set theory.
A standard reference for this way of thinking is the system $ACA_0$ in Simpson's Subsystems of Second-Order Arithmetic.
The cost of this metamathematical simplicity is a small change to the mathematics: any definable bounded sequence of reals has a definable least upper bound, but an uncountable definable set of reals may not. Feferman's notes on Predicative Foundations of Analysis show how to develop standard analysis on this basis. If we changed mathematics as taught in universities to be based on predicative analysis, few undergraduates or people outside the math department would notice much difference.
Matt F.
Interesting. What are polynomials for $e$ and $\pi$?
– Gerald Edgar
@GeraldEdgar, start from $e$ as $$\{m/n: \exists x\exists y\exists z\ y=(1+x)^x\ \&\ z=x^x\ \&\ m/n<y/z\}.$$ The rest is standard coding, of which the only difficult part is using Godel's $\beta$ lemma to encode lists verifying $y=(1+x)^x$ and $z=x^x$. The easy references are from the use of these techniques in Hilbert's 10th problem, e.g. section 1 here: maa.org/sites/default/files/pdf/upload_library/22/Ford/…
– Matt F.
In more detail, using 13 positive-integer variables: $$\newcommand{\e}{\exists} e = \{m/n: \e a \e b \e c \e d\, \e q \e r \e s \e t \e u\, \forall v \e w \e x \e y\\ (mc-nb + d)^2+\\ (q - (1+r)s - c)^2+ (q - (1+ra)t - b)^2+ (r - u - b)^2+\\ (v + 1 - w - a)^2 ((q - wa - (1+(v+1)r)x)^2+(q - w(1+a) - (1+vr)y)^2)=0 \}$$
@ZachTeitler, you can replace $x^y=z$ with its Diophantine definition (again, as in the references on Hilbert's 10th problem), and that will convert these definitions from exponential polynomials to ordinary polynomials.
$\begingroup$ @ZachTeitler It's not saying $x^x$ or $(1 + x)^x$ are polynomials, rather that for the expressions $y = (1 + x)^x$ and $z = x^x$ there exist polynomial expressions (using extra variables) that are satisfiable if and only if the original ones are (equisatisfiable). $\endgroup$
– orlp
While you cannot define undefinable numbers, you can quantify over all real numbers, whether or not they are definable. "Let $a$ be a number" does not assume that $a$ is definable, but is merely a shorthand for quantification over $a$. The theorems in analysis are safe.
Definability is a subtle issue that was only partially dealt with in Joel David Hamkins excellent answer. $(V,∈)$-definability (as a predicate on sets) is $(V,∈)$-definable if and only if every ordinal is already $(V,∈)$-definable (in which case, $(V,∈)$-definability coincides with ordinal definability; a set is $(V,∈)$-definable iff it is first order parameter-free definable in $(V,∈)$). Intuitively, not every ordinal is $(V,∈)$-definable, and this can be formalized and proved by adding a $(V,∈)$ satisfaction relation Tr and replacement axiom schema for formulas involving Tr (this is not conservative over ZFC and does not hold in the minimal transitive model of ZFC). However, every consistent theory T extending ZF has a model (called definable ordinal model or Paris model) in which every ordinal is definable; for a complete T with a well-founded model, all Paris models are well-founded. This applies even if T proves that the set of ordinal definable real numbers is countable. It also applies to theories with Tr by using $(V,∈,\mathrm{Tr})$ definability, and analogously with other extensions.
In any case, we can speak of existence of $(V,∈)$-definable examples without ambiguity since there is a $(V,∈)$-definable set satisfying $P$ iff there is an ordinal definable set satisfying $P$ (and analogously if $P$ has parameters and we allow definability with those parameters). A set is ordinal definable iff it is definable in some $V_κ$. Now, V=HOD is $Π^V_2$ conservative over ZFC (and over ZFC + $φ$ for a $Σ^V_2$ $φ$), so even if the proofs assumed definability (which they do not), the theorems (which for analysis and 'ordinary mathematics' are all $Π^V_2$) would still be correct.
That does not mean that the theorems have definable examples. In second order arithmetic (where the examples are real numbers), existence of definable examples holds assuming projective determinacy (and for $Σ^1_2$ predicates in just ZFC), but existence of ordinal definable nonmeasurable sets (and likely other 'non-well-behaved' sets of reals) is independent of ZFC and ordinary large cardinal axioms.
Dmytro Taranovsky
It might be of some small interest to note the following result of Kenneth McAloon (from his paper, "Consistency Results About Ordinal Definability", Annals of Mathematical Logic, Vol. 2,No. 4 (1971) 449-467--note that he denotes as '$K$ = $V$' the statement "Every set is ordinal-definable" so his result also holds for $HOD$ as well):
Theorem. If $ZF$ is consistent, then so are
(i) $ZF$ + $GCH$ + $V$ = $OD$ + $V$ $\neq$ $L$;
(ii) $ZF$ + $V$ = $OD$ + $2^{\aleph_0}$ $\neq$ $\aleph_1$. [Since Andres is correct in pointing out that McAloon abbreviates "Every set is ordinal-definable" ($V$ = $OD$) as '$V$ = $K$' and also notes that "...if $V$ = $OD$ then $V$ = $HOD$", I will replace McAloon's $V$ = $K$ with $V$ = $OD$ in order to strike an uneasy balance between directly quoting from a Source and paraphrasing the Source]
Since one can replace $V$ = $OD$ with $V$ = $HOD$, the models of these theories might be interesting universes for analysts to study in case mathematical practice should ever find use for definable non-constructible sets.
Thomas Benjamin
You misquoted the result. Again, the confusion is identifying a model with a theory.
(Using $K$ is unfortunate here, since the letter now has a different, technical meaning.)
(cont.) I can always re-edit. Thanks in advance for your help in this matter, it is always appreciated.
– Thomas Benjamin
$V=K$ makes no sense if $K$ is a statement.
I also have the paper, in fact. What is written there is that "The proposition `Every set is ordinal-definable' is abbreviated ${\rm V} = {\rm K}$'' (p. 449). This is different in an essential way from what you wrote.
Explore 11,750 preprints on the Authorea Preprint Repository
A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
A novel approach to diagnosing Southern Hemisphere planetary wave activity and its in...
Damien Irving
Southern Hemisphere mid-to-upper tropospheric planetary wave activity is characterized by the superposition of two zonally-oriented, quasi-stationary waveforms: zonal wavenumber one (ZW1) and zonal wavenumber three (ZW3). Previous studies have tended to consider these waveforms in isolation and with the exception of those studies relating to sea ice, little is known about their impact on regional climate variability. We take a novel approach to quantifying the combined influence of ZW1 and ZW3, using the strength of the hemispheric meridional flow as a proxy for zonal wave activity. Our methodology adapts the wave envelope construct routinely used in the identification of synoptic-scale Rossby wave packets and improves on existing approaches by allowing for variations in both wave phase and amplitude. While ZW1 and ZW3 are both prominent features of the climatological circulation, the defining feature of highly meridional hemispheric states is an enhancement of the ZW3 component. Composites of the mean surface conditions during these highly meridional, ZW3-like anomalous states (i.e. months of strong planetary wave activity) reveal large sea ice anomalies over the Amundsen and Bellingshausen Seas during autumn and along much of the East Antarctic coastline throughout the year. Large precipitation anomalies in regions of significant topography (e.g. New Zealand, Patagonia, coastal Antarctica) and anomalously warm temperatures over much of the Antarctic continent were also associated with strong planetary wave activity. The latter has potentially important implications for the interpretation of recent warming over West Antarctica and the Antarctic Peninsula.
Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preproce...
Andrew Wetzel
In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with $\mstar=10^{3-9} \msun$ using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically $5-8 \gyr$ ago at z = 0.5 − 1. However, they first fell into any host halo typically $7-10 \gyr$ ago at z = 0.7 − 1.5. This difference arises because many satellites experienced "group preprocessing" in another host halo, typically of $\mvir \sim 10^{10-12} \msun$, before falling into the MW/M31 halos. Satellites with lower-mass and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; half of all satellites with $\mstar < 10^6 \msun$ were preprocessed in a group. Infalling groups also drive most satellite-satellite mergers within the MW/M31 halos. Finally, _none_ of the surviving satellites at z = 0 were within the virial radius of their MW/M31 halo during reionization (z > 6), and only <4% were satellites of any other host halo during reionization. Thus, effects of cosmic reionization versus host-halo environment on the formation histories of surviving dwarf galaxies in the Local Group occurred at distinct epochs and are separable in time.
Distinguishing disorder from order in irreversible decay processes
Jonathan Nichols
Fluctuating rate coefficients are necessary when modeling disordered kinetic processes with mass-action rate equations. However, measuring the fluctuations of rate coefficients is a challenge, particularly for nonlinear rate equations. Here we present a measure of the total disorder in irreversible decay i A → products, i = 1, 2, 3, …n governed by (non)linear rate equations – the inequality between the time-integrated square of the rate coefficient (multiplied by the time interval of interest) and the square of the time-integrated rate coefficient. We apply the inequality to empirical models for statically and dynamically disordered kinetics with i ≥ 2. These models serve to demonstrate that the inequality quantifies the cumulative variations in a rate coefficient, and the equality is a bound only satisfied when the rate coefficients are constant in time.
Real-space grids and the Octopus code as tools for the development of new simulation...
Xavier Andrade
Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems.
The "Paper" of the Future
Alyssa Goodman
_A 5-minute video demonstration of this paper is available at this YouTube link._ PREAMBLE A variety of research on human cognition demonstrates that humans learn and communicate best when more than one processing system (e.g. visual, auditory, touch) is used. And, related research also shows that, no matter how technical the material, most humans also retain and process information best when they can put a narrative "story" to it. So, when considering the future of scholarly communication, we should be careful not to do blithely away with the linear narrative format that articles and books have followed for centuries: instead, we should enrich it. Much more than text is used to communicate in Science. Figures, which include images, diagrams, graphs, charts, and more, have enriched scholarly articles since the time of Galileo, and ever-growing volumes of data underpin most scientific papers. When scientists communicate face-to-face, as in talks or small discussions, these figures are often the focus of the conversation. In the best discussions, scientists have the ability to manipulate the figures, and to access underlying data, in real-time, so as to test out various what-if scenarios, and to explain findings more clearly. THIS SHORT ARTICLE EXPLAINS—AND SHOWS WITH DEMONSTRATIONS—HOW SCHOLARLY "PAPERS" CAN MORPH INTO LONG-LASTING RICH RECORDS OF SCIENTIFIC DISCOURSE, enriched with deep data and code linkages, interactive figures, audio, video, and commenting.
Compressed Sensing for the Fast Computation of Matrices: Application to Molecular Vib...
Jacob Sanders
This article presents a new method to compute matrices from numerical simulations based on the ideas of sparse sampling and compressed sensing. The method is useful for problems where the determination of the entries of a matrix constitutes the computational bottleneck. We apply this new method to an important problem in computational chemistry: the determination of molecular vibrations from electronic structure calculations, where our results show that the overall scaling of the procedure can be improved in some cases. Moreover, our method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations, resulting in a significant 3\(\times\) speed-up in actual calculations.
Large-Scale Microscopic Traffic Behaviour and Safety Analysis of Québec Roundabout De...
Paul St-Aubin
INTRODUCTION Roundabouts are a relatively new design for intersection traffic management in North America. With great promises from abroad in terms of safety, as well as capacity—roundabouts are a staple of European road design—roundabouts have only recently proliferated in parts of North America, including the province of Québec. However, questions still remain regarding the feasibility of introducing the roundabout to regions where driving culture and road design philosophy differ and where drivers are not habituated to their use. This aspect of road user behaviour integration is crucial for their implementation, for roundabouts manage traffic conflicts passively. In roundabouts, road user interactions and driving conflicts are handled entirely by way of driving etiquette between road users: lane merging, right-of-way, yielding behaviour, and eye contact in the case of vulnerable road users are all at play for successful passage negotiation at a roundabout. This is in contrast with typical North American intersections managed by computer-controlled traffic-light controllers (or on occasion police officers) and traffic circles of all kinds which are also signalized. And while roundabouts share much in common with 4 and 2-way stops, they are frequently used for high-capacity, even high-speed, intersections where 4 and 2-way stops would normally not be justified. Resistance to adoption in some areas is still important, notably on the part of vulnerable road users such as pedestrians and cyclists but also by some drivers too. While a number of European studies cite reductions in accident probability and accident severity, particularly for the Netherlands , Denmark , and Sweden , research on roundabouts in North America is still limited, and even fewer attempts at microscopic behaviour analysis exist anywhere in the world. The latter is important because it provides insight over the inner mechanics of driving behaviour which might be key to tailoring roundabout design for regional adoption and implementation efforts. Fortunately, more systematic and data-rich analysis techniques are being made available today. This paper proposes the application of a novel, video-based, semi-automated trajectory analysis approach for large-scale microscopic behavioural analysis of 20 of 100 available roundabouts in Québec, investigating 37 different roundabout weaving zones. The objectives of this paper are to explore the impact of Québec roundabout design characteristics, their geometry and built environment on driver behaviour and safety through microscopic, video-based trajectory analysis. Driver behaviour is characterized by merging speed and time-to-collision , a maturing indicator of surrogate safety and behaviour analysis in the field of transportation safety. In addition, this work represents one of the largest applications of surrogate safety analysis to date.
Comparison of Various Time-to-Collision Prediction and Aggregation Methods for Surrog...
INTRODUCTION Traditional methods of road safety analysis rely on direct road accident observations, data sources which are rare and expensive to collect and which also carry the social cost of placing citizens at risk of unknown danger. Surrogate safety analysis is a growing discipline in the field of road safety analysis that promises a more pro-active approach to road safety diagnosis. This methodology uses non-crash traffic events and measures thereof as predictors of collision probability and severity as they are significantly more frequent, cheaper to collect, and have no social impact. Time-to-collision (TTC) is an example of an indicator that indicates collision probability primarily: the smaller the TTC, the less likely drivers have time to perceive and react before a collision, and thus the higher the probability of a collision outcome. Relative positions and velocities between road users or between a user and obstacles can be characterised by a collision course and the corresponding TTC. Meanwhile, driving speed (absolute speed) is an example of an indicator that measures primarily collision severity. The higher the travelling speed, the more stored kinetic energy is dissipated during a collision impact . Similarly, large speed differentials between road users or with stationary obstacles may also contribute to collision severity, though the TTC depends on relative distance as well. Driving speed is used extensively in stopping-sight distance models , some even suggesting that drivers modulate their emergency braking in response to travel speed . Others content that there is little empirical evidence of a relationship between speed and collision probability . Many surrogate safety methods have been used in the literature, especially recently with the renewal of automated data collection methods, but consistency in the definitions of traffic events and indicators, in their interpretation, and in the transferability of results is still lacking. While a wide diversity of models demonstrates that research in the field is thriving, there remains a need of comparison of the methods and even a methodology for comparison in order to make surrogate safety practical for practitioners. For example, time-to-collision measures collision course events, but the definition of a collision course lacks rigour in the literature. Also lacking is some systematic validation of the different techniques. Some early attempts have been made with the Swedish Traffic Conflict Technique using trained observers, though more recent attempts across different methodologies, preferably automated and objectively-defined measures, are still needed. Ideally, this would be done with respect to crash data and crash-based safety diagnosis. The second best method is to compare the characteristics of all the methods and their results on the same data set, but public benchmark data is also very limited despite recent efforts . The objectives of this paper are to review the definition and interpretation of one of the most ubiquitous and least context-sensitive surrogate safety indicators, namely time-to-collision, for surrogate safety analysis using i) consistent, recent, and, most importantly, objective definitions of surrogate safety indicators, ii) a very large data set across numerous sites, and iii) the latest developments in automated analysis. 
This work examines the use of various motion prediction methods, constant velocity, normal adaptation and observed motion patterns, for the TTC safety indicator (for its properties of transferability), and space and time aggregation methods for continuous surrogate safety indicators. This represents an application of surrogate safety analysis to one of the largest data sets to date.
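As an aside for concreteness (not part of the original abstract): under the simplest of the motion-prediction methods mentioned above, constant velocity, the TTC of a pair of road users reduces to solving a quadratic for the first time their predicted positions come within a chosen collision radius. The sketch below is a minimal, hypothetical illustration of that idea only; the collision radius, point-object assumption, and function name are ours, and the indicator definitions used in the study itself are more elaborate.

import numpy as np

def ttc_constant_velocity(p1, v1, p2, v2, collision_radius=2.0):
    """Time-to-collision (s) for two road users under a constant-velocity prediction.

    p1, p2: current positions (m); v1, v2: velocities (m/s), given as 2-D vectors.
    Returns the smallest non-negative time at which the predicted positions come
    within `collision_radius` metres of each other, or None if the users are not
    on a collision course under this motion model.
    """
    dp = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)  # relative position
    dv = np.asarray(v2, dtype=float) - np.asarray(v1, dtype=float)  # relative velocity
    # Solve |dp + t * dv|^2 = r^2 for t >= 0, a quadratic a*t^2 + b*t + c = 0.
    a = float(dv @ dv)
    b = 2.0 * float(dp @ dv)
    c = float(dp @ dp) - collision_radius ** 2
    if a == 0.0:          # identical velocities: the gap never changes
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:        # predicted paths never come within the radius
        return None
    t1 = (-b - disc ** 0.5) / (2.0 * a)
    t2 = (-b + disc ** 0.5) / (2.0 * a)
    if t2 < 0.0:          # closest approach already in the past
        return None
    return max(t1, 0.0)   # first future instant within the radius

# Illustrative use: a vehicle 20 m behind another, closing at 2 m/s -> TTC = 9 s.
print(ttc_constant_velocity([0.0, 0.0], [10.0, 0.0], [20.0, 0.0], [8.0, 0.0]))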
The Fork Factor: an academic impact factor based on reuse.
Ferdinando Pucci
HOW IS ACADEMIC RESEARCH EVALUATED? There are many different ways to determine the impact of scientific research. One of the oldest and best established measures is to look at the Impact Factor (IF) of the academic journal where the research has been published. The IF is simply the average number of citations to recent articles published in such an academic journal. The IF is important because the reputation of a journal is also used as a proxy to evaluate the relevance of past research performed by a scientist when s/he is applying to a new position or for funding. So, if you are a scientist who publishes in high-impact journals (the big names) you are more likely to get tenure or a research grant. Several criticisms have been made to the use and misuse of the IF. One of these is the policies that academic journal editors adopt to boost the IF of their journal (and get more ads), to the detriment of readers, writers and science at large. Unfortunately, these policies promote the publication of sensational claims by researchers who are in turn rewarded by funding agencies for publishing in high IF journals. This effect is broadly recognized by the scientific community and represents a conflict of interests, that in the long run increases public distrust in published data and slows down scientific discoveries. Scientific discoveries should instead foster new findings through the sharing of high quality scientific data, which feeds back into increasing the pace of scientific breakthroughs. It is apparent that the IF is a crucially deviated player in this situation. To resolve the conflict of interest, it is thus fundamental that funding agents (a major driving force in science) start complementing the IF with a better proxy for the relevance of publishing venues and, in turn, scientists' work. RESEARCH IMPACT IN THE ERA OF FORKING. A number of alternative metrics for evaluating academic impact are emerging. These include metrics to give scholars credit for sharing of raw science (like datasets and code), semantic publishing, and social media contribution, based not solely on citation but also on usage, social bookmarking, conversations. We, at Authorea, strongly believe that these alternative metrics should and will be a fundamental ingredient of how scholars are evaluated for funding in the future. In fact, Authorea already welcomes data, code, and raw science materials alongside its articles, and is built on an infrastructure (Git) that naturally poses as a framework for distributing, versioning, and tracking those materials. Git is a versioning control platform currently employed by developers for collaborating on source code, and its features perfectly fit the needs of most scientists as well. A versioning system, such as Authorea and GitHub, empowers FORKING of peer-reviewed research data, allowing a colleague of yours to further develop it in a new direction. Forking inherits the history of the work and preserves the value chain of science (i.e., who did what). In other words, forking in science means _standing on the shoulder of giants_ (or soon to be giants) and is equivalent to citing someone else's work but in a functional manner. Whether it is a "negative" result (we like to call it non-confirmatory result) or not, publishing your peer reviewed research in Authorea will promote forking of your data. (To learn how we plan to implement peer review in the system, please stay tuned for future posts on this blog.) MORE FORKING, MORE IMPACT, HIGHER QUALITY SCIENCE. 
Obviously, the more of your research data are published, the higher are your chances that they will be forked and used as a basis for groundbreaking work, and in turn, the higher the interest in your work and your academic impact. Whether your projects are data-driven peer reviewed articles on Authorea discussing a new finding, raw datasets detailing some novel findings on Zenodo or Figshare, source code repositories hosted on Github presenting a new statistical package, every bit of your work that can be reused, will be forked and will give you credit. Do you want to do a favor to science? Publish also non-confirmatory results and help your scientific community to quickly spot bad science by publishing a dead end fork (Figure 1).
The effect of carbon subsidies on marine planktonic niche partitioning and recruitmen...
Charles Pepe-Ranney
INTRODUCTION Biofilms are diverse and complex microbial consortia, and, the biofilm lifestyle is the rule rather than the exception for microbes in many environments. Large and small-scale biofilm architectural features play an important role in their ecology and influence their role in biogeochemical cycles . Fluid mechanics impact biofilm structure and assembly , but it is less clear how other abiotic factors such as resource availability affect biofilm assembly. Aquatic biofilms initiate with seed propagules from the planktonic community . Thus, resource amendments that influence planktonic communities may also influence the recruitment of microbial populations during biofilm community assembly. In a crude sense, biofilm and planktonic microbial communities divide into two key groups: oxygenic phototrophs including eukaryotes and cyanobacteria (hereafter "photoautotrophs"), and heterotrophic bacteria and archaea. This dichotomy, admittedly an abstraction (e.g. non-phototrophs can also be autotrophs), can be a powerful paradigm for understanding community shifts across ecosystems of varying trophic state . Heterotrophs meet some to all of their organic carbon (C) requirements from photoautotroph produced C while simultaneously competing with photoautotrophs for limiting nutrients such as phosphorous (P) . The presence of external C inputs, such as terrigenous C leaching from the watershed or C exudates derived from macrophytes , can alleviate heterotroph reliance on photoautotroph derived C and shift the heterotroph-photoautotroph relationship from commensal and competitive to strictly competitive . Therefore, increased C supply should increase the resource space available to heterotrophs and increase competition for mineral nutrients decreasing nutrients available for photoautotrophs (assuming that heterotrophs are superior competitors for limiting nutrients as has been observed ). These dynamics should result in the increase in heterotroph biomass relative to the photoautotroph biomass along a gradient of increasing labile C inputs. We refer to this differential allocation of limiting resources among components of the microbial community as niche partitioning. While these gross level dynamics have been discussed conceptually and to some extent demonstrated empirically , the effects of biomass dynamics on photoautotroph and heterotroph membership and structure has not been directly evaluated in plankton or biofilms. In addition, how changes in planktonic communities propagate to biofilms during community assembly is not well understood. We designed this study to test if C subsidies shift the biomass balance between autotrophs and heterotrophs within the biofilm or its seed pool (i.e. the plankton), and, to measure how changes in biomass pool size alter composition of the plankton and biofilm communities. Specifically, we amended marine mesocosms with varying levels of labile C input and evaluated differences in photoautotroph and heterotrophic bacterial biomass in plankton and biofilm samples along the C gradient. In each treatment we characterized plankton and biofilm community composition by PCR amplifying and DNA sequencing 16S rRNA genes and plastid 23S rRNA genes.
Lattice polymers with two competing collapse interactions
Andrea Bedini
We study a generalised model of self-avoiding trails, containing two different types of interaction (nearest-neighbour contacts and multiply visited sites), using computer simulations. This model contains various previously-studied models as special cases. We find that the strong collapse transition induced by multiply-visited sites is a singular point in the phase diagram and corresponds to a higher order multi-critical point separating a line of weak second-order transitions from a line of first-order transitions.
Non-cyanobacterial diazotrophs mediate dinitrogen fixation in biological soil crusts...
ABSTRACT Biological soil crusts (BSC) are key components of ecosystem productivity in arid lands and they cover a substantial fraction of the terrestrial surface. In particular, BSC N₂-fixation contributes significantly to the nitrogen (N) budget of arid land ecosystems. In mature crusts, N₂-fixation is largely attributed to heterocystous cyanobacteria, however, early successional crusts possess few N₂-fixing cyanobacteria and this suggests that microorganisms other than cyanobacteria mediate N₂-fixation during the critical early stages of BSC development. DNA stable isotope probing (DNA-SIP) with ¹⁵N₂ revealed that _Clostridiaceae_ and _Proteobacteria_ are the most common microorganisms that assimilate ¹⁵N₂ in early successional crusts. The _Clostridiaceae_ identified are divergent from previously characterized isolates, though N₂-fixation has previously been observed in this family. The Proteobacteria identified share >98.5 %SSU rRNA gene sequence identity with isolates from genera known to possess diazotrophs (e.g. _Pseudomonas_, _Klebsiella_, _Shigella_, and _Ideonella_). The low abundance of these heterotrophic diazotrophs in BSC may explain why they have not been characterized previously. Diazotrophs play a critical role in BSC formation and characterization of these organisms represents a crucial step towards understanding how anthropogenic change will affect the formation and ecological function of BSC in arid ecosystems. KEYWORDS: microbial ecology / stable isotope probing / nitrogen fixation / biological soil crusts
Ternary Ladder Operators
Benedict Irwin
ABSTRACT We develop a triplet operator system which encompasses the structure of quark combinations. Ladder operators are created. The constants β are currently being found.
Counting the Cost: A Report on APC-supported Open Access Publishing in a Research Lib...
Mark Newton
At one-hundred twenty-two articles published, the open access journal _Tremor and other Hyperkinetic Movements_ (tremorjournal.org, ISSN: 2160-8288), is growing its readership and expanding its influence among patients, clinicians, researchers, and the general public interested in issues of non-Parkinsonian tremor disorders. Among the characteristics that set the journal apart from similar publications, _Tremor_ is published in partnership with the library-based publications program at Columbia University's Center for Digital Research and Scholarship (CDRS). The production of _Tremor_ in conjunction with its editor, a researching faculty member, clinician, and epidemiologist at the Columbia University Medical Center, has pioneered several new workflows at CDRS: article-charge processing, coordination of vendor services, integration into PubMed Central, administration of publication scholarships granted through a patient-advocacy organization, and open source platform development among them. Open access publishing ventures in libraries often strive for lean operations by attempting to capitalize on the scholarly impact available through the use of templated and turnkey publication systems. For CDRS, production on _Tremor_ has provided opportunity to build operational capacity for more involved publication needs. The following report introduces a framework and account of the costs of producing such a publication as a guide to library and other non-traditional publishing operations interested in gauging the necessary investments. Following a review of the literature published to date on the costs of open access publishing and of the practice of journal publishing in academic libraries, the authors present a brief history of the _Tremor_ and a tabulation of the costs and expenditure of effort by library staff in production. Although producing _Tremor_ has been more expensive than other partner publications in the center's portfolio, the experiences have improved the library's capacity for addressing more challenging projects, and developments for _Tremor_ have already begun to be applied to other journals.
Large-Scale Automated Proactive Road Safety Analysis Using Video Data
Due to the complexity and pervasiveness of transportation in daily life, the use and combination of larger data sets and data streams promises smarter roads and a better understanding of our transportation needs and environment. For this purpose, ITS systems are steadily being rolled out, providing a wealth of information, and transitionary technologies, such as computer vision applied to low-cost surveillance or consumer cameras, are already leading the way. This paper presents, in detail, a practical framework for implementation of an automated, high-resolution, video-based traffic-analysis system, particularly geared towards researchers for behavioural studies and road safety analysis, or practitioners for traffic flow model validation. This system collects large amounts of microscopic traffic flow data from ordinary traffic using CCTV and consumer-grade video cameras and provides the tools for conducting basic traffic flow analyses as well as more advanced, pro-active safety and behaviour studies. This paper demonstrates the process step-by-step, illustrated with examples, and applies the methodology to a case study of a large and detailed study of roundabouts (nearly 80,000 motor vehicles tracked up to 30 times per second driving through a roundabout). In addition to providing a rich set of behavioural data about Time-to-Collision and gap times at nearly 40 roundabout weaving zones, some data validation is performed using the standard Measure of Tracking Accuracy with results in the 85-95% range.
Agroforestry: An adaptation measure for sub-Saharan African food systems in response...
Robert Orzanna
This paper examines the impact of increasing weather extremes due to climate change on African food systems. The specific focus lies on agroforestry adaptation measures that can be applied by smallholder farmers to protect their livelihoods and to make their food production more resilient against the effects of those weather extremes. The adoption potential of agroforestry is evaluated, taking into consideration regional environmental and socio-economic differences, and possible barriers to adoption with respect to extrinsic and intrinsic factors are outlined. According to the indicators that approximate extrinsic factors, a high adoption potential for agroforestry is likely to be found in Angola, Botswana, Cameroon, Cabo Verde, Gabon, Ghana, Mauritania and Senegal. A very low potential exists in Somalia, Eritrea, South Sudan and Rwanda.
Science was always meant to be open
Alberto Pepe
Here's my crux: I find myself criticizing over and over the way that scientific articles look today. I have said many times that scientists today write 21st-century research, using 20th-century tools, packaged in a 17th-century format. When I give talks, I often use 400-year-old articles to demonstrate that they look and feel similar to the articles we publish today. But the scientific article of the 1600's looked that way for a reason. This forthcoming article by explains: In the early 1600s, Galileo Galilei turned a telescope toward Jupiter. In his log book each night, he drew to-scale schematic diagrams of Jupiter and some oddly-moving points of light near it. Galileo labeled each drawing with the date. Eventually he used his observations to conclude that the Earth orbits the Sun, just as the four Galilean moons orbit Jupiter. History shows Galileo to be much more than an astronomical hero, though. His clear and careful record keeping and publication style not only let Galileo understand the Solar System, it continues to let anyone understand how Galileo did it. Galileo's notes directly integrated his data (drawings of Jupiter and its moons), key metadata (timing of each observation, weather, telescope properties), and text (descriptions of methods, analysis, and conclusions). Critically, when Galileo included the information from those notes in Sidereus Nuncius, this integration of text, data and metadata was preserved:
Two Local Volume Dwarf Galaxies Discovered in 21 cm Emission: Pisces A and B
Erik Tollerud
INTRODUCTION The properties of faint dwarf galaxies at or beyond the outer reaches of the Local Group (1 − 5 Mpc) probe the efficiency of environmentally driven galaxy formation processes and provide direct tests of cosmological predictions \citep[e.g., ][]{kl99ms, moo99ms, stri08commonmass, krav10satrev, kirby10, BKBK11, pontzen12, geha13}. However, searches for faint galaxies suffer from strong luminosity and surface brightness biases that render galaxies with LV ≲ 10⁶ L⊙ difficult to detect beyond the Local Group . Because of these biases, searching for nearby dwarf galaxies with methodologies beyond the standard optical star count methods are essential. This motivates searches for dwarf galaxies using the 21 cm emission line of neutral hydrogen (). While such searches cannot identify passive dwarf galaxies like most Local Group satellites, which lack , they have the potential to find gas-rich, potentially starforming dwarf galaxies. This is exemplified by the case of the Leo P dwarf galaxy, found first in and later confirmed via optical imaging . Here we describe two faint dwarf galaxies identified via emission in the first data release of the Galactic Arecibo L-band Feed Array (GALFA-HI) survey . As described below, they are likely within the Local Volume (<10 Mpc) but just beyond the Local Group (≳1 Mpc), so we refer to them as Pisces A and B. This paper is organized as follows: in Section [sec:data], we present the data used to identify these galaxies. In Section [sec:distance], we consider possible distance scenarios, while in Section [sec:conc] we provide context and some conclusions. Where relevant, we adopt a Hubble constant of $H_0=69.3 \; {\rm km \; s}^{-1}{\rm Mpc}^{-1}$ from WMAP9 .
Swabs to Genomes: A Comprehensive Workflow
Jenna M. Lang
Abstract The sequencing, assembly, and basic analysis of microbial genomes, once a painstaking and expensive undertaking, has become much easier for research labs with access to standard molecular biology and computational tools. However, there are a wide variety of options available for DNA library preparation and sequencing, and inexperience with bioinformatics can pose a significant barrier to entry for many who may be interested in microbial genomics. The objective of the present study was to design, test, troubleshoot, and publish a simple, comprehensive workflow from the collection of an environmental sample (a swab) to a published microbial genome; empowering even a lab or classroom with limited resources and bioinformatics experience to perform it.
Predictions for Observing Protostellar Outflows with ALMA
INTRODUCTION Young protostars are observed to launch energetic collimated bipolar mass outflows . These protostellar outflows play a fundamental role in the star formation process on a variety of scales. On sub-pc scales they entrain and unbind core gas, thus setting the efficiency at which dense gas turns into stars . Interaction between outflows and infalling material may regulate protostellar accretion and, ultimately, terminate it . On sub-pc up to cloud scales, outflows inject substantial energy into their surroundings, potentially providing a means of sustaining cloud turbulence over multiple dynamical times. The origin of outflows is attributed to the presence of magnetic fields, and a variety of different models have been proposed to explain the launching mechanism \citep[e.g.,][]{arce07}. Of these, the "disk-wind" model , in which the gas is centrifugally accelerated from the accretion disk surface, and the "X-wind" model , in which gas is accelerated along tightly wound field lines, are most commonly invoked to explain observed outflow signatures. However, investigating the launching mechanism is challenging because launching occurs on scales of a few stellar radii and during times when the protostar is heavily extincted by its natal gas. Consequently, separating outflow gas from accreting core gas, discriminating between models, and determining fundamental outflow properties are nontrivial. Three main approaches have been applied to studying outflows. First, single-dish molecular line observations have been successful in mapping the extent of outflows and their kinematics on core to cloud scales \citep[][]{bourke97,arce10,dunham14}. However, outflow gas with velocities comparable to the cloud turbulent velocity can only be extracted with additional assumptions and modeling \citep[e.g.,][]{arce01b,dunham14}, which are difficult to apply to confused, clustered star forming environments . Second, interferometry provide a means of mapping outflows down to 1,000 AU scales scales , and the Atacama Large Millimeter/submilllimeter Antenna (ALMA) is extending these limits down to sub-AU scales . However, interferometry is not suitable for producing large high-resolution maps and it resolves out larger scale structure. Consequently, it is difficult to assemble a complete and multi-scale picture of outflow properties with these observations. Finally, numerical simulations provide a complementary approach that supplies three-dimensional predictions for launching, entrainment and energy injection . The most promising avenue for understanding outflows lies at the intersection of numerical modeling and observations. By performing synthetic observations to model molecular and atomic lines, continuum, and observational effects, simulations can be mapped into the observational domain where they can be compared directly to observations \citep[e.g.,][]{Offner11,Offner12b,Mairs13}. Such direct comparisons are important for assessing the "reality" of the simulations, to interpret observational data and to assess observational uncertainties . In addition to observational instrument limitations, chemistry and radiative transfer introduce additional uncertainties that are difficult to quantify without realistic models . Synthetic observations have previously been performed in the context of understanding outflow opening angles , observed morphology , and impact on spectral energy distributions . The immanent completion of ALMA provides further motivation for predictive synthetic observations. 
Although ALMA will have unprecedented sensitivity and resolution compared to existing instruments, by nature interferometry resolves out large-scale structure and different configurations will be sensitive to different scales. Atmospheric noise and total observing time may also effect the fidelity of the data. Previous synthetic observations performed by suggest that the superior resolution of full ALMA and the Atacama Compact Array (ACA) will be able to resolve core structure and fragmentation prior to binary formation. predicts that ALMA will be able to resolve complex outflow velocity structure and helical structure in molecular emission. In this paper we seek to quantify the accuracy of different ALMA configurations in recovering fundamental gas properties such as mass, line-of-sight momentum, and energy. We use the casa software package to synthetically observe protostellar outflows in the radiation-hydrodynamic simulations of . By modeling the emission at different times, inclinations, molecular lines, and observing configurations we evaluate how well physical quantities can be measured in the star formation process. In section §[Methods] we describe our methods for modeling and observing outflows. In section §[results] we evaluate the effects of different observational parameters on bulk quantities. We discuss results and summarize conclusions in §[Conclusions].
Quaternion Based Metrics in Relativity
ABSTRACT By introducing a new form of metric tensor, the same derivation for the electromagnetic tensor Fμν from potentials Aμ leads to the dual space (Hodge dual) of the regular Fμν tensor. There are additional components in the i, j, k planes; however, if after the derivation only the real part is considered, a physically consistent electromagnetic theory is recovered with a relabelling of $$ fields to $$ fields and vice versa.
The Microbes We Eat
ABSTRACT Far more attention has been paid to the microbes in our feces than the microbes in our food. Research efforts dedicated to the microbes that we eat have historically been focused on a fairly narrow range of species, namely those which cause disease and those which are thought to confer some "probiotic" health benefit. Little is known about the effects of ingested microbial communities that are present in typical American diets, and even the basic questions of which microbes, how many of them, and how much they vary from diet to diet and meal to meal, have not been answered. We characterized the microbiota of three different dietary patterns in order to estimate: the average total amount of daily microbes ingested via food and beverages, and their composition in three daily meal plans representing three different dietary patterns. The three dietary patterns analyzed were: 1) the Average American (AMERICAN): focused on convenience foods, 2) USDA recommended (USDA): emphasizing fruits and vegetables, lean meat, dairy, and whole grains, and 3) Vegan (VEGAN): excluding all animal products. Meals were prepared in a home kitchen or purchased at restaurants and blended, followed by microbial analysis including aerobic, anaerobic, yeast and mold plate counts as well as 16S rRNA PCR survey analysis. Based on plate counts, the USDA meal plan had the highest total amount of microbes at \(1.3 X 10^9\) CFU per day, followed by the VEGAN meal plan and the AMERICAN meal plan at \(6 X 10^6 \)and \(1.4 X 10^6\) CFU per day respectively. There was no significant difference in diversity among the three dietary patterns. Individual meals clustered based on taxonomic composition independent of dietary pattern. For example, meals that were abundant in Lactic Acid Bacteria were from all three dietary patterns. Some taxonomic groups were correlated with the nutritional content of the meals. Predictive metagenome analysis using PICRUSt indicated differences in some functional KEGG categories across the three dietary patterns and for meals clustered based on whether they were raw or cooked. Further studies are needed to determine the impact of ingested microbes on the intestinal microbiota, the extent of variation across foods, meals and diets, and the extent to which dietary microbes may impact human health. The answers to these questions will reveal whether dietary microbial approaches beyond probiotics taken as supplements - _i.e._, ingested as foods - are important contributors to the composition, inter-individual variation, and function of our gut microbiota.
PRECISION ASTEROSEISMOLOGY OF THE WHITE DWARF GD 1212 USING A TWO-WHEEL CONTROLLED KE...
JJ Hermes
We present a preliminary analysis of the cool pulsating white dwarf GD1212, enabled by more than 11.5 days of space-based photometry obtained during an engineering test of a two-reaction wheel controlled _Kepler_ spacecraft. We detect at least 21 independent pulsation modes, ranging from 369.8 − 1220.8s, and at least 17 nonlinear combination frequencies of those independent pulsations. Our longest uninterrupted light curve, 9.0 days in length, evidences coherent difference frequencies at periods inaccessible from the ground, up to 14.5hr, the longest-period signals ever detected in a pulsating white dwarf. These results mark some of the first science to come from a two-wheel controlled _Kepler spacecraft_, proving the capability for unprecedented discoveries afforded by extending _Kepler_ observations to the ecliptic.
Making Materials Science and Engineering Data More Valuable Research Products
James A Warren
Both the global research community and federal governments are embracing a move toward more open sharing of the products of research. Historically, the primary product of research has been the peer-reviewed journal article for fundamental research and government technical report for applied research and engineering for government sponsored research. However, advances in information technology, new "open access" business models, and government policies are working to make publications and supporting materials much more accessible to the general public. These same drivers are obscuring the distinction between the data generated through the course of research and the associated publications. These developments have the potential to significantly enhance the value of both publications and the supporting digital research data, turning them into valuable assets that can be shared and reused by other researchers. The confluence of these shifts in the research landscape leads one to the conclusion that technical publications and their supporting research data must become bound together in a rational fashion. However, bringing these two research products together will require establishment of new policies and a supporting data infrastructure that have essentially no precedent in the materials community, and indeed are stressing many other fields of research. This document raises the key issues that must be addressed in developing these policies and infrastructure, and suggests a path forward in creating the solutions.
| CommonCrawl |
High Energy Physics - Experiment
arXiv:1502.05199 (hep-ex)
[Submitted on 18 Feb 2015 (v1), last revised 31 Mar 2015 (this version, v2)]
Title:Physics Potential of a Long Baseline Neutrino Oscillation Experiment Using J-PARC Neutrino Beam and Hyper-Kamiokande
Authors:Hyper-Kamiokande Proto-Collaboraion: K. Abe, H. Aihara, C. Andreopoulos, I. Anghel, A. Ariga, T. Ariga, R. Asfandiyarov, M. Askins, J. J. Back, P. Ballett, M. Barbi, G. J. Barker, G. Barr, F. Bay, P. Beltrame, V. Berardi, M. Bergevin, S. Berkman, T. Berry, S. Bhadra, F. d. M. Blaszczyk, A. Blondel, S. Bolognesi, S. B. Boyd, A. Bravar, C. Bronner, F. S. Cafagna, G. Carminati, S. L. Cartwright, M. G. Catanesi, K. Choi, J. H. Choi, G. Collazuol, G. Cowan, L. Cremonesi, G. Davies, G. De Rosa, C. Densham, J. Detwiler, D. Dewhurst, F. Di Lodovico, S. Di Luise, O. Drapier, S. Emery, A. Ereditato, P. Fernández, T. Feusels, A. Finch, M. Fitton, M. Friend, Y. Fujii, Y. Fukuda, D. Fukuda, V. Galymov, K. Ganezer, M. Gonin, P. Gumplinger, D. R. Hadley, L. Haegel, A. Haesler, Y. Haga, B. Hartfiel, M. Hartz, Y. Hayato, M. Hierholzer, J. Hill, A. Himmel, S. Hirota, S. Horiuchi, K. Huang, A. K. Ichikawa, T. Iijima, M. Ikeda, J. Imber, K. Inoue, J. Insler, R. A. Intonti, T. Irvine, T. Ishida, H. Ishino, M. Ishitsuka, Y. Itow, A. Izmaylov, B. Jamieson, H. I. Jang, M. Jiang, K. K. Joo, C. K. Jung, A. Kaboth, T. Kajita, J. Kameda, Y. Karadhzov, T. Katori, E. Kearns, M. Khabibullin, A. Khotjantsev, J. Y. Kim, S. B. Kim, Y. Kishimoto
, T. Kobayashi, M. Koga, A. Konaka, L. L. Kormos, A. Korzenev, Y. Koshio, W. R. Kropp, Y. Kudenko, T. Kutter, M. Kuze, L. Labarga, J. Lagoda, M. Laveder, M. Lawe, J. G. Learned, I. T. Lim, T. Lindner, A. Longhin, L. Ludovici, W. Ma, L. Magaletti, K. Mahn, M. Malek, C. Mariani, L. Marti, J. F. Martin, C. Martin, P. P. J. Martins, E. Mazzucato, N. McCauley, K. S. McFarland, C. McGrew, M. Mezzetto, H. Minakata, A. Minamino, S. Mine, O. Mineev, M. Miura, J. Monroe, T. Mori, S. Moriyama, T. Mueller, F. Muheim, M. Nakahata, K. Nakamura, T. Nakaya, S. Nakayama, M. Needham, T. Nicholls, M. Nirkko, Y. Nishimura, E. Noah, J. Nowak, H. Nunokawa, H. M. O'Keeffe, Y. Okajima, K. Okumura, S. M. Oser, E. O'Sullivan, T. Ovsiannikova, R. A. Owen, Y. Oyama, J. Pérez, M. Y. Pac, V. Palladino, J. L. Palomino, V. Paolone, D. Payne, O. Perevozchikov, J. D. Perkin, C. Pistillo, S. Playfer, M. Posiadala-Zezula, J.-M. Poutissou, B. Quilain, M. Quinto, E. Radicioni, P. N. Ratoff, M. Ravonel, M. A. Rayner, A. Redij, F. Retiere, C. Riccio, E. Richard, E. Rondio, H. J. Rose, M. Ross-Lonergan, C. Rott, S. D. Rountree, A. Rubbia, R. Sacco, M. Sakuda, M. C. Sanchez, E. Scantamburlo, K. Scholberg, M. Scott, Y. Seiya, T. Sekiguchi, H. Sekiya, A. Shaikhiev, I. Shimizu, M. Shiozawa, S. Short, G. Sinnis, M. B. Smy, J. Sobczyk, H. W. Sobel, T. Stewart, J. L. Stone, Y. Suda, Y. Suzuki, A. T. Suzuki, R. Svoboda, R. Tacik, A. Takeda, A. Taketa, Y. Takeuchi, H. A. Tanaka, H. K. M. Tanaka, H. Tanaka, R. Terri, L. F. Thompson, M. Thorpe, S. Tobayama, N. Tolich, T. Tomura, C. Touramanis, T. Tsukamoto, M. Tzanov, Y. Uchida, M. R. Vagins, G. Vasseur, R. B. Vogelaar, C. W. Walter, D. Wark, M. O. Wascko, A. Weber, R. Wendell, R. J. Wilkes, M. J. Wilking, J. R. Wilson, T. Xin, K. Yamamoto, C. Yanagisawa, T. Yano, S. Yen, N. Yershov, M. Yokoyama, M. Zito
et al. (149 additional authors not shown)
Abstract: Hyper-Kamiokande will be a next generation underground water Cherenkov detector with a total (fiducial) mass of 0.99 (0.56) million metric tons, approximately 20 (25) times larger than that of Super-Kamiokande. One of the main goals of Hyper-Kamiokande is the study of $CP$ asymmetry in the lepton sector using accelerator neutrino and anti-neutrino beams.
In this paper, the physics potential of a long baseline neutrino experiment using the Hyper-Kamiokande detector and a neutrino beam from the J-PARC proton synchrotron is presented. The analysis uses the framework and systematic uncertainties derived from the ongoing T2K experiment. With a total exposure of 7.5 MW $\times$ 10$^7$ sec integrated proton beam power (corresponding to $1.56\times10^{22}$ protons on target with a 30 GeV proton beam) to a $2.5$-degree off-axis neutrino beam, it is expected that the leptonic $CP$ phase $\delta_{CP}$ can be determined to better than 19 degrees for all possible values of $\delta_{CP}$, and $CP$ violation can be established with a statistical significance of more than $3\,\sigma$ ($5\,\sigma$) for $76\%$ ($58\%$) of the $\delta_{CP}$ parameter space. Using both $\nu_e$ appearance and $\nu_\mu$ disappearance data, the expected 1$\sigma$ uncertainty of $\sin^2\theta_{23}$ is 0.015(0.006) for $\sin^2\theta_{23}=0.5(0.45)$.
Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Prog. Theor. Exp. Phys. (2015) 053C02
DOI: 10.1093/ptep/ptv061
Cite as: arXiv:1502.05199 [hep-ex]
(or arXiv:1502.05199v2 [hep-ex] for this version)
From: Masashi Yokoyama
[v1] Wed, 18 Feb 2015 12:37:22 UTC (4,944 KB)
[v2] Tue, 31 Mar 2015 06:33:45 UTC (5,131 KB)
hep-ex | CommonCrawl |
\begin{document}
\title{Gromov-Witten theory with derived algebraic geometry} \author{Etienne Mann}
\address{Etienne Mann, Universit\'e d'Angers, Département de mathématiques Bâtiment I Faculté des Sciences 2 Boulevard Lavoisier F-49045 Angers cedex 01 France } \email{[email protected] }
\author{Marco Robalo}
\address{Marco Robalo, Sorbonne Université. Université Pierre et Marie Curie, Institut Mathématiques de Jussieu Paris Rive Gauche, CNRS, Case 247, 4, place Jussieu, 75252 Paris Cedex 05, France } \email{[email protected]}
\thanks{E.M. is supported by the grants of the Agence Nationale de la
  Recherche ``New symmetries on Gromov-Witten theories'' ANR-09-JCJC-0104-01 and ``SYmétrie
  miroir et SIngularités irrégulières provenant de la PHysique'' ANR-13-IS01-0001-01/02 and project
  ``CatAG'' ANR-17-CE40-0014}
\thanks{M. R. was supported by a Postdoctoral Fellowship of the Fondation Sciences Mathématiques de
  Paris and the ANR project ``CatAG'' ANR-17-CE40-0014}
\begin{abstract} In this survey we add two new results that are not in our paper \cite{2015arXiv150502964M}. Using the idea of brane actions discovered by To\"en, we construct a lax associative action of the operad of stable curves of genus zero on a smooth variety $X$ seen as an object in correspondences in derived stacks. This action encodes the Gromov-Witten theory of $X$ in purely geometrical terms.
\end{abstract}
\personal{ \begin{center} LIST OF TODOs TEST \end{center}
Solve PROP \ref{prop-compatiblePerfCoh}\\ 1) the fact these are maps of stacks and not derived schemes, which are not representable by derived schemes 2)Why proper quasi-smooth (why of finite presentation?) pullbacks preserve bounded coherent? Meaning, why are they of finite tor amplitude.\\
THM \ref{thmbranes}\\ 1) Re-check the proof specially the verification it is a cocartesian fibration\\
H-descent: I think the proof is correct because we are only describing the lax structure and for this we only need the structure sheaf. As the maps in the diagram are closed immersions, therefore proper, then they preserve bounded coherent.\\ 1)
}
\maketitle \tableofcontents
\section{Introduction} \label{sec:introduction}
This paper is a survey\footnote{We add two new results, Theorems \ref{thm:colim} and \ref{thm:orientation}.} of \cite{2015arXiv150502964M}. We explain, without technical details, the ideas of \cite{2015arXiv150502964M}, where we use derived algebraic geometry to redefine Gromov-Witten invariants and highlight the hidden operadic picture.
Gromov-Witten invariants were introduced by Kontsevich and Manin in algebraic geometry in \cite{KMgwqceg, MR1363062}. The foundations were then completed by Behrend, Fantechi and Manin in \cite{Behrend-Manin-stack-stable-mapGWI-1996}, \cite{MR1437495} and \cite{MR1431140}. In symplectic geometry, the definition is due to Y. Ruan and G. Tian in \cite{RTmqc}, \cite{MR1390655} and \cite{MR1483992}. Mathematicians developed several techniques to compute them: via a localization formula proved by Graber and Pandharipande in \cite{MR1666787}, via a degeneration formula proved by J. Li in \cite{MR1938113} and another one called quantum Lefschetz proved by Coates-Givental \cite{Givental-Coates-2007-QRR} and Tseng \cite{tseng_orbifold_2010}.
These invariants can be encoded using different mathematical structures: quantum products, cohomological field theories (Kontsevich-Manin in \cite{KMgwqceg}), Frobenius manifolds (Dubrovin in \cite{Dtft}), Lagrangian cones and Quantum $D$-modules (Givental \cite{MR2115767}), variations of non-commutative Hodge structures (Iritani \cite{Iritani-2009-Integral-structure-QH} and Kontsevich, Katzarkov and Pantev in \cite{Katzarkov-Pantev-Kontsevich-ncVHS}) and so on, and used to express different aspects of mirror symmetry. Another important aspect of the theory concerns the study of the functoriality of Gromov-Witten invariants via crepant resolutions or flop transitions in terms of these structures (see \cite{MR2234886}, \cite{MR2360646}, \cite{CCIT-wall-crossingI}, \cite{CCIT-computingGW}, \cite{Bryan-Graber-2009}, \cite{MR2683208}, \cite{2013arXiv1309.4438B}, \cite{2014arXiv1407.2571B}, \cite{2014arXiv1410.0024C}, etc).\\
We first recall the classical construction of these invariants. Let $X$ be a smooth projective variety (or orbifold). The basic ingredient to define GW-invariants is the moduli stack of stable maps to $X$, denoted by $\overline{\mathcal{M}}_{g,n}(X,\beta)$, with a fixed degree $\beta \in H_{2}(X,\mathbb{Z})$\footnote{The (co)homology groups in this paper are the singular
  ones.}. The evaluation at the marked points gives maps of stacks $\ev_i :\overline{\mathcal{M}}_{g,n}(X,\beta) \to X$, and forgetting the morphism and stabilising the curve gives a map $p:\overline{\mathcal{M}}_{g,n}(X,\beta) \to \overline{\mathcal{M}}_{g,n}$ (see Remark \ref{rem:2,morphisms}).
To construct the invariants, we integrate over ``the fundamental class'' of the moduli stack $\overline{\mathcal{M}}_{g,n}(X,\beta)$. For this integration to be possible, we need this moduli stack to be proper, which was proved by Behrend-Manin \cite{Behrend-Manin-stack-stable-mapGWI-1996}, and some form of smoothness. In general, the stack $\overline{\mathcal{M}}_{g,n}(X,\beta)$ is not smooth and has many components with different dimensions. Nevertheless, and thanks to a theorem of Kontsevich \cite{MR1363062}, it is quasi-smooth, in the sense that locally it looks like the intersection of two smooth sub-schemes inside an ambient smooth scheme. In genus zero, however, this stack is known to be smooth under some assumptions on the geometry of $X$, for instance when $X$ is a projective space or a Grassmannian, or more generally when $X$ is convex, \textit{i.e.}, if for any map $f:\mathbb{P}^1\to X$, the group $\mathrm{H}^1(\mathbb{P}^1, f^*(\mathrm{T}_X))$ vanishes. See \cite{MR1492534}.
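(A standard example, recalled here only for illustration: $\mathbb{P}^{n}$ is convex because its tangent bundle is globally generated, hence so is $f^{*}\mathrm{T}_{\mathbb{P}^{n}}$ for any $f:\mathbb{P}^{1}\to \mathbb{P}^{n}$; therefore it splits as
\begin{displaymath}
  f^{*}\mathrm{T}_{\mathbb{P}^{n}}\simeq \bigoplus_{i=1}^{n}\mathcal{O}_{\mathbb{P}^{1}}(a_{i}), \qquad a_{i}\geq 0,
\end{displaymath}
and $\mathrm{H}^{1}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(a))=0$ for $a\geq 0$, so $\mathrm{H}^{1}(\mathbb{P}^{1},f^{*}\mathrm{T}_{\mathbb{P}^{n}})=0$.)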
This quasi-smoothness has been used by Behrend-Fantechi to define in \cite{MR1437495} a ``virtual fundamental class'', denoted by $[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\vir}$, which is a cycle in the Chow ring of $\overline{\mathcal{M}}_{g,n}(X,\beta)$ that plays the role of the usual fundamental class.
One of the most important results about Gromov-Witten invariants is that they form a cohomological field theory, that is, there exists a family of morphisms \begin{align}
\label{eq:2}
  I_{g,n,\beta}^{X}: H^{*}(X)^{\otimes n}& \to H^{*}(\overline{\mathcal{M}}_{g,n}) \\ (\alpha_{1}\otimes\ldots \otimes \alpha_{n}) &\mapsto \mathrm{Stb}_{*}\left(\left[\overline{\mathcal{M}}_{g,n}(X,\beta)\right]^{\vir}\cap (\cup_i ev^{*}_i(\alpha_{i})) \nonumber \right) \end{align} that satisfy some properties. Another formulation of this result is that we have a morphism of operads between $\left(H_{*}(\overline{\mathcal{M}}_{g,n})\right)_{n\in \mathbb{N}}$ and the endomorphism operad $\End(H^{*}(X))$ (see Corollary \ref{cor:alg,coho}). Yet a more concise way to say this is that $H^{*}(X)$ carries a structure of algebra over the operad $H_{*}(\overline{\mathcal{M}}_{g,n})$.
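As a quick illustration of these classes (a standard special case, recalled here for orientation and not taken from \cite{2015arXiv150502964M}): for $g=0$, $\beta=0$ and $n\geq 3$, one has $\overline{\mathcal{M}}_{0,n}(X,0)\simeq \overline{\mathcal{M}}_{0,n}\times X$, the virtual class is the usual fundamental class, each evaluation map is the projection to $X$ and $\mathrm{Stb}$ is the projection to $\overline{\mathcal{M}}_{0,n}$, so the projection formula gives
\begin{displaymath}
  I_{0,n,0}^{X}(\alpha_{1}\otimes \cdots \otimes \alpha_{n})=\left(\int_{X}\alpha_{1}\cup \cdots \cup \alpha_{n}\right)\cdot 1 \in H^{*}(\overline{\mathcal{M}}_{0,n}).
\end{displaymath}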
The main result of \cite{2015arXiv150502964M} is that it is possible to remove (co)homology from the previous statement. More precisely, it is the following. \begin{thm}[See Theorem \ref{thm,main}]
Let $X$ be a smooth projective variety.
The diagrams \begin{align*}
\xymatrix{&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta) \ar[ld]_{p,e_{1}, \ldots ,e_{n}}
\ar[rd]^{p,e_{n+1}} & \\ \overline{\mathcal{M}}_{0,n+1}\times X^{n} &&
\overline{\mathcal{M}}_{0,n+1}\times X } \end{align*}
give a family of morphisms
\begin{align*}
\varphi_{n}:\overline{\mathcal{M}}_{0,n+1} \to \underline{\End}^{\corr}(X)[n]:=\underline{\Hom}^{\corr}(X^{n},X)
\end{align*} that forms a lax morphism of $\infty$-operads in the category of derived stacks. \end{thm}
We restrict our work to genus $0$ because some fundamental aspects of the theory of $\infty$-modular operads, which would be needed for higher genus, are still missing.
In this survey we omit the technical details and focus on the ideas behind the theorem. Nevertheless, we add some new statements with respect to \cite{2015arXiv150502964M}, namely Theorem \ref{thm:colim} and Theorem \ref{thm:orientation}, with proofs given in the appendices.
\textbf{Acknowledgements:} We want to thank Bertrand To\"en for the organisation of the \'Etat de la Recherche and also for some ideas to prove Theorem \ref{thm:colim}. The first author thanks Daniel Naie, who explained how to make these figures.
\section{Moduli space of stable maps, cohomological field theory and operads} \label{sec:stable-maps}
In this section, we recall some notions and ideas related to Gromov-Witten theory. Most of them are in the book of Cox-Katz \cite{Cox-Katz-Mirror-Symmetry}. The mathematical story started with the paper of Kontsevich \cite{MR1363062} (see also Kontsevich-Manin \cite{MR1369420}) and was followed by many more interesting questions that we will skip here.
\subsection{Moduli space of stable maps} \label{sec:moduli-space-stable-1}
Let $X$ be a smooth projective variety over $\mathbb{C}$. Let $\beta\in H_{2}(X,\mathbb{Z})$. Let $g,n \in \mathbb{N}$. Denote by $(\mathrm{Aff-sch})$ the category of affine schemes and by $(\mathrm{Grps})$ the category of groupoids. We define the moduli space of stable maps by the following functor:
\begin{align*}
  \overline{\mathcal{M}}_{g,n}(X,\beta) : (\mathrm{Aff-sch})^{op}\longrightarrow & (\mathrm{Grps})
\end{align*}
where $\overline{\mathcal{M}}_{g,n}(X,\beta)(S)$ is the following groupoid. Objects are flat proper morphisms $\pi:\mathcal{C}\to S$ together with $n$ sections $\sigma_{i}:S \to \mathcal{C}$ and a morphism $f:\mathcal{C}\to X$ such that for any geometric point $s \in S$, we have
\begin{enumerate}
\item the fiber $\mathcal{C}_{s}$ is a connected nodal curve of genus $g$ with $n$ distinct marked
  points which live on the smooth locus of $\mathcal{C}_{s}$;
\item $f_{s}:\mathcal{C}_{s}\to X$ is of degree $\beta$, meaning $f_{*}[\mathcal{C}_{s}]=\beta$;
\item\label{item:stab} the automorphism group $\Aut(\mathcal{C},\underline{\sigma},f)$ is finite,
  where we denote $\underline{\sigma}=(\sigma_{1}, \ldots ,\sigma_{n})$. This condition is called
  the \textit{stability} condition.
\end{enumerate}
For any affine scheme $S$, the morphisms in the groupoid $\overline{\mathcal{M}}_{g,n}(X,\beta)(S)$ are the isomorphisms $\varphi:\mathcal{C}\to \mathcal{C'}$ such that the following diagram is commutative: \begin{displaymath}
\xymatrix{\mathcal{C} \ar[rdd]_-{\pi} \ar[rr]^-{\varphi}_{\sim} \ar[rd]^-{f} && \mathcal{C'} \ar[ldd]^-{\pi'}
\ar[ld]_-{f'} \\ & X& \\ &S \ar@/^16pt/[uul]^{\sigma_{i}} \ar@/_16pt/[uur]_{\sigma'_{i}}&} \end{displaymath}
Let $\varphi:S\to S'$ be a morphism of affine schemes. Let $(\mathcal{C'}\to S',\underline{\sigma}',f')$ be an object in $\overline{\mathcal{M}}_{g,n}(X,\beta)(S')$; then the pullback family defined by the diagram below satisfies the three conditions above, that is, it is in $\overline{\mathcal{M}}_{g,n}(X,\beta)(S)$:
\begin{displaymath}
  \xymatrix{\mathcal{C'}\times_{S'}S \ar[dd] \ar[rr]^{\widetilde{\varphi}}
    \ar[rd]^-{f'\circ \widetilde{\varphi}} && \mathcal{C'} \ar[dd]^-{\pi'}
    \ar[ld]_-{f'} \\ & X& \\ S \ar@/^16pt/[uu]^{\varphi^{*}\sigma_{i}'}\ar[rr]^{{\varphi}}&& S' \ar@/_16pt/[uu]_{\sigma'_{i}}}
\end{displaymath}
Notice that conditions $(1)$, $(2)$ and $(3)$ are stable under pull-back.
\begin{remark}
  Let us explain the stability condition \eqref{item:stab} in more concrete terms (see \cite[\S 7.1.1
p. 169]{Cox-Katz-Mirror-Symmetry}). Denote by $\mathcal{C}_{s,i}$ the irreducible components of
$\mathcal{C}_{s}$ and by $f_{s,i}:\mathcal{C}_{s,i}\to X$ the restrictions of the morphism. Denote
by $\beta_{i}=(f_{s,i})_{*}[\mathcal{C}_{s,i}]\in H_{2}(X,\mathbb{Z})$ the degree of $f_{s}$ on
each irreducible component $\mathcal{C}_{s,i}$. On the irreducible component $\mathcal{C}_{s,i}$,
a point is called \textit{special} if it is a nodal point or a marked point. The stability
  condition \eqref{item:stab} is equivalent to the following condition on each irreducible component:
  if $\beta_{i}=0$ and the genus of $\mathcal{C}_{s,i}$ is $0$ (resp. $1$), then
  $\mathcal{C}_{s,i}$ should have at least $3$ (resp. $1$) special points. So, for example, if
  $\beta_{i}\neq 0$ or the genus is at least $2$, there is no condition on $\mathcal{C}_{s,i}$. \end{remark}
In this text, we will never use the coarse moduli space of $\overline{\mathcal{M}}_{g,n}(X,\beta)$, so all the morphisms that we will use are morphisms of stacks.
\begin{ex}\label{sec:moduli-space-stable}
Let us give an example in genus $0$ (see Figure \ref{fig:stab,maps}). Consider the following
stable map in $\overline{\mathcal{M}}_{0,5}(X,\beta)$. All the $C_{i}$ are isomorphic to
$\mathbb{P}^{1}$. The stability condition on this stable map imposes only that $\beta_{2}\neq 0$
because $C_{2}$ has only $2$ special points.
\begin{figure}
\caption{Example of a stable map}
\label{fig:stab,maps}
\end{figure}
\end{ex}
In particular, the moduli space of stable curves, denoted by $\overline{\mathcal{M}}_{g,n}$, is $\overline{\mathcal{M}}_{g,n}(\pt,\beta=0)$. Notice that for $(g,n)\in\{(0,0),(0,1),(0,2),(1,0)\}$ the moduli space $\overline{\mathcal{M}}_{g,n}$ is empty.
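(For concreteness: the first non-empty examples are $\overline{\mathcal{M}}_{0,3}\simeq \pt$ and $\overline{\mathcal{M}}_{0,4}\simeq \mathbb{P}^{1}$, where a genus $0$ curve with $4$ marked points is recorded by the cross-ratio of its points and the three boundary points of $\mathbb{P}^{1}$ correspond to the three ways the marked points can come together in pairs on a nodal curve; this is consistent with the dimension formula $3g-3+n$ recalled in the theorem below.)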
\begin{remark}\label{rem:2,morphisms}
There are two kinds of natural morphisms of stacks from the moduli space of stable maps.
\begin{enumerate}
\item For any $i\in\{1, \ldots ,n\}$, the evaluation morphism
$e_{i}:\overline{\mathcal{M}}_{g,n}(X,\beta) \to X$ is the evaluation at the $i$-th marked point
i.e., it sends the geometric point $(C,x_{1}, \ldots ,x_{n},f)$ to $f(x_{i})$.
\item When $\overline{\mathcal{M}}_{g,n}$ is not empty, we define the morphism of stacks
$p:\overline{\mathcal{M}}_{g,n}(X,\beta)\to \overline{\mathcal{M}}_{g,n}$ that forgets the map
  and stabilises the curve, that is, it sends $(C,x_{1}, \ldots ,x_{n},f)$ to
  $(C^{\stab},x_{1}, \ldots ,x_{n})$, where $C^{\stab}$ is obtained from $C$ by contracting all the
  unstable components (see \cite{Knudsen-moduli-stable-curves-II-1983} for the techniques). For the
  stable map of Example \ref{sec:moduli-space-stable}, forgetting the map $f$ makes the irreducible
  component $C_{2}$ unstable (because it has only $2$ special points). So its image under $p$
is the following stable curve (see Figure \ref{fig:stab}).
\begin{figure}
\caption{The stabilisation of the stable maps of Figure \ref{fig:stab,maps}}
\label{fig:stab}
\end{figure} \end{enumerate} \end{remark}
\begin{thm}[Deligne-Mumford \cite{MR0262240}, Kontsevich-Manin \cite{MR1369420}, Behrend-Fantechi
\cite{MR1437495}]
\
\begin{enumerate}
\item The moduli space $\overline{\mathcal{M}}_{g,n}$ is a proper smooth Deligne-Mumford stack of
dimension $3g-3+n$.
\item The moduli space $\overline{\mathcal{M}}_{g,n}(X,\beta)$ is a proper (not smooth in general)
Deligne-Mumford stack. It has an expected dimension (see remark below for the meaning) which is
\begin{displaymath}
\int_{\beta}c_{1}(TX) +(1-g)\dim X +3g-3+n
\end{displaymath}
\item There exists a class, denoted by $[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\vir}$, in the
Chow ring $A_{*}(\overline{\mathcal{M}}_{g,n}(X,\beta))$ of degree equal to the expected
dimension of $\overline{\mathcal{M}}_{g,n}(X,\beta)$ which satisfies some functorial properties.
\end{enumerate} \end{thm}
\begin{remark}
\begin{enumerate}
\item To use standard tools of intersection on the moduli space of stable maps we need this moduli
space to be proper and smooth. The smoothness would give us the existence of a well-defined
fundamental class. Nevertheless, the moduli space of stable maps
$\overline{\mathcal{M}}_{g,n}(X,\beta)$, which is not smooth in general, could have different
irreducible components of different dimensions with some very bad singularities. So the problem
is to define an ersatz of a fundamental class. This was done by Behrend-Fantechi in
\cite{MR1437495} where they defined the \textit{virtual fundamental class} (see
\S~\ref{sec:comp-theor-two}).
\item In some very specific cases the moduli space of maps is smooth: for example, only in genus
  $0$ for homogeneous varieties like $\mathbb{P}^{n}$, Grassmannians or flag varieties. In these
cases, the virtual dimension is the actual dimension and the virtual fundamental class is the
fundamental class.
\item The computation of the expected dimension comes from deformation theory. Namely, a
  deformation of a stable map turns out to be a deformation of the underlying curve plus a
  deformation of the map. As $\overline{\mathcal{M}}_{g,n}$ is smooth, the deformation functor of
  the curve has no obstruction and the tangent space has the dimension of
  $\overline{\mathcal{M}}_{g,n}$, which is $3g-3+n$. For the maps, the deformation functor has a
  non-zero obstruction. More precisely, at a point
$(C,\underline{x},f) \in \overline{\mathcal{M}}_{g,n}(X,\beta)$, the tangent space is
  $H^{0}(C,f^{*}TX)$ and the obstruction space is $H^{1}(C,f^{*}TX)$. Doing this in families, one gets two
  quasi-coherent sheaves that are not vector bundles in general. Nevertheless, the Euler characteristic can be
computed via the Hirzebruch-Riemann-Roch theorem:
\begin{displaymath}
\chi(C,f^{*}TX)=\dim H^{0}(C,f^{*}TX)-\dim H^{1}(C,f^{*}TX)=\int_{C}\Td(TC)\ch(f^{*}TX)
\end{displaymath}
    is constant and equals $\int_{\beta}c_{1}(TX) +(1-g)\dim X$.
\end{enumerate} \end{remark}
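\begin{ex}
  (A classical illustration of the dimension formula, added here for concreteness; it is not needed in the sequel.) Take $X=\mathbb{P}^{2}$, $g=0$ and $\beta=d[\mathrm{line}]$. Since $c_{1}(T_{\mathbb{P}^{2}})=3H$, the expected dimension of $\overline{\mathcal{M}}_{0,n}(\mathbb{P}^{2},d)$ is
  \begin{displaymath}
    \int_{\beta}c_{1}(T_{\mathbb{P}^{2}})+\dim \mathbb{P}^{2}-3+n=3d-1+n.
  \end{displaymath}
  In this convex case the moduli space is smooth of the expected dimension. Each condition of passing through a general point of $\mathbb{P}^{2}$ has codimension $2$, so for $n=3d-1$ marked points the corresponding Gromov-Witten invariants are numbers: they recover the classical counts $N_{d}$ of rational plane curves of degree $d$ through $3d-1$ general points ($N_{1}=N_{2}=1$, $N_{3}=12$, \ldots).
\end{ex}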
We will now introduce another moduli space, which was introduced by Costello \cite{MR2247968} and which will play a crucial role later. Let $\NE(X)$ be the subset of $H_{2}(X,\mathbb{Z})$ of classes given by the image of a curve, i.e., the subset of all $f_{*}[C]$ for any morphism $f:C\to X$. Let us define $\mathfrak{M}_{g,n,\beta}$ as the moduli space of nodal curves of genus $g$ with $n$ marked smooth points, where each irreducible component $C_{i}$ carries a label $\beta_{i}\in \NE(X)$ (notice that this $\beta_{i}$ is not the degree of a map, because there is no map $C\to X$ here; it is just a label. At the end of the day it will be related to the degree of a map, but not here) such that
\begin{itemize}
\item $\sum_{i}\beta_{i}=\beta$;
\item if $\beta_{i}=0$ then $C_{i}$ is stable, i.e., if $C_{i}$ is of genus $0$ then it has at least
  $3$ special points, and if the genus is $1$ then it has at least $1$ special point.
\end{itemize}
We have a natural morphism of stacks $p:\mathfrak{M}_{g,n+1,\beta}\to \mathfrak{M}_{g,n,\beta}$ which forgets the $(n+1)-th$ marked point and contracts the irreducible components that are not stable.
\begin{thm}[\cite{MR2247968}]\label{thm,Costello}
\begin{enumerate}
\item The stack $\mathfrak{M}_{g,n,\beta}$ is a smooth Artin stack.
\item The morphism $p:\mathfrak{M}_{g,n+1,\beta}\to \mathfrak{M}_{g,n,\beta}$ is the universal
curve.
\end{enumerate} \end{thm}
\begin{remark}
\begin{enumerate}
\item Notice that forgetting the last marked point and contracting the unstable component gives a
morphism $\overline{\mathcal{M}}_{g,n+1} \to \overline{\mathcal{M}}_{g,n}$ which is also the
universal curve (See \cite{Knudsen-moduli-stable-curves-II-1983}).
\item The Artin stack of prestable\footnote{where we do not ask any stability condition on
    irreducible components, see \cite[p.179]{Cox-Katz-Mirror-Symmetry}.} curves, denoted by
  $\mathfrak{M}^{\pres}_{g,n}$, also has a universal curve, which is not
  $\mathfrak{M}^{\pres}_{g,n+1}$. As there is no stability condition on the moduli space of
  prestable curves, forgetting a marked point never contracts a rational component. So the forgetful
  morphism $\mathfrak{M}^{\pres}_{g,n+1}\to \mathfrak{M}^{\pres}_{g,n}$ is not the universal
  curve.
\end{enumerate} \end{remark}
Let us explain the meaning of being the universal curve over $\mathfrak{M}_{g,n,\beta}$. Let $C$ be a curve of genus $g$ with $4$ marked points and a label $\beta$. This is equivalent, by definition, to a morphism $\pt \to \mathfrak{M}_{g,4,\beta}$. Being the universal curve means that $C=\mathfrak{M}_{g,5,\beta}\times_{\mathfrak{M}_{g,4,\beta}}\pt$, that is, the following diagram
\begin{displaymath}
  \xymatrix{ C \ar[r]^{\varphi} \ar[d]&\mathfrak{M}_{g,5,\beta}\ar[d]\\ \pt \ar[r]&\mathfrak{M}_{g,4,\beta}}
\end{displaymath}
is cartesian. Let us explain the morphism $\varphi$. To a smooth point $y\in C\setminus \{x_{1}, \ldots ,x_{4}\}$, $\varphi(y)$ is the curve $C$ where $y$ is now $x_{5}$. If $y=x_{i}$, then $\varphi(y)$ is the curve $C$ where we attach a $\mathbb{P}^{1}$ at $x_{i}$ (say at $0$ of this $\mathbb{P}^{1}$) with label $0$, and we mark $x_{i}$ and $x_{5}$ at $1$ and $\infty$. If $y$ is a node which is the intersection of $C_{i}$ and $C_{j}$, then we replace the node by a $\mathbb{P}^{1}$ with label $0$ which meets $C_{i}$ at $0$ and $C_{j}$ at $\infty$, and we mark the point $1$ of this $\mathbb{P}^{1}$ as $x_{5}$.
Here is a picture that we hope makes this clearer (see Figure \ref{fig:universal}). Forgetting the last point makes the component $(\mathbb{P}^{1},\beta=0)$ unstable, so one contracts it and gets back $C$.
\begin{figure}
\caption{Universal curve}
\label{fig:universal}
\end{figure}
\subsection{Gromov-Witten classes and cohomological field theory} \label{sec:grom-witt-class}
We first define the Gromov-Witten classes. Let $\alpha_{1}, \ldots ,\alpha_{n} \in H^{*}(X)$. Let $\beta\in H_{2}(X,\mathbb{Z})$. We define the following morphism \begin{align*}
\varphi_{g,n,\beta}:H^{*}(X)\times \cdots \times H^{*}(X)& \longrightarrow
H^{*}(\overline{\mathcal{M}}_{g,n}) \\ (\alpha_{1}, \ldots ,\alpha_{n})& \longmapsto
p_{*}\left(\prod_{i=1}^{n}e_{i}^{*}\alpha_{i}\cap [\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\vir}\right) \end{align*}
\begin{thm}[Kontsevich-Manin \cite{MR1369420}] All these maps $\{\varphi_{g,n,\beta}\}_{g,n\in\mathbb{N},\beta\in H_{2}(X,\mathbb{Z})}$ together form a cohomological field theory. \end{thm}
\begin{remark}
\begin{enumerate}
\item We refer to \cite{MR1369420} for a complete
definition of a cohomological field theory.
\item Unwinding the definition, one of the main axioms of a cohomological field theory is the so-called splitting property. Let
  $g_{1},g_{2},n_{1},n_{2} \in \mathbb{N}$. Set $g=g_{1}+g_{2}$ and $n= n_{1}+n_{2}$. Consider
the gluing morphism of stacks
\begin{align}\label{eq:gluing,moprh}
g: \overline{\mathcal{M}}_{g_{1},n_{1}+1}\times \overline{\mathcal{M}}_{g_{2},n_{2}+1} &\to
\overline{\mathcal{M}}_{g,n} \\ (C_{1},C_{2}) & \longmapsto C_{1}\circ C_{2} \nonumber
\end{align}
  that identifies the $(n_{2}+1)$-th marked point of $C_{2}$ with the first marked point of
  $C_{1}$. Notice that the gluing morphism above is given by a pushout. More
  precisely, let $({C}_{1}\to S,\underline{\sigma})$ be in
  $\overline{\mathcal{M}}_{g_{1},n_{1}+1}(S)$ and $(C_{2}\to S,\underline{\sigma})$ be in
  $\overline{\mathcal{M}}_{g_{2},n_{2}+1}(S)$; then $ C_{1}\circ C_{2}$ is the
  pushout\footnote{Notice that pushouts do not exist for arbitrary morphisms in the category
    of schemes, but pushouts along closed immersions do exist.} $C_{1}\coprod_{S}C_{2}$ along
  the two closed immersions given by the markings $\sigma_{1}:S\to C_{1}$ and $\sigma_{n_{2}+1}:S\to
  C_{2}$.
This corresponds to the following picture
\begin{figure}
\caption{Gluing curves: the output of $C_{2}$, that is $x_{3}$, with the first input of $C_{1}$}
\label{fig:gluing}
\end{figure}
The splitting formula is the following
\begin{align}\label{eq:split}
g^{*}\varphi_{g,n,\beta}(\alpha_{1}, \ldots ,\alpha_{n})=\sum_{\stackrel{g_{1}+g_{2}=g}{\beta_{1}+\beta_{2}=\beta}}\sum_{a=0}^{s}\varphi_{g_{1},n_{1}+1,\beta_{1}}(\alpha_{1}, \ldots ,\alpha_{n_{1}},T_{a})\varphi_{g_{2},n_{2}+1,\beta_{2}}(T^{a},\alpha_{n_{1}+1}, \ldots ,\alpha_{n})
\end{align} where $(T_{a})_{a\in\{0, \ldots ,s\}}$ is a basis of $H^{*}(X)$ and $(T^{a})$ is its Poincaré dual basis.
\end{enumerate} Beyond this formula, the idea is that we can control the behaviour of the virtual fundamental class when we glue curves. We will see this again later. \end{remark}
Restricting to genus $0$, we can reformulate the equality \eqref{eq:split} as the following statement.
\begin{cor}\label{cor:alg,coho}
  We have a morphism of operads in vector spaces
  \begin{align*}
    \psi_{n,\beta}: H_{*}(\overline{\mathcal{M}}_{0,n+1}) \to \End(H^{*}(X))[n]:=\Hom(H^{*}(X)^{\otimes n}, H^{*}(X))
  \end{align*} given by \begin{align*}
    \psi_{n,\beta}(\gamma)(\alpha_{1}, \ldots
    ,\alpha_{n})=(e_{n+1})_{*}\left(p^{*}\gamma\cup \prod_{i=1}^{n}e_{i}^{*}\alpha_{i}\cap [\overline{\mathcal{M}}_{0,n+1}(X,\beta)]^{\vir}\right) \end{align*} \end{cor} Another way of expressing exactly the same statement is to say that the cohomology $H^{*}(X)$ is an $\{H_{*}(\overline{\mathcal{M}}_{0,n+1})\}_{n\geq 2}$-algebra. The goal of this survey is to explain how to remove the (co)homology from this corollary, doing everything at the geometric level.
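For instance, since $\overline{\mathcal{M}}_{0,3}$ is a point, the map $\psi_{2,\beta}$ evaluated on its fundamental class is the operation
\begin{displaymath}
  (\alpha_{1},\alpha_{2})\longmapsto (e_{3})_{*}\left(e_{1}^{*}\alpha_{1}\cup e_{2}^{*}\alpha_{2}\cap [\overline{\mathcal{M}}_{0,3}(X,\beta)]^{\vir}\right),
\end{displaymath}
which is precisely the coefficient of $Q^{\beta}$ in the quantum product recalled in \eqref{eq:21} below.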
\subsection{Review of operads}
\label{sec:recall-operads}
We add this section for completeness, as operads are not so well known to algebraic geometers\footnote{The first author did not know this notion before the working seminar in
  Montpellier where these ideas were first discussed.}.
An operad is the following data:
\begin{enumerate}
\item A family of objects in a category (vector spaces, schemes or Deligne-Mumford stacks) $\mathcal{O}(n)$ for all $n\in \mathbb{N}$. The example that one should have in mind for this note is $\mathcal{O}(n)=\overline{\mathcal{M}}_{0,n+1}$. We should think of $\mathcal{O}(n)$ as a collection of operations, each with $n$ inputs and one output. In the case of $\overline{\mathcal{M}}_{0,n+1}$, the marked points $x_{1}, \ldots ,x_{n}$ can be thought of as the inputs and the last marked point, $x_{n+1}$, as the output.
\item A collection of composition maps, plugging the output of an operation of $\mathcal{O}(b)$ into the $i$-th input of an operation of $\mathcal{O}(a)$: for any $a,b \in \mathbb{N}$ and any $i\in\{1, \ldots ,a\}$, we have \begin{align}\label{eq:5} \circ_{i} : \mathcal{O}(a)\times \mathcal{O}(b) &\to \mathcal{O}(a+b-1) \end{align} satisfying some relations, like associativity of the compositions.
\end{enumerate}
\begin{example}We give three examples of operads that we will use in the next sections.
\begin{enumerate}
\item The example $\mathcal{O}(n)=\overline{\mathcal{M}}_{0,n+1}$ is an operad in Deligne-Mumford stacks where the
  composition $C_{1}\circ_{i}C_{2}$ is obtained by gluing the last marked point of $C_{2}$ to the $i$-th marked point of $C_{1}$ (see \eqref{eq:gluing,moprh} and Figure \ref{fig:gluing} for an example of $\circ_{1}$ with stable curves). Notice that here $\mathcal{O}(0)$ and $\mathcal{O}(1)$ are empty. A standard way of completing this is to put $\mathcal{O}(0)=\mathcal{O}(1)=\pt$ so that $\mathcal{O}(1)$ is the unit. \item Another example of operad that we will use is $\mathcal{O}_{\beta}(n)=\mathfrak{M}_{0,n+1,\beta}$. This is a
  graded operad, that is, in the composition \eqref{eq:5} we add the gradings:
\begin{displaymath}
\circ_{i} : \mathcal{O}_{\beta}(a)\times \mathcal{O}_{\beta'}(b) \to \mathcal{O}_{\beta+\beta'}(a+b-1)
\end{displaymath} The composition morphism for this operad is given by gluing the curves as in the previous example. \item Let $V$ be a vector space. Put $\mathcal{O}(n)=\End(V)[n]:=\Hom(V\times\cdots\times V,V)$. This is
called the endomorphism operad in vector spaces. The composition is given by
\begin{displaymath}
(f \circ_{i} g)(v_{1}, \ldots ,v_{a+b-1})=f(v_{1}, \ldots ,v_{i-1},g(v_{i}, \ldots ,v_{i+b-1}),v_{i+b}, \ldots ,v_{a+b-1})
\end{displaymath}
\end{enumerate} \end{example}
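To make the relations satisfied by the $\circ_{i}$ more concrete, here is a small computation in the endomorphism operad (a standard observation, not specific to this survey). For a bilinear map $\mu\in\End(V)[2]$, the two compositions of $\mu$ with itself are
\begin{displaymath}
  (\mu \circ_{1} \mu)(v_{1},v_{2},v_{3})=\mu(\mu(v_{1},v_{2}),v_{3}), \qquad (\mu \circ_{2} \mu)(v_{1},v_{2},v_{3})=\mu(v_{1},\mu(v_{2},v_{3})),
\end{displaymath}
so the equality $\mu\circ_{1}\mu=\mu\circ_{2}\mu$ in $\End(V)[3]$ is exactly the associativity of the product $\mu$. This is the prototype of the relations hidden behind the phrase ``associativity of the compositions'' above.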
Let $\mathcal{O}:=\{\mathcal{O}(n)\}_{n\in\mathbb{N}}$ and $\mathcal{E}:=\{\mathcal{E}(n)\}_{n\in\mathbb{N}}$ be two operads. A morphism of operads $f:\mathcal{O}\to\mathcal{E}$ is a family of morphisms $f_{n}:\mathcal{O}(n)\to \mathcal{E}(n)$ such that the following diagram commutes \begin{align}\label{diag:morph,operad}
\xymatrix{\mathcal{O}(a)\times \mathcal{O}(b)\ar[r]^{f_{a},f_{b}}\ar[d]_{\circ_{i}}&\mathcal{E}(a)\times \mathcal{E}(b) \ar[d]^{\circ_{i}}\\ \mathcal{O}(a+b-1) \ar[r]^{f_{a+b-1}}& \mathcal{E}(a+b-1)} \end{align}
\section{Lax algebra structure on $X$} \label{sec:lax-algebra-struct}
In Corollary \ref{cor:alg,coho}, we have a collection of morphisms $H_{*}(\overline{\mathcal{M}}_{0,n+1}) \to \End(H^{*}(X))[n]$ that form a morphism of operads. The idea is to remove the (co)homology from this statement, that is, to construct in a purely geometric way a collection of morphisms $\overline{\mathcal{M}}_{0,n+1} \to \End(X)[n]$ in an appropriate category, and then to see whether these morphisms form a morphism of operads. The correct category is the $(\infty,1)$-category of derived stacks and the morphism is only a lax morphism of $\infty$-operads (see Theorem \ref{thm,main}).
\subsection{Main result} \label{sec:main-result}
Denote by $\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta)$ the derived enhancement of $\overline{\mathcal{M}}_{0,n+1}(X,\beta)$ (see subsection \ref{sec:deriv,enhanc,under}). From the two natural morphisms of Remark \ref{rem:2,morphisms}, we get the following diagram \begin{align}\label{key,diag}
\xymatrix{&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta) \ar[ld]_{p,e_{1}, \ldots ,e_{n}}
\ar[rd]^{p,e_{n+1}} & \\ \overline{\mathcal{M}}_{0,n+1}\times X^{n} &&
\overline{\mathcal{M}}_{0,n+1}\times X } \end{align} We prefer to state our theorem and then give explanations about it. \begin{thm}\label{thm,main}
  Let $X$ be a smooth projective variety. The diagram \eqref{key,diag} gives a family of morphisms
\begin{align*}
\varphi_{n}:\overline{\mathcal{M}}_{0,n+1} \to \underline{\End}^{\corr}(X)[n]:=\underline{\Hom}^{\corr}(X^{n},X)
\end{align*} that forms a lax morphism of $\infty$-operads in the category of derived stacks. \end{thm}
\begin{remark} In more conceptual terms, $X$ is a lax $\{\overline{\mathcal{M}}_{0,n+1}\}_{n}$-algebra in the
  category of correspondences in derived stacks. \end{remark}
In the next sections, we will explain the contents of this theorem, namely \begin{itemize} \item In \S \ref{sec:categ,corr}, we define the notion of correspondences in a category. \item In \S \ref{sec:deriv,enhanc,under}, we define the natural derived enhancement of the moduli space of
  stable maps $\overline{\mathcal{M}}_{g,n}(X,\beta)$, and in \S \ref{sec:definition,End}, we explain
  the underlined notation $\underline{\Hom}^{\corr}(X^{n},X)$. \item In \S \ref{sec:lax,morphism}, we explain what a lax morphism between $\infty$-operads is. \item The notion of $\infty$-operads is a bit delicate; it is explained in \S \ref{thm:brane}. \end{itemize}
\subsection{Category of correspondances} \label{sec:categ,corr}
Let $\mathbf{dSt}_{\mathbf{C}}$ be the $\infty$-category of derived stacks. We denote by $\mathbf{dSt}_{\mathbf{C}}^{\corr}$ the $(\infty,2)$-category of correspondences in derived stacks, which is defined informally as follows (see \S 10 in \cite{1212.3563}); for a formal definition, we refer to the notion of span on the nLab website. \begin{enumerate} \item Objects of $\mathbf{dSt}_{\mathbf{C}}^{\corr}$ are objects of $\mathbf{dSt}_{\mathbf{C}}$. \item A $1$-morphism of $\mathbf{dSt}_{\mathbf{C}}^{\corr}$ between $X$ and $Y$, denoted by $X\dasharrow Y$, is a diagram
\begin{displaymath}
\xymatrix{&U\ar[ld]_{g}\ar[rd]^{f}&\\ X&&Y}
\end{displaymath} There is no condition on $f$ or $g$. The composition is given by fiber product
\begin{displaymath}
\xymatrix{&&U\times_{Y}V \ar[rd]\ar[ld]&&\\ &U\ar[ld]\ar[rd]&&V \ar[rd]\ar[dl]&\\ X&&Y&&Z}
  \end{displaymath} Notice that a correspondence from $X$ to $Y$ can also be read as a correspondence from $Y$ to $X$, but composing the two does not give the identity, which is the correspondence \begin{displaymath}
    \xymatrix{&X\ar[ld]_{\Id}\ar[rd]^{\Id}&\\ X&&X}
  \end{displaymath} A morphism of schemes $f:X\to Y$ induces a morphism $X\dasharrow Y$ in correspondences, given by $\Id_{X}:X\to X$ and $f:X\to Y$. This morphism $X\dasharrow Y$ is an isomorphism if and only if $X=X\times_{Y}X$, i.e., $f$ is a monomorphism. \item The $2$-morphisms are not necessarily isomorphisms; they are morphisms $\alpha: U\to V$ making the following
  diagram commute.
\begin{displaymath}
\xymatrix{&U\ar[dd]^{\alpha}\ar[ld]\ar[rd]&\\ X&&Y \\ &V\ar[ru]\ar[ul]&}
\end{displaymath} \end{enumerate}
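As a sanity check on this composition law (a standard fact about spans, not specific to derived stacks): if $f:X\to Y$ and $g:Y\to Z$ are morphisms of schemes, the composite of the induced correspondences $X\dasharrow Y$ and $Y\dasharrow Z$ is computed by the fiber product $X\times_{Y}Y\simeq X$, so it is the correspondence with legs $\Id_{X}:X\to X$ and $g\circ f:X\to Z$, i.e., the correspondence induced by $g\circ f$. In this way the usual category of schemes (or stacks) sits inside the category of correspondences.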
The diagram
\begin{align}\label{key,diag,bis}
\xymatrix{&\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta) \ar[ld]_{e_{1}, \ldots ,e_{n}}
\ar[rd]^{e_{n+1}} & \\ X^{n} &&
X } \end{align} is by definition a morphism $X^{n}\dasharrow X$ in $\mathbf{dSt}_{\mathbf{C}}^{\corr}$. Notice that the object in the middle of the correspondence is a derived stack, so we need to work in $\mathbf{dSt}_{\mathbf{C}}^{\corr}$ and not in the category of correspondences in schemes (or Deligne-Mumford stacks).
\subsection{Derived enhancement} \label{sec:deriv,enhanc,under}
\subsubsection{Derived enhancement of $\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta)$.} \label{sec:deriv-enhanc-mathbbr}
Here we follow the idea of Sch\"urg-To\"en-Vezzosi \cite{MR3341464} with a small modification. Let $g,n \in\mathbb{N}$ and $\beta\in H_{2}(X,\mathbb{Z})$. Recall the moduli space $\mathfrak{M}_{g,n,\beta}$ defined before Theorem \ref{thm,Costello}. We denote the relative internal hom in derived stacks by \begin{align}\label{eq:8}
  \mathbb{R}\Hom_{\mathrm{dst}_k/\mathfrak{M}_{g,n,\beta}}(\mathfrak{M}_{g,n+1,\beta}, X \times \mathfrak{M}_{g,n,\beta}) \end{align} As $\mathfrak{M}_{g,n+1,\beta}\to \mathfrak{M}_{g,n,\beta}$ is the universal curve, a point of $\mathbb{R}\Hom_{\mathrm{dst}_k/\mathfrak{M}_{g,n,\beta}}(\mathfrak{M}_{g,n+1,\beta}, X \times \mathfrak{M}_{g,n,\beta})$ is by definition a morphism $f:C\to X$ where $[C] \in \mathfrak{M}_{g,n,\beta}$. Notice that the degree of $f$ is not related to $\beta$ for the moment. The truncation of \eqref{eq:8} is \begin{displaymath}
\Hom_{\mathrm{dst}_k/\mathfrak{M}_{g,n,\beta}}(\mathfrak{M}_{g,n+1,\beta}, X \times \mathfrak{M}_{g,n,\beta}) \end{displaymath} and inside it, we have an immersion \begin{align}\label{eq:9}
  \overline{\mathcal{M}}_{g,n}(X,\beta) \hookrightarrow \Hom_{\mathrm{dst}_k/\mathfrak{M}_{g,n,\beta}}(\mathfrak{M}_{g,n+1,\beta}, X \times \mathfrak{M}_{g,n,\beta}) \end{align} given by stable maps $(\mathcal{C},\underline{\sigma},f:\mathcal{C}\to X)$ such that, on each irreducible component $\mathcal{C}_{i}$ of $\mathcal{C}$, the degree of $f\mid_{\mathcal{C}_{i}}$ is the label $\beta_{i}$, i.e., we have the equality $(f\mid_{\mathcal{C}_{i}})_{*}[\mathcal{C}_{i}]=\beta_{i}$. This immersion is open because the degree is discrete.
We will use the following result of Sch\"urg-To\"en-Vezzosi.
\begin{prop}[Proposition 2.1 in \cite{MR3341464}] Let $X$ be in $\mathbf{dSt}_{\mathbf{C}}$ and let
  $Y\hookrightarrow t_{0}(X)$ be an open immersion, where $t_{0}(X)$ is the truncation of $X$. Then there exists a unique derived enhancement of $Y$, denoted by $\widehat{Y}$, such that the
following diagram is cartesian
\begin{displaymath}
\xymatrix{Y\ar@^{(->}[r]^{open} \ar@^{(->}[d]_{closed} & t_{0}(X) \ar@^{(->}[d]^{closed}\\\widehat{Y} \ar@^{(->}[r]^-{open}& X}
\end{displaymath} \end{prop}
Taking $Y=\overline{\mathcal{M}}_{g,n}(X,\beta)$ and the open immersion \eqref{eq:9}, we get a derived enhancement, which we denote by $\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$. \begin{remark} To define the derived enhancement of the moduli space of stable maps $\overline{\mathcal{M}}_{g,n}(X,\beta)$, Sch\"urg-To\"en-Vezzosi (see \cite{MR3341464}) used the moduli space of prestable curves $\mathfrak{M}^{\pres}_{g,n}$ instead of Costello's moduli space $\mathfrak{M}_{g,n,\beta}$ in \eqref{eq:8}; accordingly, they used the universal curve of $\mathfrak{M}^{\pres}_{g,n}$ in \eqref{eq:8} instead of $\mathfrak{M}_{g,n+1,\beta}$. As we will see in the proof (see Section \ref{sec:proof-our-main}), the fact that $\mathfrak{M}_{g,n+1,\beta}$ is the universal curve is fundamental; this is the reason why we made this little change.
Notice that their derived enhancement is the same as ours, since the morphism $\mathfrak{M}_{g,n,\beta}\to \mathfrak{M}^{\pres}_{g,n}$ is étale (see \cite{MR2247968}). \end{remark}
\subsubsection{Definition of $\underline{\Hom}^{\corr}(X^{n},X)$} \label{sec:definition,End} The underline in the notation stands for the internal hom $\underline{\Hom}^{\corr}(X^{n},X)$. To be more precise, it is the sheaf \begin{displaymath} \underline{\Hom}^{\corr}(X^{n},X)(U):=\Hom^{\corr}(X^{n}\times U, X\times U) \end{displaymath} It turns out that this is a derived stack, because $\Hom^{\corr}(X^{n}\times U, X\times U)$ is the same as the category of derived stacks over $X^{n+1}\times U$.
By Yoneda's lemma, the morphism $\varphi_{n}$ of Theorem \ref{thm,main} is exactly given by an object in
$\Hom^{\corr}(X^{n}\times \overline{\mathcal{M}}_{0,n+1}, X\times \overline{\mathcal{M}}_{0,n+1})$
which is the diagram \eqref{key,diag}.
\subsection{Lax morphism} \label{sec:lax,morphism}
Recall that a classical morphism of operads is a commutative diagram \eqref{diag:morph,operad}. A \textit{lax morphism} is given by a collection of $2$-morphisms $(\alpha_{a,b})_{a,b\in\mathbb{N}}$ which are not required to be isomorphisms. \begin{displaymath}
\xymatrix{\mathcal{O}(a)\times
\mathcal{O}(b)\ar[r]^{(f_{a},f_{b})}\ar[d]_{\circ_{i}}&\mathcal{E}(a)\times \mathcal{E}(b) \ar@{=>}[ld]_{\alpha_{a,b}}
\ar[d]^{\circ_{i}}\\ \mathcal{O}(a+b-1) \ar[r]^{f_{a+b-1}}& \mathcal{E}(a+b-1)} \end{displaymath}
In the following, we explain in geometric terms why the morphism of Theorem \ref{thm,main} is only lax. Let $\sigma\in \overline{\mathcal{M}}_{0,a+1}$ and $\tau\in \overline{\mathcal{M}}_{0,b+1}$. Denote by $\mathbb{R}\overline{\mathcal{M}}_{0,a+1}^{\sigma}(X,\beta)$ (resp. $\mathbb{R}\overline{\mathcal{M}}_{0,b+1}^{\tau}(X,\beta)$) the fiber $p^{-1}(\sigma)$ (resp. $p^{-1}(\tau)$).
The composite $f_{a+b-1}\circ\circ_{i}$ is given by
\begin{align}
\label{eq:11}
\xymatrix{&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}_{0,a+b+1}(X,\beta) \ar[rd] \ar[ld]&\\X^{a+b}&&X}
\end{align}
The second composite $\circ_{1}\circ(f_{a},f_{b})$ is given by the following fiber product of correspondences. Let $\beta',\beta''$ be such that $\beta'+\beta''=\beta$.
\resizebox{1\linewidth}{!}{
\begin{minipage}{\linewidth}\begin{align}
\label{eq:10}
\xymatrix{&&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta)
\times_{X}\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}^{\tau}_{0,b+1}(X,\beta) \ar[rd]\ar[ld]&& \\&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta)\times X^{b}
  \ar[rd]^{e_{a+1},\Id_{X^{b}}} \ar[ld]_{e_{1}, \ldots ,e_{a},\Id_{X^{b}}} &&\coprod_{\beta}\mathbb{R}\overline{\mathcal{M}}^{\tau}_{0,b+1}(X,\beta)
  \ar[rd]^{e_{b+1}} \ar[ld]_{e_{1}, \ldots ,e_{b}} \\X^{a}\times X^{b} && X \times X^{b} && X} \end{align} \end{minipage} } Let us fix $\beta$. Finally, the $2$-morphism $\alpha$ is given by the gluing morphism \begin{align}
\label{eq:12}
\alpha :\coprod_{\stackrel{\beta',\beta''}{\beta'+\beta''=\beta}}\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta')
  \times_{X}\mathbb{R}\overline{\mathcal{M}}^{\tau}_{0,b+1}(X,\beta'') \to\mathbb{R}\overline{\mathcal{M}}^{\sigma\circ\tau}_{0,a+b+1}(X,\beta) \end{align} Notice that we can glue the stable maps $(C,x_{1}, \ldots ,x_{a+1},f)$ and $(\widetilde{C},\widetilde{x}_{1}, \ldots ,\widetilde{x}_{b+1},\widetilde{f})$ because the fiber product is over $X$, which means that $f(x_{a+1})=\widetilde{f}(\widetilde{x}_{1})$. This morphism $\alpha$ is surjective but not injective on points. To see the non-injectivity, consider Figure \ref{fig:geo}: the glued curves are the same. Notice that by the stability condition, we have $\beta_{2}\neq 0$. The two pairs of curves $(C_{1}\circ C_{2}, C_{3})$ and $(C_{1},C_{2}\circ C_{3})$ are in two different connected components of \begin{displaymath}
\coprod_{\stackrel{\beta',\beta''}{\beta'+\beta''=\beta}}\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta')
\times_{X}\mathbb{R}\overline{\mathcal{M}}^{\tau}_{0,b+1}(X,\beta''). \end{displaymath}
\begin{figure}
\caption{Geometric reason of the lax action}
\label{fig:geo}
\end{figure}
\section{Proof of our main result} \label{sec:proof-our-main}
\subsection{Brane action}
In this section, we explain the main theorem of \cite{Toen-operation-branes-2013}. This theorem has a lot of prerequisites (like $\infty$-operads, unital and coherent operads) that are too complicated for this survey. We refer to the definition of $\infty$-operads by Lurie \cite[Definition 2.1.1.8]{Lurie-higher-algebra} and to \cite[Definition 3.3.1.4]{Lurie-higher-algebra} for the notion of coherent $\infty$-operad.
\begin{thm}[see the main theorem of \cite{Toen-operation-branes-2013}]\label{thm:brane}
Let $\mathcal{O}^\otimes$ be an $\infty$-operad in the $\infty$-category of spaces such that
\begin{enumerate}
\item $\mathcal{O}^\otimes(0)=\mathcal{O}^\otimes(1)$ are contractible.
  \item the operad is unital and coherent.
  \end{enumerate} Then $\mathcal{O}(2)$ is an $\mathcal{O}^\otimes$-algebra in the $\infty$-category of co-correspondences. \end{thm}
\begin{example}
We will illustrate the hypothesis and the conclusion of this theorem for the operad
$\mathcal{O}(n):=\overline{\mathcal{M}}_{0,n+1}$. We choose this example because it is a
  well-known operad and it is easier to explain. Notice that to prove our main theorem (see \S \ref{thm:brane,Costello}), we need to
  apply it to another operad, namely $\coprod_{\beta} \mathfrak{M}_{0,n+1,\beta}$, but the main ideas are
the same. Notice that we set $\overline{\mathcal{M}}_{0,1}=\overline{\mathcal{M}}_{0,2}:=\pt$ (with the usual definition they are empty). By definition, we impose that $\mathcal{O}(1)$ is the unit. For the operad $\mathcal{O}$, the following diagram is cartesian (See below for an explanation). \begin{displaymath}
\xymatrix{\mathcal{O}(n)\times \mathcal{O}(m+1) \coprod_{\mathcal{O}(2)\times
\mathcal{O}(n)\times\mathcal{O}(m)}\mathcal{O}(n+1)\times\mathcal{O}(m) \ar[r]
    \ar[d]^{q}&\mathcal{O}(n+m) \ar[d]^{p}\\\mathcal{O}(n)\times\mathcal{O}(m) \ar[r]^{\circ}& \mathcal{O}(n+m-1)} \end{displaymath} This property was called being of ``configuration type'' in \cite{Toen-operation-branes-2013}. Notice that in the context of \cite[Definition 3.3.1.4]{Lurie-higher-algebra}, this notion is called ``coherence''. As $p$ is flat, it is enough to prove that the diagram is cartesian in the category of ordinary stacks. Let $(C_{1},x_{1}, \ldots ,x_{n+1})$ be in $\mathcal{O}(n)$ and $(C_{2},y_{1}, \ldots ,y_{m+1})$ be in $\mathcal{O}(m)$. As $\mathcal{O}(n+1)\to\mathcal{O}(n)$ is the universal curve, we deduce that $q^{-1}(C_{1},C_{2})=C_{1}\coprod_{\pt}C_{2}$, which is exactly $C_{1}\circ C_{2}$. This implies that the diagram above is cartesian.
Let us now explain the conclusion of this theorem. Notice that $\mathcal{O}(2)=\overline{\mathcal{M}}_{0,3}$ is a point. The statement means that we have a morphism of $\infty$-operads, that is, a family of morphisms
\begin{displaymath}
  \varphi_{n}: \mathcal{O}(n)\to \underline{\Hom}^{\mathbf{CoCorr}}(\coprod_{i=1}^{n}\mathcal{O}(2), \mathcal{O}(2)) \end{displaymath} where the morphisms $(\varphi_{n})$ are compatible with the composition laws. The notation $\underline{\Hom}$ has the same meaning as in \S \ref{sec:definition,End}. The category of co-correspondences is in the same spirit as that of correspondences (see \S \ref{sec:categ,corr}) but with the arrows in the other direction. The morphism $\varphi_{n}$ is given by the following diagram \begin{align}\label{eq:16} \xymatrix{ \mathcal{O}(n)\times \coprod_{i=1}^{n} \mathcal{O}(2)\ar[r]^-{\circ} \ar[rd]& \mathcal{O}(n+1) \ar[d] & \ar[l]_-{\circ'}\mathcal{O}(2)\times
  \mathcal{O}(n) \ar[ld] \\ & \mathcal{O}(n)&} \end{align} Let us explain this diagram with $\mathcal{O}(n)=\overline{\mathcal{M}}_{0,n+1}$. We have \begin{enumerate} \item The morphism $\mathcal{O}(n+1)\to \mathcal{O}(n)$ forgets the last marked point. \item The map $\circ:\mathcal{O}(n)\times \coprod_{i=1}^{n}\mathcal{O}(2) \to \mathcal{O}(n+1)$ is
  given by the $n$ possible gluings of the third marked point of $\mathcal{O}(2)=\overline{\mathcal{M}}_{0,3}$ with one of the marked points $x_{i}$ for $i\in \{1, \ldots ,n\}$ in $\mathcal{O}(n)$. \item The map $\circ'$ glues the last marked point $x_{n+1}$ of $\mathcal{O}(n)$ to the third marked point of $\mathcal{O}(2)$. \end{enumerate}
\end{example}
\subsection{Sketch of proof of Theorem \ref{thm,main}}
In this section, we explain how to apply Theorem \ref{thm:brane} to get our main theorem.
Here we take $\mathcal{O}(n)=\coprod_{\beta}\mathfrak{M}_{0,n+1,\beta}$. This is an operad in algebraic stacks. One can check that everything we said in the previous section for $\overline{\mathcal{M}}_{0,n+1}$ works as well for $\coprod_{\beta}\mathfrak{M}_{0,n+1,\beta}$.
Let $X$ be a smooth projective variety. We apply the functor $\mathbb{R}\Hom_{/\mathfrak{M}_{0,n+1,\beta}}(-,X\times \mathfrak{M}_{0,n+1,\beta})$ to Theorem \ref{thm:brane}. As the source curve of a stable map may not be a stable curve, we need to use Theorem \ref{thm:brane} with another operad than $\overline{\mathcal{M}}_{0,n+1}$; this is why we use $\coprod_{\beta}\mathfrak{M}_{0,n+1,\beta}$. We deduce the following result. \begin{thm}\label{thm:brane,Costello} The variety $X$ is an $\mathfrak{M}^{\otimes}$-algebra in the category of correspondences in derived stacks. The algebra structure is given by the correspondences \begin{displaymath}
\xymatrix{&\mathbb{R}\overline{\mathcal{M}}_{0,n+1}(X,\beta)\ar[rd] \ar[dl] & \\ X^{n}\times \mathfrak{M}_{0,n+1,\beta} && X \times \mathfrak{M}_{0,n+1,\beta}} \end{displaymath} \end{thm}
\begin{remark} To apply Theorem \ref{thm:brane}, we need to make several modifications.
\begin{enumerate}
\item Notice that in this statement the action is strong, that is, the $2$-morphisms of the lax structure are
  equivalences (see \S \ref{sec:lax,morphism}). The geometric reason is the following. We can repeat the construction of \S \ref{sec:lax,morphism} replacing $\overline{\mathcal{M}}_{0,n+1}$ by $\mathfrak{M}_{0,n+1,\beta}$. The difference is that the morphism $q:\overline{\mathcal{M}}_{0,n+1}(X,\beta)\to \mathfrak{M}_{0,n+1,\beta}$, which forgets the map $f$ and keeps the underlying labelled curve,
  does not contract any component of the curve. More precisely, let $\sigma \in \mathfrak{M}_{0,a+1,\beta'}$ and $\tau \in \mathfrak{M}_{0,b+1,\beta''}$. Denote by \begin{displaymath} \mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta')=q^{-1}(\sigma). \end{displaymath}
  Beware that in \S \ref{sec:lax,morphism}, we used $\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta')=p^{-1}(\sigma)$ where $p:\overline{\mathcal{M}}_{0,a+1}(X,\beta')\to \overline{\mathcal{M}}_{0,a+1}$. Writing the same kind of diagram as \eqref{eq:10}, we get the corresponding $\alpha$ given by
\begin{align}
\label{eq:31}
\widetilde{\alpha} :\mathbb{R}\overline{\mathcal{M}}^{\sigma}_{0,a+1}(X,\beta')
    \times_{X}\mathbb{R}\overline{\mathcal{M}}^{\tau}_{0,b+1}(X,\beta'') \to\mathbb{R}\overline{\mathcal{M}}^{\sigma\circ\tau}_{0,a+b+1}(X,\beta) \end{align} which is now an isomorphism because, given the glued curve, there is a unique way to cut it with respect to $\sigma$ and $\tau$.
\item Theorem \ref{thm:brane} applies only to operads in spaces, whereas here we have operads in
  derived stacks. This can be handled using non-planar rooted trees and dendroidal sets. More
  precisely, one can enrich $\infty$-operads using Segal functors from the nerve of $\Omega^{op}$
  to derived stacks. Thanks to the work of \cite{1606.03826} and \cite{1305.3658}, these two
  definitions coincide over topological spaces.
\item The condition $\mathcal{O}(0)=\mathcal{O}(1)=\pt$ is not satisfied by
  $\mathfrak{M}_{0,n,\beta}$. So we impose that for any $\beta\neq 0$,
  $\mathfrak{M}^{\fake}_{0,1,\beta}=\mathfrak{M}^{\fake}_{0,2,\beta}=\emptyset$ and that
  $\mathfrak{M}^{\fake}_{0,1,0}=\mathfrak{M}^{\fake}_{0,2,0}=\pt$, with
  $\mathfrak{M}^{\fake}_{0,2,0}$ being the neutral element.
\item Another issue is that $\mathfrak{M}_{0,n,\beta}$ is not a coherent operad, because the
  inclusion of schemes in derived stacks does not commute with pushouts, even along closed
  immersions. We only have a canonical morphism
\begin{displaymath}
\theta: C_{1}\coprod^{dst}_{\pt}C_{2} \to
C_{1}\coprod^{sch}_{\pt}C_{2}
\end{displaymath}
  Nevertheless, most of the proof of Theorem \ref{thm:brane} is
  still valid, and we know that the functor $\mathbb{R}\Hom (-,X)$ sends $\theta$ to an equivalence.
\end{enumerate} \end{remark}
The next step in order to prove Theorem \ref{thm,main} is to understand the morphism of operads \begin{displaymath}
  \coprod_{\beta}\mathfrak{M}_{0,n+1,\beta} \to \overline{\mathcal{M}}_{0,n+1}. \end{displaymath} Viewed in the world of $\infty$-operads, it turns out that this morphism is only a lax morphism of operads. This is the reason why the final action in Theorem \ref{thm,main} is lax.
\section{Comparison with other definitions}
\subsection{Quantum product in cohomology and in $G_{0}$-theory}
In this section, we review the definition of the quantum product in cohomology and in $G_{0}$-theory. Recall that $X$ is a smooth projective variety. Givental and Lee defined in \cite{MR2040281} the Gromov-Witten invariants in $G_{0}$-theory. For that they defined a virtual structure sheaf, denoted by $\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}$, on the moduli space of stable maps. Recall that the morphisms $e_{i}:\overline{\mathcal{M}}_{g,n}(X,\beta)\to X$ are the evaluation morphisms at the $i$-th marked point. For any $E_{1}, \ldots ,E_{n}\in G_0(X)$, the Gromov-Witten invariants in $G_{0}$-theory are \begin{displaymath}
\langle E_{1}, \ldots ,E_{n} \rangle^{G_{0}}_{0,n,\beta}:=\chi\left(\bigotimes_{i=1}^{n} e_{i}^{*}E_{i}
\otimes\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,n}(X,\beta)} \right) \in \mathbb{Z} \end{displaymath} where $\chi(.)$ is the Euler characteristic.
Let $\NE(X)$ be, as before, the semigroup of effective curve classes, that is, the subset of $H_{2}(X,\mathbb{Z})$ generated by images of curves in $X$. \begin{defn} Let $\gamma_{1},\gamma_{2} \in H^{*}(X)$. The quantum product in $H^{*}(X)$ is defined by \begin{align}\label{eq:21}
\gamma_{1}\bullet^{H^{*}} \gamma_{2}=\sum_{\beta \in \NE(X)} Q^{\beta}{\ev_{3}}_{*}\left(\ev_{1}^{*}\gamma_{1}\cup
\ev_{2}^{*}\gamma_{2}\cap [\overline{\mathcal{M}}_{0,3}(X,\beta)]^{\vir}\right). \end{align} \end{defn} One can see this product as a formal power series in $Q$. Hence, the quantum product lies in $H^{*}(X)\otimes \Lambda$ where $\Lambda$ is the Novikov ring i.e., it is the algebra generated by $Q^{\beta}$ for $\beta \in \NE(X)$.
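\begin{example}
  A classical illustration, which we recall only to fix ideas (see for instance \cite{Cox-Katz-Mirror-Symmetry}): for $X=\mathbb{P}^{n}$ with hyperplane class $H$, the quantum product is determined by
  \begin{displaymath}
    H^{i}\bullet^{H^{*}} H^{j}=
    \begin{cases}
      H^{i+j} & \text{if } i+j\leq n,\\
      Q\, H^{i+j-n-1} & \text{if } n< i+j\leq 2n,
    \end{cases}
  \end{displaymath}
  so that the quantum cohomology ring $H^{*}(\mathbb{P}^{n})\otimes\Lambda$ is isomorphic to $\Lambda[H]/(H^{n+1}-Q)$.
\end{example}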
We will recall the definition of the virtual class $\left[\overline{\mathcal{M}}_{0,n}(X,\beta)\right]^{\vir}$ (defined by Behrend-Fantechi) and the virtual
sheaf $\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}$ (defined by Lee \cite{MR2040281}) in \S \ref{sec:virtual-object-from-1} and \S \ref{sec:virtual-object-from}.
In $G_{0}$-theory, we define the quantum product with the following formula. \begin{defn}\label{eq:22}
Let $F_{1},F_{2} \in G_{0}(X)$. The quantum product in $G_{0}$-theory is defined to be the
element in $G_{0}(X)\otimes \Lambda$
\begin{displaymath}
\resizebox{1\linewidth}{!}{
\begin{minipage}{\linewidth}
\begin{align*}
F_{1}\bullet^{G_{0}} F_{2} =\sum_{\beta \in \NE(X)} Q^{\beta}{\ev_{3}}_{*}\left(\ev_{1}^{*}F_{1}\otimes
\ev_{2}^{*}F_{2}\otimes \sum_{r\in\mathbb{N}}\sum_{\stackrel{
(\beta_{0}, \ldots ,\beta_{r})\mid}{\sum\beta_{i}=\beta}}
(-1)^{r}\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,3}(X,\beta_{0})}\otimes
\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta_{1})} \cdots \otimes
\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta_{r})}\right)
\end{align*}
\end{minipage}} \end{displaymath} \end{defn}
The term $r=0$ in the formula of Definition \ref{eq:22} is of the same shape as \eqref{eq:21}. The other terms, i.e. $r>0$, should be understood as ``correction terms''.
\subsection{About the associativity} \label{sec:about-associativity}
The most important property of these two products is associativity. It is proved by Kontsevich-Manin \cite{MR1369420} (see also \cite{MR1492534}) that the quantum product in cohomology is associative. The key formula for the associativity is given in Theorem \ref{thm:gluing,virt,coho}, which describes how virtual classes behave with respect to the morphisms $\alpha$ and the gluing morphisms. Recall that the morphisms $\alpha$ are the ones that appear in the lax action \eqref{eq:12}.
Later, when Givental and Lee (see \cite{MR2040281}) defined a quantum product in $G_{0}$-theory, they wanted an associative product. If one uses the same kind of formula as in \eqref{eq:21}, the product is not associative. The key observation of Givental and Lee is Theorem \ref{thm:gluing,sheaf,virt}, the analogue of Theorem \ref{thm:gluing,virt,coho} in $G_{0}$-theory, which describes how the virtual sheaves behave with respect to the morphisms $\alpha$ and the gluing morphisms.
Our contribution to this question is Theorem \ref{thm:colim}, which is the geometric statement that explains both Theorem \ref{thm:gluing,virt,coho} and Theorem \ref{thm:gluing,sheaf,virt}.
Notice that Givental-Lee packed the complicated formula of Definition \ref{eq:22} in a very clever way. Notice that $\overline{\mathcal{M}}_{0,2}(X,0)=\overline{\mathcal{M}}_{0,2} \times X$ is empty. As before, put $\overline{\mathcal{M}}_{0,2}=\pt$. Then we put \begin{align}
\label{eq:23} \mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}}:= \mathcal{O}_{X} + \sum_{\stackrel{\beta\in
\NE(X)}{\beta\neq 0}}
  Q^{\beta}\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta)} \in G_{0}(X)\otimes \Lambda \end{align} Let us invert this class formally in $G_{0}(X)\otimes \Lambda$. The term in front of $Q^{\beta}$ in the inverse is \begin{align}\label{eq:24}
\sum_{r\in\mathbb{N}}\sum_{\stackrel{
(\beta_{0}, \ldots ,\beta_{r})\mid}{\sum\beta_{i}=\beta}}
(-1)^{r}\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta_{0})}\otimes
\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta_{1})} \cdots \otimes
  \mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{0,2}(X,\beta_{r})} \end{align} Formulas \eqref{eq:23} and \eqref{eq:24} explain the ``metric'' of Givental-Lee (see Formula (16) in \cite{MR2040281} for more details): one can express Formula \eqref{eq:22} in a compact form using the inverse of the metric.
\subsection{Key diagram}
Let us consider the following homotopical fiber product. Let $n_{1},n_{2}\in\mathbb{N}_{\geq 2}$. Put $n=n_{1}+n_{2}$. \begin{align}\label{eq:key,diag}
\xymatrix{ Z_{\beta}\ar[r]\ar[d]& \mathbb{R}\overline{\mathcal{M}}_{0,n}(X,\beta) \ar[d]^{p}\\
  \overline{\mathcal{M}}_{0,n_{1}+1} \times \overline{\mathcal{M}}_{0,n_{2}+1} \ar[r]^-{g}& \overline{\mathcal{M}}_{0,n}} \end{align} The fiber over a point $(\sigma,\tau)$ is the space denoted by $\overline{\mathcal{M}}^{\sigma\circ\tau}(X,\beta)$ in \S~\ref{sec:lax,morphism}, that is, the space of stable maps whose curve stabilizes to $\sigma\circ \tau$. In Figure \ref{fig:tree,p1}, we give an example of a point of the fiber over $\sigma\circ \tau$ with a tree of $\mathbb{P}^{1}$'s in the middle.
\begin{figure}
\caption{Example of a stable map above $\sigma\circ\tau$ with a tree of $\mathbb{P}^{1}$'s in the
  middle. The tree $C_{1}\circ C_{2} \circ C_{3}$ is contracted by $p$ to the node of $\sigma\circ \tau$.}
\label{fig:tree,p1}
\end{figure}
Using the universal property of the fiber product we get the morphism (see \eqref{eq:12}) \begin{align}\label{eq:alpha,asso}
\alpha
:\coprod_{\beta'+\beta''=\beta}\mathbb{R}\overline{\mathcal{M}}_{0,n_{1}+1}(X,\beta')\times_{X}\mathbb{R}\overline{\mathcal{M}}_{0,n_{2}+1}(X,\beta'')
\to Z_{\beta} \end{align} where the left hand side is defined by the following homotopical fiber product \begin{align}
\label{eq:13} \xymatrix{\mathbb{R}\overline{\mathcal{M}}_{0,n_{1}+1}(X,\beta')\times_{X}\mathbb{R}\overline{\mathcal{M}}_{0,n_{2}+1}(X,\beta'')
\ar[r] \ar[d]
&\mathbb{R}\overline{\mathcal{M}}_{0,n_{1}+1}(X,\beta')\times\mathbb{R}\overline{\mathcal{M}}_{0,n_{2}+1}(X,\beta'') \ar[d]^-{e_{1},e_{n_{2}+1}}\\
X \ar[r]^-{\Delta} & X\times X} \end{align}
The heart of the associativity of the quantum products in cohomology (see Theorem \ref{thm:gluing,sheaf,virt} for $G_{0}$-theory) is the following statement. \begin{thm}[Theorem 5.2 \cite{MR1467172}]\label{thm:gluing,virt,coho}
We have the following equality in the Chow ring of the truncation of $Z_{\beta}$.
\begin{align}
\label{eq:14}
    \alpha_{*}\left(\sum_{\beta'+\beta''=\beta} \Delta^{!}\left([\overline{\mathcal{M}}_{0,n_{1}+1}(X,\beta')]^{\vir}\otimes[\overline{\mathcal{M}}_{0,n_{2}+1}(X,\beta'')]^{\vir}\right)\right)=g^{!}[\overline{\mathcal{M}}_{0,n}(X,\beta)]^{\vir}
\end{align} \end{thm}
\begin{remark}\label{rem,orientation,coho}
In \cite{MR1431140}, Behrend proves that the virtual class satisfies five properties, called
\textit{orientation} (see \S 7 in \cite{MR1412436}), namely: mapping to a point, products, cutting
  edges, forgetting tails and isogenies. The formula \eqref{eq:14} is a combination of cutting
  edges and isogenies. \end{remark}
The analogous statement in $G_{0}$-theory needs a bit more notation. We denote \begin{displaymath}
  \mathbb{R}X_{g,n,\beta}:=\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta). \end{displaymath}
Let $r,n_{1},n_{2}$ be in $\mathbb{N}$ with $n_{1}+n_{2}=n$ and let $\beta$ be in $\NE(X)$. Let $\underline{\beta}=(\beta_{0}, \ldots ,\beta_{r})$ be a partition of $\beta$. Notice that there is only a finite number of such partitions.
We denote by \begin{displaymath} \resizebox{1\linewidth}{!}{
\begin{minipage}{\linewidth} \begin{align*}
\mathbb{R}{X}_{0,n_{1},n_{2},\underline{\beta}}:=\mathbb{R}X_{0,n_{1}+1,\beta_{0}}\times_{X}\mathbb{R}X_{0,2,\beta_{1}}\times_{X}\cdots\times_{X}\mathbb{R}X_{0,2,\beta_{r-1}}\times_{X}\mathbb{R}X_{0,n_{2}+1,\beta_{r}} \end{align*} \end{minipage}} \end{displaymath}
We generalize the situation of \eqref{eq:14} by the following homotopical cartesian diagram \begin{align}
\label{eq:13,diag,r} \xymatrix{ \mathbb{R}X_{0,n_{1},n_{2},\underline{\beta}}
\ar[r] \ar[d]
    &\mathbb{R}X_{0,n_{1}+1,\beta_{0}}\times\left(\prod_{k=1}^{r-1}\mathbb{R}X_{0,2,\beta_{k}}\right)\times\mathbb{R}X_{0,n_{2}+1,\beta_{r}} \ar[d]\\
X^{r} \ar[r]^-{\Delta^{r}} & (X\times X)^{r}} \end{align}
Gluing all the stable maps and using the universal property of $Z_{\beta}$, we have a morphism
\begin{align}\label{eq:alpha,r}
\alpha_{r}: \coprod_{\beta=\sum_{i=0}^{r}\beta_{i}}
\mathbb{R}X_{0,n_{1},n_{2},\underline{\beta}} \to Z_{\beta} \end{align}
Notice that $\alpha_{1}$ is the morphism $\alpha$ of \eqref{eq:12}.
Finally, we can state the analogue of Theorem \ref{thm:gluing,virt,coho} in $G_{0}$-theory. \begin{thm}[Proposition 11 in \cite{MR2040281}]\label{thm:gluing,sheaf,virt}
We have the following equality in the $G_{0}$-group of the truncation of $Z_{\beta}$.
\resizebox{1\linewidth}{!}{
\begin{minipage}{\linewidth} \begin{align*}
\label{eq:15}
\sum_{r\in\mathbb{N}} (-1)^{r}{\alpha_{r}}_{*}\left(\sum_{\sum_{i=0}^{r}\beta_{i}=\beta}
      (\Delta^{r})^{!}\left(\mathcal{O}^{\vir}_{X_{0,n_{1}+1,\beta_{0}}}\otimes\mathcal{O}^{\vir}_{X_{0,2,\beta_{1}}}\otimes\cdots
\otimes \mathcal{O}^{\vir}_{X_{0,2,\beta_{r-1}}}\otimes
\mathcal{O}^{\vir}_{X_{0,n_{2}+1,\beta_{r}}}\right)\right)=g^{!}\mathcal{O}^{\vir}_{X_{0,n,\beta}}
\end{align*}\end{minipage}} \end{thm}
\begin{remark}\label{rem:orientation,Ktheory}
\begin{enumerate}
\item Comparing Theorem \ref{thm:gluing,virt,coho} with Theorem \ref{thm:gluing,sheaf,virt}, we see that
  the formulas are more complicated in $G_{0}$-theory: moduli spaces
  of the kind $\overline{\mathcal{M}}_{0,2}(X,\beta)$ appear in $G_{0}$-theory. This corresponds to stable maps
  with a tree of $\mathbb{P}^{1}$'s in the middle (see Figure \ref{fig:tree,p1}). Notice that this is
  the same reason why the action of the main Theorem \ref{thm,main} is only lax. \item In $G_{0}$-theory there are also 5 axioms, called orientation (see Remark \ref{rem,orientation,coho}), for the virtual sheaf
$\mathcal{O}^{\vir}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}$. They are proved by Lee in \cite{MR2040281}.
\end{enumerate} \end{remark}
Denote by \begin{displaymath} X_{r,\beta}:= \coprod_{\sum\beta_{i}=\beta}\mathbb{R}X_{0,n_{1}+1,\beta_{0}}\times_{X}\mathbb{R}X_{0,2,\beta_{1}}\times_{X}\cdots \times_{X} \mathbb{R}X_{0,2,\beta_{r-1}}\times_{X}\mathbb{R}X_{0,n_{2}+1,\beta_{r}} \end{displaymath} We deduce a semi-simplicial object in the category of derived stacks, where the $r+1$ morphisms $X_{r+1,\beta}\to X_{r,\beta}$ are given by gluing two consecutive stable maps together. We have
\begin{tikzpicture}
\matrix (m) [matrix of math nodes, row sep=3em, column sep=3.2pc,
text width=2pc, text height=1pc, text depth=.5pc] {
X_{0,\beta} & X_{1,\beta} & X_{2,\beta} & \cdots \\
};
\path[<-] (m-1-1.15) edge node[above] {} (m-1-2.165) (m-1-1.-15) edge (m-1-2.-165); \path[<-] (m-1-2.28) edge node[above] {} (m-1-3.152) (m-1-2) edge (m-1-3) (m-1-2.-28) edge (m-1-3.-152); \path[<-] (m-1-3.37) edge node (t) {} node[above] {} (m-1-4.143) (m-1-3.-37) edge node (b) {} (m-1-4.-143);
\path[dotted] (t) edge (b); \end{tikzpicture}
Moreover, for any $r$ we have a morphism $X_{r,\beta}\to Z_{\beta}$ given by gluing all the stable maps, hence a morphism $\colim X_{\bullet,\beta} \to Z_{\beta}$.
The following theorem was not proved in \cite{2015arXiv150502964M}. We will prove it in the appendix. \begin{thm}\label{thm:colim}
We have that $\colim X_{\bullet,\beta} =Z_{\beta}$. \end{thm}
\subsection{Virtual object from derived algebraic geometry} \label{sec:virtual-object-from-1}
In this section, we explain how derived algebraic geometry will provide a sheaf in $G_{0}(\overline{\mathcal{M}}_{g,n}(X,\beta))$ that we will compare to the virtual sheaf of Lee.
\begin{lem}[See for example \cite{MR3285853} p.192-193]\label{lem:K,iso} Let $X$ be a derived algebraic stack. Denote by $t_{0}(X)$ its truncation and let $\iota:t_{0}(X)\hookrightarrow X$ be the closed embedding. The morphism $\iota_{*}:G_{0}(t_{0}(X)) \to G_0(X)$ is an isomorphism. Moreover we have \begin{displaymath}
(\iota_{*})^{-1}[\mathcal{F}]= \sum_{i}(-1)^{i}[\pi_{i}(\mathcal{F})] \end{displaymath} \end{lem}
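\begin{example}
  A minimal example, just to illustrate the lemma: let $Z$ be the derived self-intersection of the origin in the affine line, i.e., $Z=\Spec\left(k[x]/(x)\otimes^{\mathbb{L}}_{k[x]}k[x]/(x)\right)$. Its truncation is the point, and the Koszul complex computes $\pi_{0}(\mathcal{O}_{Z})=\pi_{1}(\mathcal{O}_{Z})=k$ and $\pi_{i}(\mathcal{O}_{Z})=0$ for $i\geq 2$. Hence $(\iota_{*})^{-1}[\mathcal{O}_{Z}]=[k]-[k]=0$ in $G_{0}(\pt)$, which reflects the fact that two points of the affine line can be moved apart.
\end{example}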
Applying this lemma to $\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$, we put \begin{displaymath}
  \left[\mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}^{\vir,\DAG}\right]:=\iota_{*}^{-1}[\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}]. \end{displaymath} where DAG stands for Derived Algebraic Geometry. Notice that the sheaf $\mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}^{\vir,\DAG}$ depends on the derived structure that we put on the moduli space of stable maps.
The following theorem was not stated in \cite{2015arXiv150502964M}.
\begin{thm}\label{thm:orientation}
The DAG-virtual sheaf
  $\mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}^{\vir,\DAG}$ satisfies the orientation axioms in $G_{0}$-theory. That is: \begin{enumerate} \item Mapping to a point. For $\beta=0$, we have
\begin{displaymath}
\mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,0)}^{\vir,\DAG}=\sum_{i}(-1)^{i}\wedge^{i}(R^{1}\pi_{*}\mathcal{O}_{\mathcal{C}}\boxtimes
T_{X})^{\vee}
\end{displaymath} where $\mathcal{C}$ is the universal curve of $\overline{\mathcal{M}}_{g,n}$ and $\pi:\mathcal{C}\to \overline{\mathcal{M}}_{g,n}$. \item Product. We have
\begin{displaymath} \mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g_{1},n_{1}}(X,\beta_{1})\times \overline{\mathcal{M}}_{g_{2},n_{2}}(X,\beta_{2})}=\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g_{1},n_{1}}(X,\beta_{1})}\boxtimes\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g_{2},n_{2}}(X,\beta_{2})}
\end{displaymath} \item Cutting edges. With the notation of Diagram \eqref{eq:13}, we have \begin{displaymath}
\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g_{1},n_{1}}(X,\beta_{1})\times_{X}\overline{\mathcal{M}}_{g_{2},n_{2}}(X,\beta_{2})}
  =\Delta^{!}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g_{1},n_{1}}(X,\beta_{1})\times \overline{\mathcal{M}}_{g_{2},n_{2}}(X,\beta_{2})} \end{displaymath} \item Forgetting tails. Forgetting the last marked point, we get a morphism
$\pi:\overline{\mathcal{M}}_{g,n+1}(X,\beta)\to \overline{\mathcal{M}}_{g,n}(X,\beta)$. We have
the following equality. \begin{displaymath}\pi^{*}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}= \mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n+1}(X,\beta)}.
  \end{displaymath} \item Isogenies. There are two formulas. The morphism $\pi$ above induces a morphism
  $\psi: \overline{\mathcal{M}}_{g,n+1}(X,\beta)\to \overline{\mathcal{M}}_{g,n+1}
  \times_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_{g,n}(X,\beta)$. Denoting by
  $c:\overline{\mathcal{M}}_{g,n+1}\to \overline{\mathcal{M}}_{g,n}$ the universal curve (as in the proof below), we have \begin{displaymath}
    \psi_{*}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n+1}(X,\beta)}=c^{!}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}.
\end{displaymath} The second formula is \begin{displaymath}
\sum_{r\in\mathbb{N}} (-1)^{r}{\alpha_{r}}_{*}\sum_{\sum_{i=0}^{r}\beta_{i}=\beta}
\mathcal{O}^{\vir,\DAG}_{X_{0,n_{1}+1}(X,\beta_{0})\times_{X}X_{0,2}(X,\beta_{1})\times_{X}\cdots
\times_{X}X_{0,2}(X,\beta_{r-1})\times_{X}X_{0,n_{2}+1}(X,\beta_{r})}=g^{!}\mathcal{O}^{\vir,\DAG}_{X_{0,n}(X,\beta)} \end{displaymath} where $g$ is defined in the key diagram \eqref{eq:key,diag}. \end{enumerate} \end{thm}
Before proving this theorem, we need a preliminary result. Consider the following diagram of schemes, in which the square is homotopy cartesian. \begin{displaymath}
  \xymatrix{X':=X\times_{Y}Y'\ar[rd]^{\iota} \ar@/^2pc/[rrd] \ar@/_2pc/[ddr]&&\\&X\times^{h}_{Y}Y'\ar[r]^-{\widetilde{f}} \ar[d]_-{g}&Y'\ar[d]\\&X \ar[r]^-{f}&Y} \end{displaymath} Denote by $X\times_{Y}^{h}Y'$ the homotopical pullback, so that we have the closed immersion $\iota : X' \to X\times_{Y}^{h}Y'$. Assume that $f$ is a regular closed immersion. We have a refined Gysin morphism (see \cite[p.4]{MR2040281}, \cite[ex.18.3.16]{MR1644323} or chapter 6 in \cite{MR801033}) which turns out to be \begin{align}\label{eq:19}
f^{!}:G(Y') &\to G(X') \\ \nonumber [\mathcal{F}_{Y'}]& \mapsto (\iota_{*})^{-1}\circ\widetilde{f}^{*} [\mathcal{F}_{Y'}]. \end{align}
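Unravelling this definition with Lemma \ref{lem:K,iso}, the refined Gysin morphism can be written as an alternating sum of Tor sheaves (this reformulation is standard):
\begin{displaymath}
  f^{!}[\mathcal{F}_{Y'}]=\sum_{i}(-1)^{i}\left[\mathcal{T}or_{i}^{\mathcal{O}_{Y}}(\mathcal{O}_{X},\mathcal{F}_{Y'})\right],
\end{displaymath}
since the homotopy sheaves of the derived pullback $\widetilde{f}^{*}\mathcal{F}_{Y'}$ are exactly these Tor sheaves.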
\begin{proof}[Proof of Theorem \ref{thm:orientation}]
(1). Perhaps surprisingly, this proof is not easy and we postpone it to Appendix~\ref{sec:proof-theorem}. (2). This follows from the K\"unneth formula.
(3). We have the following diagram. \begin{displaymath}
\xymatrix{X_{g_{1},n_{1},\beta_{1}} \times_{X}X_{g_{2},n_{2},\beta_{2}}\ar[d]_{k}\ar@/^2pc/[rd]^-{h}& \\ X_{g_{1},n_{1},\beta_{1}} \times^{h}_{X}X_{g_{2},n_{2},\beta_{2}}\ar[d]_-{j}\ar[r]^-{g}& X_{g_{1},n_{1},\beta_{1}} \times X_{g_{2},n_{2},\beta_{2}}\ar[d]^-{i} \\
\mathbb{R}X_{g_{1},n_{1},\beta_{1}}
\times_{X}\mathbb{R}X_{g_{2},n_{2},\beta_{2}} \ar[d] \ar[r]^-{f}&\mathbb{R}X_{g_{1},n_{1},\beta_{1}}
\times\mathbb{R}X_{g_{2},n_{2},\beta_{2}} \ar[d]^{e_{i},e_{j}} \\ X \ar[r]^-{\Delta}& X\times X} \end{displaymath} We deduce the following equalities \begin{align*}
\Delta^{!}\mathcal{O}^{\vir,\DAG}_{X_{g_{1},n_{1},\beta_{1}}\times
X_{g_{2},n_{2},\beta_{2}}}&=\Delta^{!}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}X_{g_{1},n_{1},\beta_{1}}\times
\mathbb{R}X_{g_{2},n_{2},\beta_{2}}}\\ &= (k_{*})^{-1}g^{*}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}X_{g_{1},n_{1},\beta_{1}}\times \mathbb{R}X_{g_{2},n_{2},\beta_{2}}}&
  & \mbox{by definition of the refined Gysin morphism} \\ &=(k_{*})^{-1}(j_{*})^{-1}f^{*}\mathcal{O}_{\mathbb{R}X_{g_{1},n_{1},\beta_{1}}\times
\mathbb{R}X_{g_{2},n_{2},\beta_{2}}} & &\mbox {by derived base change}\\ &= (k_{*})^{-1}(j_{*})^{-1}\mathcal{O}_{\mathbb{R}X_{g_{1},n_{1},\beta_{1}}\times_{X}
\mathbb{R}X_{g_{2},n_{2},\beta_{2}}} \\ &= \mathcal{O}^{\vir,\DAG}_{X_{g_{1},n_{1},\beta_{1}}\times_{X}
X_{g_{2},n_{2},\beta_{2}}} \end{align*}
(4). As $\widetilde{\pi}:\mathbb{R}\overline{\mathcal{M}}_{g,n+1}(X,\beta)\to \mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$ is the universal curve (hence flat) and $\pi$ is the truncation of $\widetilde{\pi}$, the derived base change formula implies the equality.
(5). We have the following diagram \begin{displaymath}
  \xymatrix{\overline{\mathcal{M}}_{g,n+1}(X,\beta)\ar[r]^-{\psi} \ar[d]_-{k}&\overline{\mathcal{M}}_{g,n+1}\times_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[r]^-{a}
\ar[d]^-{j}&\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[d]_-{i}\\ \mathbb{R}\overline{\mathcal{M}}_{g,n+1}(X,\beta)\ar[r]^-{\varphi} \ar@/_2pc/[rd] &\overline{\mathcal{M}}_{g,n+1}\times_{\overline{\mathcal{M}}_{g,n}}\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[r]^-{b}\ar[d]&\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[d]\\ &\overline{\mathcal{M}}_{g,n+1} \ar[r]^-{c}& \overline{\mathcal{M}}_{g,n}} \end{displaymath} Notice that as $c$ is flat, the upper right square is also $h$-cartesian. We have \begin{align*}
c^{!}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}&=c^{!}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}\\ &=a^{*}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}\\ &= (j_{*})^{-1}b^{*}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}& \mbox{by derived base change}\\ &=(j_{*})^{-1}\mathcal{O}_{\overline{\mathcal{M}}_{g,n+1}\times_{\overline{\mathcal{M}}_{g,n}}\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)} \end{align*} On the other hand, we have \begin{align*}
  \psi_{*}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n+1}(X,\beta)}
&=\psi_{*}(k_{*})^{-1}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n+1}(X,\beta)}\\
&= (j_{*})^{-1}\varphi_{*}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n+1}(X,\beta)} & \end{align*} The formula follows from the equality below which is a consequence of the proof of Proposition 9 in \cite{MR2040281}. \begin{displaymath}
  \varphi_{*}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n+1}(X,\beta)}=\mathcal{O}_{\overline{\mathcal{M}}_{g,n+1}\times_{\overline{\mathcal{M}}_{g,n}}\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}
To prove the second formula of (5), we use the key diagram \eqref{eq:key,diag} together with Theorem \ref{thm:colim}. Let $g_{1},g_{2},n_{1},n_{2}$ be integers. Put $g=g_{1}+g_{2}$ and $n=n_{1}+n_{2}$ and denote $\overline{\mathcal{M}}_{i}:=\overline{\mathcal{M}}_{g_{i},n_{i}+1}$. \begin{displaymath}
  \xymatrix{t_{0}(Z_{\beta})\ar[d]^-{k}\ar@/^{2pc}/[rd]^-{a}\\ \left(\overline{\mathcal{M}}_{1}\times \overline{\mathcal{M}}_{2}\right)\times^{h}_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[r]^-{b}
\ar[d]^-{j}&\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[d]_-{i}\\ Z_{\beta}\ar[r]^-{c}\ar[d]&\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)\ar[d]\\ \overline{\mathcal{M}}_{1}\times \overline{\mathcal{M}}_{2} \ar[r]^-{g}& \overline{\mathcal{M}}_{g,n}} \end{displaymath}
We have \begin{align*}
g^{!}\mathcal{O}^{\vir,\DAG}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}&= g^{!}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}\\ &=(k_{*})^{-1}b^{*}(i_{*})^{-1}\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}\\ &=(k_{*}^{-1})(j_{*})^{-1}c^{*} \mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}&
\mbox{by derived base change
  }\\ &=(j\circ k)_{*} ^{-1}\mathcal{O}_{Z_{\beta}} \end{align*} We deduce the formula by observing that $Z_{\beta}$ is the colimit of $X_{\bullet,\beta}$ (see Theorem \ref{thm:colim}) and that, in $G_{0}$-theory, the structure sheaf of this colimit is the alternating sum of the $\mathcal{O}_{X_{r,\beta}}$.
\end{proof}
The last formula of Theorem \ref{thm:orientation} and the third one imply the following corollary.
\begin{cor} We have the following equality in $G_0(t_{0}(Z_{\beta}))$.\\ \resizebox{1\linewidth}{!}{
\begin{minipage}{\linewidth} \begin{align*}
\label{eq:16}
\sum_{r\in\mathbb{N}} (-1)^{r}{\alpha_{r}}_{*}\left(\sum_{\sum_{i=0}^{r}\beta_{i}=\beta}
(\Delta^{r})^{!}\left(\mathcal{O}^{\vir,\DAG}_{X_{0,n_{1}+1}(X,\beta_{0})}\otimes\mathcal{O}^{\vir,\DAG}_{X_{0,2}(X,\beta_{1})}\otimes\cdots
\otimes \mathcal{O}^{\vir,\DAG}_{X_{0,2}(X,\beta_{r-1})}\otimes
\mathcal{O}^{\vir,\DAG}_{X_{0,n_{2}+1}(X,\beta_{r})}\right)\right)=g^{!}\mathcal{O}^{\vir,\DAG}_{X_{0,n}(X,\beta)}
\end{align*}\end{minipage}} \end{cor}
\subsection{Virtual object from perfect obstruction theory} \label{sec:virtual-object-from}
Here we follow the approach of Behrend-Fantechi \cite{MR1437495} to construct the virtual object.
In the following, we denote by $\mathcal{M}$ a Deligne-Mumford stack. The reader can keep in mind the example $\mathcal{M}=\overline{\mathcal{M}}_{0,n}(X,\beta)$.
\begin{defn}
  Let $\mathcal{M}$ be a Deligne-Mumford stack. An element $E^{\bullet}$ of the derived category $D(\mathcal{M})$, concentrated
  in degrees $[-1,0]$, is a perfect obstruction theory for $\mathcal{M}$ if it comes with a morphism $\varphi:E^{\bullet}\to
  \mathbb{L}_{\mathcal{M}}$ that satisfies
\begin{enumerate}
\item $h^{0}(\varphi)$ is an isomorphism,
\item $h^{-1}(\varphi)$ is surjective.
\end{enumerate} \end{defn}
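\begin{example}
  The basic example to keep in mind is the zero locus of a section. Let $Y$ be a smooth variety, $E$ a vector bundle on $Y$, $s$ a section of $E$ and $\mathcal{M}=Z(s)$ its zero locus, with inclusion $f:\mathcal{M}\hookrightarrow Y$. Then the two-term complex
  \begin{displaymath}
    E^{\bullet}=\left[f^{*}E^{\vee}\longrightarrow f^{*}\Omega_{Y}\right],
  \end{displaymath}
  placed in degrees $-1$ and $0$, where the arrow is induced by differentiating $s$, together with its natural morphism to $\mathbb{L}_{\mathcal{M}}$, is a perfect obstruction theory for $\mathcal{M}$.
\end{example}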
Let $E^{\bullet}$ be a perfect obstruction theory. Following \cite{MR1437495}, we have the following morphisms. \begin{enumerate} \item The morphism $a:C_{\mathcal{M}}\to h^{1}/h^{0}(E^{\vee}_{\bullet})$, where $C_{\mathcal{M}}$ is the intrinsic normal cone and $h^{1}/h^{0}(E_{\bullet}^{\vee})$ is the quotient stack $[E_{-1}^{\vee}/E_{0}^{\vee}]$. To understand how to construct this morphism, let us simplify the situation. Assume that $\mathcal{M}$ is embedded in something smooth, i.e $f:\mathcal{M}\hookrightarrow Y$ is a closed embedding with ideal sheaf $\mathcal{I}$. Then the intrinsic normal cone is the quotient stack $C_{\mathcal{M}}=[C_{\mathcal{M}}Y/f^{*}TY]$ where $C_{\mathcal{M}}Y:=\Spec \oplus_{n\geq 0} \mathcal{I}^{n}/\mathcal{I}^{n+1}$ is the normal cone of $f$. In this case, the intrinsic normal sheaf is $N_{\mathcal{M}}=[N_{\mathcal{M}}Y/f^{*}TY]=h^{1}/h^{0}(\mathbb{L}_{\mathcal{M}}^{\vee})$ where $N_{\mathcal{M}}Y:=\Spec \Sym \mathcal{I}/\mathcal{I}^{2}$. As we have a morphism from the normal cone to the normal sheaf $C_{\mathcal{M}}Y \to N_{\mathcal{M}}Y$, we deduce a morphism from the intrinsic normal cone to the intrinsic normal sheaf i.e., a morphism
\begin{equation}\label{eq:17}
C_{\mathcal{M}}\to N_{\mathcal{M}}
  \end{equation} Now the morphism of the perfect obstruction theory $\varphi:E^{\bullet}\to \mathbb{L}_{\mathcal{M}}$ induces a morphism \begin{equation}
\label{eq:18}
N_{\mathcal{M}} \to [E_{-1}^{\vee}/E_{0}^{\vee}]
\end{equation} The morphism $a$ is the composition of the two morphisms \eqref{eq:17} and \eqref{eq:18}. \item We also have a natural morphism $b:\mathcal{M}\to h^{1}/h^{0}(E_{\bullet}^{\vee})$ given by the zero section. \end{enumerate}
From these two morphisms, we can perform the homotopical fiber product \begin{align}\label{eq:1}
\xymatrix{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}} \ar[r]\ar[d]^{r}& C_{\mathcal{M}}\ar[d] \\ \mathcal{M}
\ar[r] &h^{1}/h^{0}(E_{\bullet}^{\vee})} \end{align} As the standard fiber product is $\mathcal{M}$, we have that $\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}$ is a derived enhancement of $\mathcal{M}$ with $\widetilde{j}:\mathcal{M}\to \mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}$ the canonical closed embedding. Notice that in the case $\mathcal{M}=\overline{\mathcal{M}}_{g,n}(X,\beta)$, we get a
derived enhancement which is different from $\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$ (see
Remark \ref{rk:different,retract}). We
  will compare these two structures in \S~\ref{sec:comp-theor-two}. Hence we can apply Lemma \ref{lem:K,iso} and we set \begin{align}\label{eq:vir,POT}
[\mathcal{O}_{\mathcal{M}}^{\vir,\POT}]:=\widetilde{j}_{*}^{-1}[\mathcal{O}_{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}}]
\in G_0(\mathcal{M}) \end{align} where POT means Perfect Obstruction Theory. The definition of Lee for the virtual sheaf turns to be exactly this one. Indeed, Lee consider the following (not homotopical) la cartesian diagram \begin{align}\label{eq:diag,POT}
\xymatrix{\mathcal{M}\times_{E_{-1}^{\vee}} C_{1} \ar[r] \ar[d]^{r}&C_{1}\ar[r] \ar[d]& C_{\mathcal{M}}\ar[d] \\ \mathcal{M}\ar[r]&
E_{-1}^{\vee}\ar[r]&h^{1}/h^{0}(E_{\bullet}^{\vee})} \end{align}
In \cite[p.8]{MR2040281}, Lee takes as a definition for the virtual sheaf \begin{align*} \mathcal{O}_{\mathcal{M}}^{\vir}:= \sum_{i}(-1)^{i} \mathcal{T}or_{i}^{h^{1}/h^{0}}(\mathcal{O}_{\mathcal{M}},\mathcal{O}_{C_{1}})= \mathcal{O}_{\mathcal{M}}\otimes^{\mathbb{L}}_{h^{1}/h^{0}}\mathcal{O}_{C_{1}}=\mathcal{O}_{\mathcal{M}}^{\vir,\POT} \end{align*} where the last equality follows from Lemma \ref{lem:K,iso}.
\subsection{Comparison theorem of the two approaches} \label{sec:comp-theor-two}
Let $\mathcal{M}:=\overline{\mathcal{M}}_{0,n}(X,\beta)$. In this section, we want to compare $\mathcal{O}_{\mathcal{M}}^{\vir,\DAG}$ with $\mathcal{O}_{\mathcal{M}}^{\vir,\POT}$. The first question is: which perfect obstruction theory are we choosing?
This is given by the following result. \begin{prop}[\cite{2011-Schur-Toen-Vezzosi}]
Let $\mathbb{R}\mathcal{M}$ be a derived Deligne-Mumford stack. Denote by $\mathcal{M}$ its truncation and
its truncation morphism by $j:\mathcal{M}\hookrightarrow \mathbb{R}\mathcal{M}$. Then $j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}}\to \mathbb{L}_{\mathcal{M}}$ is a perfect obstruction theory. \end{prop}
Now the original question makes perfect sense, and we have the following result, which says that the two classes agree.
\begin{thm}[See Proposition 4.3.2 in \cite{2015arXiv150502964M}]\label{thm,O,pot=Dag} In $G_0(\mathcal{M})$, we have \begin{displaymath}
[\mathcal{O}_{\mathcal{M}}^{\vir,\DAG}] =[\mathcal{O}_{\mathcal{M}}^{\vir,\POT}] \end{displaymath} \end{thm}
\begin{remark}\label{rk:different,retract}
Notice that the two enhancements $\mathbb{R}\mathcal{M}$ or
$\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}$ are not the same. Indeed,
the second one has a retract
$r:\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}\to \mathcal{M}$ given in
the diagram \eqref{eq:1} that is $r \circ \widetilde{j} =\Id_{\mathcal{M}}$ where $\widetilde{j}$
is the closed immersion from $\mathcal{M}$ to
$\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}$. From this we get the
following exact triangle of cotangent complexes \begin{align} & \mathbb{L}_{\widetilde{j}}[-1]\to \widetilde{j}^{*}\mathbb{L}_{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}}
\to \mathbb{L}_{\mathcal{M}} \label{eq:6}\\ & r^{*}\mathbb{L}_{\mathcal{M}}\to \mathbb{L}_{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}} \to\mathbb{L}_{r} \label{eq:4} \end{align} Applying $\widetilde{j}^{*}$ to the second line, we get \begin{displaymath}
\mathbb{L}_{\mathcal{M}}\to \widetilde{j}^{*}\mathbb{L}_{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}} \to \widetilde{j}^{*}\mathbb{L}_{r} \end{displaymath} This means that \eqref{eq:6} splits, that is, \begin{align}\label{eq:spli}
\widetilde{j}^{*}\mathbb{L}_{\mathcal{M}\times_{h^{1}/h^{0}(E^{\vee}_{\bullet})}^{h}C_{\mathcal{M}}}=\mathbb{L}_{\widetilde{j}}[-1]\oplus \mathbb{L}_{\mathcal{M}} \end{align} Comparing with the cotangent complex of $\mathbb{R}\mathcal{M}$, which has no reason to split, we get a priori two different derived enhancements of $\mathcal{M}$. \end{remark}
Notice that Fantechi-G\"ottsche \cite[Lemma 3.5]{MR2578301} (see also Roy Joshua \cite{Roy-joshua}) prove that for a scheme $X$ with a perfect obstruction theory $E^{\bullet}:=[E^{-1}\to E^{0}]$, we have \begin{align}
\label{eq:20} \tau_{X}(\mathcal{O}_{X}^{\vir,\POT})=\Td (TX^{\vir})\cap [X^{\vir,\POT}] \end{align} where $TX^{\vir}\in G_{0}(X)$ is the class of $[E_{0}]-[E_{1}]$, where $[E_{0}\to E_{1}]$ is the dual complex of $E^{\bullet}$, and $\tau_{X}:G_{0}(X) \to A_{*}(X)_{\mathbb{Q}}$ denotes the Riemann-Roch transformation.
Notice that the Formula \eqref{eq:20} with Theorem \ref{thm,O,pot=Dag} implies that \begin{displaymath} [\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\vir,\POT}=\tau(\mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)})\Td(T_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)})^{-1} \end{displaymath}
\appendix
\section{Proof of theorem \ref{thm:colim}} \label{sec:colimit-proof}
\begin{thm} The map $$ f: \colim^{\mathrm{DM}}\, X_{\bullet, \beta} \to Z_\beta $$ \noindent of \cite[(4.2.9)]{2015arXiv150502964M} is an equivalence of derived Deligne-Mumford stacks. \begin{proof} It follows from the discussion in the proof of \cite[Prop. 4.2.1]{2015arXiv150502964M} that
\BarrBeckTriangle{\Perf(Z_\beta); \Perf(\colim^{\mathrm{DM}}\, X_{\bullet, \beta}); \lim_{\Delta} \Perf(X_{\bullet, \beta});f^*;g;h}
\noindent commutes, with the morphism $h$ being an equivalence by h-descent for perfect complexes \cite[4.12]{1402.3204} and the morphism $g$ being fully faithful by the result on gluing along closed immersions \cite[16.2.0.1]{Lurie-SAG}. This immediately implies that the map $f^*$ is an equivalence of categories, because we have $g\circ f^{*}=h$ and $g$ is conservative as it is fully faithful.
As both the source and the target of $f$ are perfect stacks (the first being a colimit of perfect stacks along closed immersions and the second being a pullback of perfect stacks), $f^*$ induces an equivalence
$$ \xymatrix{\Qcoh(Z_\beta)\ar[r]^-{f^*}&\Qcoh(\colim^{\mathrm{DM}}\, X_{\bullet, \beta})}$$
We conclude that $f$ is an equivalence using Tannakian duality \cite[9.2.0.2 ]{Lurie-SAG}.\\
\end{proof} \end{thm}
\section{Proof of Theorem \ref{thm:orientation}.(1)} \label{sec:proof-theorem}
Let $X$ be a derived stack. We will use the linear derived stacks $\mathbb{V}(\mathbb{E})$ (see \cite[p.200]{MR3285853}), where $\mathbb{E}$ is a complex of quasi-coherent sheaves on $X$. We have a morphism $\mathbb{V}(\mathbb{E})\to X$ and a zero section $s:X\to \mathbb{V}(\mathbb{E})$. One should think of $\mathbb{V}(\mathbb{E})$ as a vector bundle whose fibers are given by $\mathbb{E}$.
It is a derived generalisation of $\mathbf{\Spec} \Sym \mathcal{E}$ for a coherent sheaf $\mathcal{E}$. If $\mathbb{E}$ is a two-term complex with cohomology in degrees $0$ and $1$, then we have that $t_{0}(\mathbb{V}(\mathbb{E}^{\vee}[-1]))=[h^{1}/h^{0}(\mathbb{E})]$ (see \S 2 in \cite{MR1437495} for the definition of the quotient stacks).
Let us recall some notation from \S \ref{sec:virtual-object-from} and \S \ref{sec:comp-theor-two}. Let
$g,n \in\mathbb{N}$ and $\beta\in H_{2}(X,\mathbb{Z})$. Denote by $j$ the closed immersion
$\overline{\mathcal{M}}_{g,n}(X,\beta)\to \mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$. To
simplify the notation, put $\mathcal{M}=\overline{\mathcal{M}}_{g,n}(X,\beta)$ and
$\mathbb{R}\mathcal{M}=\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$.
From the exact triangle \[ j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}} \to \mathbb{L}_{\mathcal{M}} \to \mathbb{L}_{j} \] we deduce the following cartesian diagram
\begin{align}
\xymatrix{ \mathbb{V}(\mathbb{L}_{j}[-1]) \ar[r] \ar[d]& \mathbb{V}(\mathbb{L}_{\mathcal{M}}[-1]) \ar[d] \\ \mathcal{M}
\ar[r]& \mathbb{V}(j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}}[-1]) }
\end{align} Recall that $j^{*}\mathbb{L}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)}$ is a two-term complex in degrees $-1$ and $0$, but in general this is not the case for $\mathbb{L}_{j}$ and $\mathbb{L}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}$. Comparing with Behrend-Fantechi, $ t_{0}(\mathbb{V}(\mathbb{L}_{\mathcal{M}}[-1]))$ is the intrinsic normal sheaf $N_{\mathcal{M}}$ (see \S \ref{sec:virtual-object-from}) and we have the following cartesian diagram
\begin{align}\label{diag:cone}
\xymatrix{\mathcal{M}\times^{h}_{\mathbb{V}(j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}}[-1])}
C_{\mathcal{M}} \ar[r] \ar[d]& C_{\mathcal{M}} \ar[d]\\ \mathbb{V}(\mathbb{L}_{j}[-1]) \ar[r] \ar[d]& \mathbb{V}(\mathbb{L}_{\mathcal{M}}[-1]) \ar[d] \\ \mathcal{M}
\ar[r]& \mathbb{V}(j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}}[-1]) }
\end{align}
\begin{prop}\label{prop:def,normal,sheaf}
Let $g,n \in\mathbb{N}$ and $\beta\in H_{2}(X,\mathbb{Z})$. Denote by $j$ the closed immersion $\overline{\mathcal{M}}_{g,n}(X,\beta)\to \mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)$ and by $s:\overline{\mathcal{M}}_{g,n}(X,\beta)\to \mathbb{V}(\mathbb{L}_{j}[-1])$ the zero section. We have the following equality in $G_{0}(\overline{\mathcal{M}}_{g,n}(X,\beta))$
\begin{align*}
\mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,\beta)}^{\vir,\DAG}:=j_{*}^{-1} \mathcal{O}_{\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,\beta)} = s_{*}^{-1}(\mathcal{O}_{\mathbb{V}(\mathbb{L}_{j}[-1])})
\end{align*} \end{prop}
\begin{proof} From Gaitsgory (see Proposition 2.3.6, p.~18, Chapter IV.5 of \cite{gaitsgory}), we can construct a derived stack $\mathcal{Y}_{scaled}$ such that both squares in the following diagram are homotopical fiber products \begin{align*}
\xymatrix{\mathbb{R}\mathcal{M} \ar[r]^-{h}& \mathcal{Y}_{scaled}& \ar[l]_-{v} \mathbb{V}(\mathbb{L}_{j}[-1])\\ \mathcal{M}\times \{0\}\ar[r]^-{i_{0}} \ar[u]^{j} & \mathcal{M}\times \mathbb{A}^{1} \ar[u]_{\sigma}&\ar[l]_-{i_{1}} \mathcal{M}\times \{1\} \ar[u]^{s}} \end{align*}
We have \begin{align*}
(s_{*})^{-1}\mathcal{O}_{\mathbb{V}(\mathbb{L}_{j}[-1])}& = (s_{*})^{-1}v^{*}\mathcal{O}_{\mathcal{Y}_{scaled}}\\ &=i_{1}^{*}(\sigma_{*})^{-1}\mathcal{O}_{\mathcal{Y}_{scaled}}\\ &=i_{0}^{*}(\sigma_{*})^{-1}\mathcal{O}_{\mathcal{Y}_{scaled}} \end{align*} The last equality follows from the $\mathbb{A}^{1}$-invariance of $G$-theory. That is, the pullback $\pi^{*}:G_{0}(\mathcal{M}) \to G_{0}(\mathcal{M}\times \mathbb{A}^{1})$ along the projection $\pi$ is an isomorphism and $i_{0}^{*}=(\pi^{*})^{-1}=i_{1}^{*}$. Applying the same computation as above to the other homotopical fiber product, we get the formula. \end{proof}
\begin{remark}
This statement is a first step in proving Theorem \ref{thm,O,pot=Dag}. The last step is to prove
that the inclusion $C_{\mathcal{M}}\to N_{\mathcal{M}}$ induces an equality of the structure sheaf
in $G_{0}$-theory. \end{remark}
\begin{cor}\label{prop:degree,0} For stable maps of degree $0$, we have that \[ \mathcal{O}_{\overline{\mathcal{M}}_{g,n}(X,0)}^{\vir,\DAG}=\sum_{i}(-1)^{i}\wedge^{i}(TX\boxtimes R^{1}\pi_{*}\mathcal{O}_{\mathcal{C}}) \] \end{cor}
\begin{remark}
Notice that in the case $\beta=0$, we have $\overline{\mathcal{M}}_{g,n}(X,0) =\overline{\mathcal{M}}_{g,n}\times X$, which is smooth. Nevertheless, it has a derived enhancement, given by the derived mapping stack $\mathbb{R}\Map$, which has a retract given by the projection and the evaluation. For $\beta\ne 0$, this retract does not exist.
\end{remark}
\begin{proof}
For $\beta=0$, the smoothness of $\mathcal{M}$ implies that the intrinsic normal cone coincides with the intrinsic normal sheaf, that is, we have $C_{\mathcal{M}}=\mathbb{V}(\mathbb{L}_{\mathcal{M}}[-1])$ in the diagram \eqref{diag:cone}. The second difference is that $j:\mathcal{M}\to \mathbb{R}\mathcal{M}$ has a retract. This implies that $\mathbb{L}_{j}[-1]\simeq\mathbb{L}_{\mathcal{M}}[-1]\oplus j^{*}\mathbb{L}_{\mathbb{R}\mathcal{M}}$. Hence Proposition \ref{prop:def,normal,sheaf} implies that we need to compute $s_{*}^{-1}\mathcal{O}_{\mathbb{V}(\mathbb{L}_{j}[-1])}$, which by a standard computation equals $\sum_{i}(-1)^{i}\wedge^{i}(TX \boxtimes R^{1}\pi_{*}\mathcal{O}_{\mathcal{C}})$, where $\mathcal{C}$ is the universal curve over $\overline{\mathcal{M}}_{g,n}$.
\end{proof}
From the proof, we see that the RHS of the formula is the structure sheaf of $\mathbb{V}(\mathbb{L}_{j}[-1])$. In fact, we think that $\mathbb{R}\overline{\mathcal{M}}_{g,n}(X,0)$ is isomorphic to $\mathbb{V}(\mathbb{L}_{j}[-1])$. This should follow from a general argument that we will detail in the next section for the affine case.
\section{Alternative proof of Corollary \ref{prop:degree,0} in the affine case.}
\begin{prop}\label{prop:qsmooth-dagvectbundle} Let $F=\Spec A$ be an affine quasi-smooth algebraic derived stack. Let $F_{0}=\Spec \pi_{0}(A)$ be its truncation and denote by $j:F_{0}\to F$ the closed immersion. Assume that $F_{0}$ is smooth and that $F$ admits a retract $r:F\to F_{0}$. Then $F=\mathbb{V}(\mathbb{L}_{j}[-1])$. \end{prop}
This proposition is a way of proving Corollary \ref{prop:degree,0} in the affine case without using the deformation argument of Gaitsgory. We believe that we can drop the affine assumption in the previous proposition.
Notice that we can drop the existence of the retract from the hypothesis, because when $F_{0}$ is smooth there always exists a retract (see Remark \ref{rmk:retract}).
\begin{lem}\label{lem:appen}
With the previous hypothesis, we have
\begin{align*}
\pi_{0}(\mathbb{L}_{j})=\pi_{1}(\mathbb{L}_{j})=0\\
\pi_{2}(\mathbb{L}_{j})=\pi_{1}(j^{*}\mathbb{L}_{F})=\pi_{2}(\mathbb{L}_{\pi_{0}(A)/\tau_{\leq
1}A}) = \pi_{1}(A)\\ \mathbb{L}_{j}[-1] \simeq \pi_{1}(A)[1]
\end{align*} \end{lem}
\begin{proof} We have the triangle \begin{displaymath}
j^{*}\mathbb{L}_{F}\to\mathbb{L}_{F_{0}} \to \mathbb{L}_{j}. \end{displaymath}
Applying the hypothesis, we get \begin{enumerate} \item As $F$ is quasi-smooth, we have that $\pi_{2}(j^{*}\mathbb{L}_{F})=0$. \item As $F_{0}$ is smooth, we have that $\pi_{2}(\mathbb{L}_{F_{0}})=\pi_{1}(\mathbb{L}_{F_{0}})=0$. \item As $j^{*}\mathbb{L}_{F}\to \mathbb{L}_{F_{0}}$ is a perfect obstruction theory, we deduce
$\pi_{0} (j^{*}\mathbb{L}_{F}) \simeq \pi_{0}(\mathbb{L}_{F_{0}})$ and $\pi_{1}
(j^{*}\mathbb{L}_{F}) \to \pi_{1} (\mathbb{L}_{F_{0}})$ is onto. \end{enumerate}
Applying the three properties above to the associated long exact sequence, we get the following diagram.
\begin{tikzpicture} \matrix[matrix of nodes,ampersand replacement=\&, column sep=0.5cm, row sep=0.5cm](m) {
$0$ \& $0 $ \& $\pi_{2} (\mathbb{L}_{j})$ \\
$\pi_{1} (j^{*}\mathbb{L}_{F})$ \& $0$ \& $0$ \\
$ \pi_{0} (j^{*}\mathbb{L}_{F})$ \& $\pi_{0}(\mathbb{L}_{F_{0}})$ \& $0$ \\
}; \draw[->] (m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge[out=0, in=180] (m-2-1)
(m-2-1) edge (m-2-2)
(m-2-2) edge (m-2-3)
(m-2-3) edge[out=0, in=180] (m-3-1)
(m-3-1) edge (m-3-2)
(m-3-2) edge (m-3-3); \end{tikzpicture}
We conclude that \begin{enumerate} \item $ \pi_{2}(\mathbb{L}_{j})=\pi_{1}(j^{*}\mathbb{L}_{F})$ \item $\mathbb{L}_{j}$ is $2$-connective. \end{enumerate}
To prove the second equality of the lemma, we use the Postnikov tower that is we consider the closed immersion $j_{1}:F_{0}\to F_{1} $ and $j_{2}:F_{1}\to F$ where $F_{1}$ is $\Spec \tau_{\leq 1} A$. We deduce the exact triangle \begin{align*}
j_{1}^{*}\mathbb{L}_{j_{2}} \to \mathbb{L}_{j} \to \mathbb{L}_{j_{1}} \end{align*}
As $j$ and $j_{1}$ are $1$-connected and $j_{2}$ is $2$-connected, we deduce from connectivity estimates that $\mathbb{L}_{j}$ and $\mathbb{L}_{j_{1}}$ are $2$-connective and $\mathbb{L}_{j_{2}}$ is $3$-connective (see Corollary 5.5 in \cite{2013arXiv1310.3573P}). We deduce from the long exact sequence that $\pi_{2}(\mathbb{L}_{j})=\pi_{2}(\mathbb{L}_{j_{1}})$. Now we apply Lemma 2.2.2.8 in \cite{Toen-Vezzosi-2008-HAGII}, which implies that $\pi_{2}(\mathbb{L}_{j_{1}})=\pi_{1}(A)$.
As we have that $\pi_{k}(\mathbb{L}_{j})=0$ for all $k\neq 2$ and $\pi_{2}(\mathbb{L}_{j})=\pi_{1}(A)$, we deduce that $\mathbb{L}_{j}[-1]\simeq \pi_{1}(A)[1]$. \end{proof}
\begin{proof}[Proof of Proposition \ref{prop:qsmooth-dagvectbundle}] To prove the proposition, we will show that \begin{align}\label{eq:25}
B:=\Sym_{\pi_{0}(A)} (\pi_{1}(A)[1]) \simeq A \end{align} First, we will construct a morphism $f:B\to A$. Notice that $\pi_{1}(A)$ is a free $\pi_{0}(A)$-module by the last statement of Lemma \ref{lem:appen}. Then we get an inclusion $\pi_{1}(A)[1]\to A$ of $\pi_{0}(A)$-modules, which induces $f:B\to A$. Moreover $f$ is an equivalence on $\pi_{0}$ and $\pi_{1}$, that is, $\tau_{\leq 1}B\simeq \tau_{\leq 1}A$.
Then we construct an inverse morphism $A\to B$ using the Postnikov tower. We have $\varphi:A\to\tau_{\leq 1}A\simeq \tau_{\leq 1}B$. As $B$ is the limit of its Postnikov tower, we will proceed by induction on the Postnikov tower. First, we want to lift the morphism $\varphi:A \to \tau_{\leq 1}B$ to $A\to \tau_{\leq 2}B$. We use the following cartesian diagram (see Remark 4.3 in \cite{2013arXiv1310.3573P}) \begin{align}\label{eq:26}
\xymatrix{\tau_{\leq 2}B \ar[r] \ar[d]& \tau_{\leq 1}B\ar[d]^{d} \\ \tau_{\leq 1} B\ar[r]^-{\Id,0} &\tau_{\leq 1}B \oplus \pi_{2}(B)[3]} \end{align}
Hence, we need to construct a commutative diagram \begin{align}\label{eq:27}
\xymatrix{A \ar[r]^{\varphi} \ar[d]^{\varphi}& \tau_{\leq 1}B\ar[d]^{d} \\ \tau_{\leq 1} B\ar[r]^-{\Id,0} &\tau_{\leq 1}B \oplus \pi_{2}(B)[3]} \end{align} As $\mathbb{L}_{A}$ has a tor-amplitude in $[-1,0]$, we have that \begin{align*}
\pi_{0}(\Map(\mathbb{L}_{A},\pi_{2}(B)[3]))=0\\ \pi_{1}(\Map(\mathbb{L}_{A},\pi_{2}(B)[3]))=0 \end{align*} Hence we deduce a morphism $\psi:A\to A\oplus_{d\circ \varphi}\pi_{2}(B)[3]$, and therefore a morphism $A\to \tau_{\leq 2}B$.
\begin{align*}
\xymatrix{ A \ar@/^/[rrrrd] \ar@/_/[ddddr] \ar@{.>}[rd]^-{\psi}&&&&&\\ & A\oplus_{d\circ \varphi}\pi_{2}(B)[3] \ar[ddd]\ar[rrr] \ar[rd]&&& A \ar[ddd]^{d\circ \varphi} \ar[ld]^-{\varphi}& \\ & &B_{\tau_{\leq 2}}\ar[r] \ar[d]& B_{\tau{\leq 1}} \ar[d]^-{d} &&\\ &&B_{\tau_{\leq 1}} \ar[r]^-{0}&B_{\tau_{\leq 1}}\oplus \pi_{2}(B)[3]&&\\ &A\ar[rrr]^-{0}\ar[ru]^-{\varphi}&&&A\oplus \pi_{2}(B)[3] \ar[lu]^-{\varphi,\Id}&\\ } \end{align*}
Hence by induction, we get a morphism $g:A\to B$. The composition $g\circ f: B\to A\to B$ is the identity on $\pi_{1}(B)$ and, by the universal property of $\Sym$, we deduce that $g\circ f=\Id_{B}$. This implies that $\pi_{i}(B)=\wedge^{i}\pi_{1}(A)\to \pi_{i}(A)$ is injective. To finish the proof, we will prove that these morphisms are surjective.
For this purpose we use another characterization of affine quasi-smooth derived schemes. Let us fix generators of $\pi_0(A)$. This choice determines a surjective map of commutative $k$-algebras $k[x_1,.., x_n]\to \pi_0(A)$. As the polynomial ring is smooth, we proceed by induction on the Postnikov tower of $A$ to construct a morphism $k[x_{1}, \ldots ,x_{n}]\to \tau_{\leq n}A$. We use the same idea as above for constructing the morphism $A\to B$. We get a map of cdga's $k[x_1,.., x_n]\to A$ which remains a closed immersion. Moreover, one can now choose generators for the kernel $I$ of $k[x_{1}, \ldots ,x_{n}]\to \pi_0(A)$, say $f_1,.., f_m$, whose images in $I/I^2$ form a basis. The fact that $k[y_1,.., y_m]$ is smooth allows us to extend the zero composition map $$ k[y_1,..., y_m]\to k[x_1,.., x_n]\to \pi_0(A) $$ to a map $$ k[y_1,..., y_m]\to k[x_1,.., x_n]\to A $$ together with a null-homotopy. This puts $A$ in a commutative square of cdga's \begin{displaymath}
\xymatrix{k[y_{1}, \ldots ,y_{m}]\ar[r] \ar[d] & k[x_{1}, \ldots ,x_{n}] \ar[d] \\ k \ar[r]& A} \end{displaymath}
which is a pushout square. Indeed, it suffices to show that the canonical map $$ k\otimes^{\mathbb{L}}_{k[y_1,.., y_m]}k[x_1,..., x_n]\to A $$ induces an isomorphism between the cotangent complexes. But as $\Spec(A)$ is quasi-smooth, its cotangent complex is perfect of tor-amplitude $[-1, 0]$, meaning that it can be written as $$ {A}^m\to {A}^n $$ and this identifies with the standard description of the cotangent complex of the derived tensor product $k\otimes^{\mathbb{L}}_{k[y_1,.., y_m]}k[x_1,..., x_n]$. This implies the surjectivity of the morphisms $\pi_{i}(B)\to \pi_{i}(A)$. \end{proof}
\begin{remark}\label{rmk:retract}
As $F=\Spec A$ is a derived scheme (not necessarily quasi-smooth) and its truncation $F_0$ is smooth, we have that $F_{0}\to F$ admits a retract. We proceed by induction on the Postnikov tower of $A$ to construct a lift $$ \xymatrix{ & A\ar[d]\\ \pi_0(A)\ar[r]^{\Id}\ar@{->}[ur]&\pi_0(A) } $$ We use the same kind of diagrams as \eqref{eq:26} and \eqref{eq:27}. Indeed, as $\mathbb{L}_{F_0}$ is concentrated in degree 0, all the groups $$ \pi_0(\Map(\mathbb{L}_{F_0}, \pi_n(A)[n+1]))=\pi_1(\Map(\mathbb{L}_{F_0}, \pi_n(A)[n+1]))=0 $$ vanish for $n\geq 1$, saying that the liftings exist at each level of the Postnikov tower and that the space of choices of such liftings is connected.
\end{document}
\begin{document}
\title{Minors of Hermitian (quasi-) Laplacian matrix of a mixed graph} \author{ Deepak Sarma \\ Department of Mathematical Sciences,\\ Tezpur University, Tezpur-784028, India.\\ E-mail: \url{[email protected]} } \date{}
\pagestyle{myheadings} \markboth{Deepak Sarma} {Minors of Hermitian (quasi-)Laplacian matrix of a mixed graph}
\maketitle
\vskip 5mm \noindent{\footnotesize
\begin{abstract} A mixed graph is obtained from an unoriented graph by orienting a subset of its edges. In 2017, Yu, Liu and Qu established an expression for the determinant of the Hermitian (quasi-) Laplacian matrix of a mixed graph. Here we find general expressions for all minors of the Hermitian (quasi-) Laplacian matrices of mixed graphs.
\vskip 3mm
\noindent{\footnotesize Key Words: Mixed Graphs; Hermitian Adjacency Matrix; Hermitian (quasi-) Laplacian matrix }
\vskip 3mm
\noindent {\footnotesize AMS subject classification: 05C20, 05C50, 05B20.} \end{abstract}
\section{{Introduction}}\label{secone}
\hspace{.62cm}Throughout this article, all graphs are finite and simple, i.e., without loops and multiple edges. However, an unoriented edge will be regarded as equivalent to two parallel oriented edges in opposite directions. A mixed graph is obtained from an unoriented graph by orienting a subset of its edges. If there is an oriented edge from vertex $u$ to vertex $v$ in a graph $G(V,E)$, then we write it as $\overrightarrow{uv} \in E$ or $\overleftarrow{vu} \in E$, and we say that $u$ is the head and $v$ is the tail of the edge. If there is an unoriented edge connecting vertices $u$ and $v$ in a graph $G(V,E)$, then we write it as $\overline{uv}\in E.$
The Hermitian adjacency matrix of a mixed graph was introduced by Liu and Li \cite{ll16} and independently by Guo and Mohar \cite{gm16}. Let $G$ be a mixed graph with vertex set $V(G)=\{v_{1}, v_{2}, \ldots , v_{n}\}$ and edge set $E(G)=\{e_{1}, e_{2}, \ldots , e_{m}\}$. The Hermitian adjacency matrix of the mixed graph $G$ is the $n \times n$ matrix $H(G)=(h_{uv})$, where
\[ h_{uv} = \left\{ \begin{array}{ll}
i, & \hbox{if $\overrightarrow{uv} \in E$;} \\
-i, & \hbox{if $\overrightarrow{vu} \in E$;} \\
1, & \hbox{if $\overline{uv} \in E$;} \\
0, & \hbox{otherwise.}
\end{array} \right. \]
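For instance, consider the mixed graph $G$ on vertex set $\{v_{1},v_{2},v_{3}\}$ with the oriented edge $\overrightarrow{v_{1}v_{2}}$ and the unoriented edge $\overline{v_{2}v_{3}}$ (a small illustrative example). The definition above gives
$$H(G)=\left(
\begin{array}{ccc}
0 & i & 0 \\
-i & 0 & 1 \\
0 & 1 & 0 \\
\end{array}
\right).$$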
If $V_1\subseteq V(G)$ and $E_1\subseteq E(G),$ then by $G-V_1$ and $G-E_1$ we mean the graphs obtained from $G$ by deleting the vertices in $V_1$ and the edges in $E_1$ respectively. In the particular case when $V_1=\{u\}$ or $E_1=\{e\},$ we simply write $G-u$ for $G-V_1$ and $G-e$ for $G-E_1$ respectively. If $G$ is an unoriented graph with $u,v\in V(G),$ then the distance between $u$ and $v,$ denoted by $d_{uv},$ is the length of a shortest path connecting them in $G.$
For mixed graphs, Yu and Qu introduced the Hermitian Laplacian matrix \cite{yq15} and the Hermitian quasi-Laplacian matrix \cite{ylq17}, which are denoted and defined as $L_{H}(G)=D(G)-H(G)$ and $Q_{H}(G)=D(G)+H(G)$ respectively, where $D(G)=diag\{d_{v_1}, d_{v_2}, \ldots ,d_{v_n}\}$ is the diagonal matrix of vertex degrees in the underlying graph. When the graph $G$ is clear from the context we will simply write $L_H$ and $Q_H$ instead of $L_{H}(G)$ and $Q_{H}(G)$ respectively. The authors of \cite{yq15} and \cite{ylq17} have shown that both the Hermitian Laplacian and the Hermitian quasi-Laplacian matrix of a mixed graph are positive semi-definite. They have also established expressions for the determinants of both matrices and gave necessary and sufficient conditions for the singularity of the two matrices for any mixed graph. \par The matrix-tree theorem says that the cofactor of any element of the Laplacian matrix of an unoriented graph equals the number of spanning trees of the graph. In this article we establish formulae for various minors of the Hermitian Laplacian and Hermitian quasi-Laplacian matrices of mixed graphs, which generalize the matrix-tree theorem for unoriented graphs. We have organized the paper as in \cite{bap99}. In section 2, we consider the Hermitian incidence matrix and the Hermitian quasi-incidence matrix of a mixed graph and find their determinants for rootless mixed trees and mixed cycles. We also find expressions for various principal minors of the Hermitian Laplacian and Hermitian quasi-Laplacian matrices of mixed graphs. In section 3, we consider non-principal submatrices of $Q_H(G)$ and $L_H(G)$ and find their determinants.
\section{Principal Minors} The authors of \cite{ylq17} introduced two classes of incidence matrices of mixed graphs. Here we consider particular cases of those matrices and call them the Hermitian incidence matrix and the Hermitian quasi-incidence matrix of a mixed graph.
We define the Hermitian quasi-incidence matrix of $G$ as the $n\times m$ matrix $S(G)=(s_{ue})$ whose entries are given by
\[ s_{ue} = \left\{ \begin{array}{ll}
1, & \hbox{if $e$ is an unoriented link incident to $u$ or}\\
& \hbox{if e is an oriented edge with head $u$;} \\
-i, & \hbox{if e is an oriented edge with tail u;} \\
0, & \hbox{otherwise.}
\end{array} \right. \]
Now to define Hermitian incidence matrix of a mixed graph $G$ we form a new graph $G'$ by assigning arbitrary orientations to unoriented edges in $G$, we call these edges as new oriented edges in $G'$.
Hermitian incidence matrix of $G$ as the $n\times m$ matrix $T(G)=(t_{ue})$ whose entries are given by
\[ t_{ue} = \left\{ \begin{array}{ll}
1, & \hbox{if e is an oriented edge in $G$ or new oriented edge in $G'$ with head $u$;} \\
-1, & \hbox{if e is a new oriented edge in $G'$ with tail $u$;} \\
i, & \hbox{if e is an oriented edge in $G$ with tail u;} \\
0, & \hbox{otherwise.}
\end{array} \right. \]
When there is no confusion of the graph $G$, we will simply write $S$ in place of $S(G)$ and $T$ in place of $T(G).$
\begin{theorem}\label{t1} \cite{ylq17} For any mixed graph $G,$ $SS^{*}=Q_{H}$ and $TT^{*}=L_{H}$ \end{theorem}
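To illustrate \cref{t1}, consider again the three-vertex example given after the definition of $H(G)$, and label its edges $e_{1}=\overrightarrow{v_{1}v_{2}}$ and $e_{2}=\overline{v_{2}v_{3}}$, so that $v_{1}$ is the head and $v_{2}$ is the tail of $e_{1}$. Then
$$S=\left(
\begin{array}{cc}
1 & 0 \\
-i & 1 \\
0 & 1 \\
\end{array}
\right), \qquad SS^{*}=\left(
\begin{array}{ccc}
1 & i & 0 \\
-i & 2 & 1 \\
0 & 1 & 1 \\
\end{array}
\right)=D(G)+H(G)=Q_{H}(G).$$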
By substructure of a mixed graph we mean an object formed by a subset of the vertex set of the graph together with a subset of the edge set of the graph. If $G(V,E)$ is a mixed graph and $X \subseteq V$, $Y \subseteq E$, then $R=(X,Y)$ forms a substructure of $G$ and by $S(R)$ and $T(R),$ we denote the corresponding submatrix of quasi-incidence matrix and incidence matrix respectively. A substructure $R$ of a mixed graph $G$ with equal number of vertices and edges will be called a square substructure. By rootless tree we mean a substructure of a mixed tree obtained by deleting a vertex of the tree. We call the missing vertex as root of the rootless tree and any edge adjacent to the root will be called as rootless edge. If $T$ is a mixed tree with $v\in V(T)$ and $T_v$ is the rootless tree with root $v$ obtained from $T$, then an edge $e$ in $T_v$ will be treated to be away (towards) the root if $e=\overrightarrow{uw}$ in $T$ and in the underlying tree of $T$ we have $d_{vw}=d_{vu}+1~(d_{vu}=d_{vw}+1). $
\begin{lemma}\label{l1} If the substructure $T_{\diamond}$ of $G$ is a rootless mixed tree, then $det(S(T_{\diamond}))=(-i)^{\alpha},$ where $\alpha$ is the number of directed edges in $T_{\diamond}$ away from the root. \end{lemma}
{\it \bf Proof. } First we consider $T_{\diamond}$ to be a rootless mixed path with $n$ vertices and $n$ edges. Without loss of generality we can assume that the edge $e_{n}$ is the rootless edge with one end $v_{n}$, and that each other edge $e_{k}\, (k=1,2,\ldots,n-1)$ is incident with $v_{k}$ and $v_{k+1}$. Then up to permutation similarity the Hermitian quasi-incidence matrix of $T_{\diamond}$ takes the form $$S(T_{\diamond})=\left(
\begin{array}{ccccccc}
* & 0 & 0 & \cdots & 0 & 0 & 0 \\
* & * & 0 & \cdots & 0 & 0 & 0 \\
0 & * & * & \cdots & 0 & 0 & 0 \\
0 & 0 & * & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & * & 0 & 0 \\
0 & 0 & 0 & \cdots & * & * & 0 \\
0 & 0 & 0 & \cdots & 0 & * & * \\
\end{array}
\right)$$ Here each $*$ is either $1$ or $-i$. On the diagonal, the number of entries equal to $-i$ is the number of edges away from the root in the path, and so the result follows.
Now we consider $T_{\diamond}$ to be a rootless tree with a pendant edge $e_{n}$. Let $S(n)$ be the principal submatrix of $S(T_{\diamond})$ which results after deleting the $n^{th}$ row and the $n^{th}$ column. Then $S(n)$ represents the Hermitian quasi-incidence matrix of the rootless tree which results from $T_{\diamond}$ after removing $e_{n}$ and the pendant vertex incident to it. So $det(S(T_{\diamond}))=c\cdot det(S(n))$, where $c=-i$ if $e_{n}$ is away from the root and $1$ otherwise. Again applying the same logic and proceeding, ultimately we arrive at some rootless paths, and hence we obtain our required result.
\qed
Similar to the above result we can get the following lemma.
\begin{lemma}\label{l2} If the substructure $T_{\diamond}$ of $G$ is a rootless mixed tree, then $$det(T(T_{\diamond}))=(-1)^ci^{\alpha},$$ where $\alpha$ is the number of directed edges away from the root and c is the number of unoriented edges in $T_{\diamond}.$ \end{lemma}
For a mixed cycle any one direction will be considered as clockwise direction and the opposite direction will be treated as the anticlockwise direction. Now we define five classes of mixed cycles namely type I, type II, type III, type IV and type V so that $\{ type I, type II, type III \}$ and $\{ type I, type IV, type V \}$ forms two partitions of the class of all mixed cycles. Throughout this section we reserve the symbols $a(C)$, $b(C)$ and $c(C)$ respectively for the number of directed edges in clockwise direction, the number of directed edges in anticlockwise direction and the number of unoriented edges in the mixed cycle $C.$ Also when the cycle is understood from the context we would prefer to write them simply as $a,b, c.$
\begin{definition}A mixed cycle will be called a cycle of type I if total number of directed edges is odd. \end{definition}
\begin{definition}A mixed cycle will be called a cycle of type II if $\frac{|a-b|}{2}+c$ is odd. \end{definition}
\begin{definition}A mixed cycle will be called a cycle of type III if $\frac{|a-b|}{2}+c$ is even. \end{definition}
\begin{definition}A mixed cycle will be called a cycle of type IV if $\frac{|a-b|}{2}$ is odd. \end{definition}
\begin{definition}A mixed cycle will be called a cycle of type V if $\frac{|a-b|}{2}$ is even. \end{definition}
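As a quick illustration of these definitions, consider two small examples: a directed triangle with all three edges oriented in the clockwise direction has $a=3$, $b=0$, $c=0$, so the total number of directed edges is odd and the cycle is of type I; an unoriented cycle on four vertices has $a=b=0$ and $c=4$, so $\frac{|a-b|}{2}+c=4$ and $\frac{|a-b|}{2}=0$ are both even, and the cycle is of type III and of type V.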
From the above definitions we can observe the following results for the nature of a mixed cycle when directions of some of its edges are changed.
\begin{theorem}If $C_n$ is a mixed cycle of type III and $C_{n}^{k}$ be the mixed cycle obtained by reverting the directions of k directed edges of $C_n$, then $C_{n}^{k}$ is of type III(type II) if and only if $k$ is even(odd). \end{theorem}
\begin{theorem} If $C_n$ is a mixed cycle of type V and $C_{n}^{k}$ be the mixed cycle obtained by reverting the directions of k directed edges of $C_n$, then $C_{n}^{k}$ is of type V(type IV) if and only if $k$ is even(odd). \end{theorem}
\begin{lemma}\label{l3} If $C_n$ is a mixed cycle, then $|det(S(C))|=\sqrt2$ or $2$ or $0$ according as $C$ is of type I or type II or type III respectively
\end{lemma}
{\it \bf Proof. } Up to permutation similarity, we observe that the nonzero entries of $S(C_{n})$ occur precisely at positions $(k, k), (k+1, k),$ for $k= 1, 2, \ldots, n-1,$ and at $(1, n)$ and $(n, n).$
$$i.e., \quad S(C_n)=\left(
\begin{array}{ccccccc}
* & 0 & 0 & \cdots & 0 & 0 & * \\
* & * & 0 & \cdots & 0 & 0 & 0 \\
0 & * & * & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & * & 0 & 0 \\
0 & 0 & 0 & \cdots & * & * & 0 \\
0 & 0 & 0 & \cdots & 0 & * & * \\
\end{array}
\right)$$
Let $a, b$ respectively denote the number of directed edges in the clockwise and anticlockwise direction and $c$ denote the number of unoriented edges in the mixed cycle. Then expanding along the first row, we see that \beq det(S(C_{n}))&=(-i)^{b}+(-1)^{n-1}(-i)^{a}\\
\Rightarrow |det(S(C_{n}))|&=|i^{a-b}+(-1)^{n-1}|. \eeq If $C_{n}$ is of type I, then $a+b$ is odd and so is $a-b.$ Therefore \beq i^{a-b}&=\pm i \\ \Rightarrow i^{a-b}+(-1)^{n-1}&=\pm 1\pm i. \eeq
\par Hence $|det(S(C_{n})|= \sqrt{2} .$ \\ \\
Again if $C_{n}$ is of type II, then $\frac{|a-b|}{2}+c$ is odd. \\ \par Thus $i^{a-b}=(-1)^{c+1}$ and $n$ is even or odd according as $c$ is even or odd. Therefore we get \\
\beq |det(S(C_{n}))|&=|i^{a-b}+(-1)^{c-1}| \\ &= 2. \eeq
Finally if $C_{n}$ is of type III, then $\frac{|a-b|}{2}+c$ is even. Thus $n$ and $c$ are either both even or both odd and $i^{a-b}=i^{-2c}.$ Hence
\beq |det(S(C_{n}))|=|i^{a-b}+(-1)^{n-1}|
=|i^{-2c}+(-1)^{c-1}|=0. \eeq
\qed
\begin{lemma}\label{l4}
If $C_n$ is a mixed cycle, then $|det(T(C))|=\sqrt2$ or $2$ or $0$ according as $C$ is of type I or type IV or type V respectively. \end{lemma}
{\it \bf Proof. } Here we observe that if we reverse the direction of any new oriented edge, then the corresponding determinant changes only in sign, and thus its absolute value remains the same. Therefore without loss of generality we consider every new oriented edge to be in the forward direction. Then, as in \cref{l3}, expanding along the top row, we get
$$|det(T(C_{n}))|=|i^{b}+(-1)^{n-1}i^{a}(-1)^c|=|i^{b-a}+(-1)^{n+c-1}|.$$ If the cycle is of type I, we can get our result as in \cref{l3}. \\
Now if the cycle is not of type I, then $n-c$ is even and therefore $n+c-1$ is odd. Thus we see that\\
$$ |det(T(C_{n}))|=|i^{b-a}-1|.$$ Again if the cycle is of type IV or V then proceeding as in the proof of \cref{l3}, we are done. \qed
By unicyclic graph of type I/II/III/IV/V, we mean a unicyclic mixed graph where the corresponding cycle is of type I/II/III/IV/V.
\begin{definition} A square substructure $R$ of a mixed graph will be called a special square substructure (in short SSS) if the only possible components of $R$ are rootless trees and/or unicyclic graphs not of type III. \end{definition}
\begin{lemma}\label{l5} If $R$ is a square substructure of a mixed graph, then
\[ |det(S(R))|= \left\{
\begin{array}{ll}
(\sqrt2)^{x+2y}, & \hbox{if R is an SSS;} \\
0, & \hbox{otherwise.}
\end{array} \right. \] where $x$ and $y$ denote the number of components of the SSS which are unicyclic graphs of type I and of type II respectively. \end{lemma}
{\it \bf Proof. } If R is not an SSS, then it can be observed that every term in the Laplace expansion of det(S(R)) is zero. \par If R is an SSS, then S(R) is permutationally similar to a block diagonal matrix where each diagonal block corresponds to a component of R. Therefore absolute value of det(S(R)) is the product of the absolute values of the determinants of those diagonal blocks. Also from \cref{l1}, we can see that absolute value of the determinant of quasi-incidence matrix of a rootless tree is always 1. Thus if there are $x$ unicyclic graphs of type I and $y$ unicyclic graphs of type II as a component of R then from \cref{l1} and \cref{l3} we get our required result. \qed
\begin{definition} A square substructure $R$ of a mixed graph will be called a super special square substructure (in short SSSS) if the only possible components of $R$ are rootless trees and/or unicyclic graphs not of type V. \end{definition}
\begin{lemma}\label{l6} If $R$ is a square substructure of a mixed graph, then
\[ |det(T(R))|= \left\{
\begin{array}{ll}
(\sqrt2)^{p+2q}, & \hbox{if R is an SSSS;} \\
0, & \hbox{otherwise.}
\end{array} \right. \] where $p$ and $q$ denote the number of components of the SSSS which are unicyclic graphs of type I and of type IV respectively. \end{lemma}
\par If $A$ is any $n \times m$ matrix and $B\subseteq \{1, \ldots, n \}$, $C\subseteq \{1, \ldots, m \}$, then $A[B,C]$ will denote the submatrix of $A$ formed by the rows corresponding to $B$ and the columns corresponding to $C$. Also $A(B,C)$ will denote the submatrix of $A$ formed by deleting the rows and columns corresponding to $B$ and $C$ respectively. In short, $A[B,B]$ will be denoted by $A[B]$ and $A(B,B)$ by $A(B)$.
\begin{theorem}\label{t3} Let $G(V,E)$ be a mixed graph and $W \subseteq V$, then\\ \par det$(Q_{H}[W])=\displaystyle\sum_{R}2^{x+2y}$, where the summation runs over all SSS $R$ with $V(R)=W$ and $x, y$ denote the number of components of R which are unicyclic graphs of type I and of type II respectively. \end{theorem}
{\it \bf Proof. } From \cref{t1} we observe that $Q_{H}[W]=S[W,E](S[W,E])^{*}$. Therefore, by the Cauchy-Binet theorem, det$(Q_{H}[W])$ is the sum of the squared moduli of the determinants of the submatrices $S[W, Z]$, where $Z \subseteq E$ with $|W| = |Z|$. Here the zero terms may be ignored, and the nonzero terms correspond precisely to the SSS $R$ with $V(R)=W$. From \cref{l5} we see that every such SSS $R$ contributes $((\sqrt2)^{x+2y})^{2}=2^{x+2y}$ to det$(Q_{H}[W])$. Hence the result follows. \qed
\begin{theorem}\label{lap} Let $G(V,E)$ be a mixed graph and $W \subseteq V$, then\\ \par det$(L_{H}[W])=\displaystyle\sum_{R}2^{p+2q}$, where the summation runs over all SSSS $R$ with $V(R)=W$ and $p, q$ denote the number of components of R which are unicyclic graphs of type I and of type IV respectively. \end{theorem}
\begin{definition} A mixed graph $G(V,E)$ is said to be quapartite if its vertex set $V$ can be partitioned into four sets $V_{1}$, $V_{2}$, $V_{3}$ and $V_{4}$ so that unoriented edges are only between $V_{1}$ and $V_{3}$ or between $V_{2}$ and $V_{4}$, and oriented edges are only from $V_{1}$ to $V_{2}$ or $V_{2}$ to $V_{3}$ or $V_{3}$ to $V_{4}$ or $V_{4}$ to $V_{1}$. \end{definition} \par The above definition generalizes the concept of a bipartite graph from unoriented graphs to mixed graphs. By close observation of the structure of a quapartite graph, we can easily obtain the following results, which extend some well-known results on unoriented graphs to mixed graphs.
\par We know that an unoriented graph is bipartite if and only if all its cycles (if any) are even cycles. We now generalize this concept to mixed graphs and provide a purely combinatorial proof of it. If $P=v_0-v_1-\cdots -v_k$ is a mixed path, then an edge $\overrightarrow{v_iv_j}\in E(P)$ will be termed forward or backward in $P$ according as $d_{v_0v_j}=d_{v_0v_i}+1$ or $d_{v_0v_j}=d_{v_0v_i}-1$ in the underlying graph of $P$. Now we define the weight of a mixed walk as follows.
\begin{definition} If $P$ is a walk in $G$, we define its weight by
$\omega(P)=|\displaystyle\sum_{e\in E(P)}f(e)|,$ where \[ f(e) = \left\{ \begin{array}{ll}
1, & \hbox{if $e$ is forward in P;} \\
-1, & \hbox{if $e$ is backward in P;} \\
2, & \hbox{if $e$ is unoriented in P;}
\end{array} \right. \] \end{definition}
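For example, if $P$ is the walk from $v_{0}$ to $v_{2}$ consisting of the oriented edge $\overrightarrow{v_{0}v_{1}}$ followed by the unoriented edge $\overline{v_{1}v_{2}}$, then the first edge is forward in $P$ and the definition gives $\omega(P)=|1+2|=3$.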
\begin{lemma}\label{len} If $G$ is a quapartite mixed graph and $u,v$ are in the same vertex partition of $V(G)$, then the weight of any walk connecting $u$ and $v$ is an integral multiple of $4$.
\end{lemma}
{\it \bf Proof. } If we consider a walk $W$ in a graph $G$ such that, apart from the initial and final vertices of $W$, all other vertices belong to pairwise different parts of the partition of $V(G)$, then $W$ has length at most 4 and it can easily be observed that $\omega(W)=0 \text{ or }4.$
Now suppose $P=(u=w_1, w_2, \ldots, w_i, \ldots, w_j, w_{j+1}, \ldots, w_k=v)$ is a $uv$ walk such that $w_i$ and $w_j$ belong to the same part of the partition of $V(G)$, with each $w_s$, for $s=i+1, \ldots, j-1,$ belonging to a different part. Then by the above argument the weight of the $w_iw_j$ walk is a multiple of 4. Considering all such possibilities and using the above argument, we get the required result.
\qed
\begin{theorem} \label{quaIII} A mixed graph is quapartite if and only if all its cycles are of type III. \end{theorem} {\it \bf Proof. } If $G$ is quapartite and $C$ is any mixed cycle in $G$, then considering $C$ as a walk connecting a vertex $u\in V(C)$ to itself, by \cref{len} we get $\omega(C)\equiv 0 \pmod 4$. Again, if $a$, $b$ and $c$ are the numbers of forward, backward and unoriented edges in $C$, then we get $\omega(C)=|a-b+2c|.$ Thus $a-b+2c\equiv 0 \pmod 4$ and hence $C$ is of type III.
\par Conversely, suppose every cycle in $G$ is of type III. For any walk $P$ in $G$ we reserve $x$, $y$ and $z$ respectively for the number of forward, backward and unoriented edges in $P$. Let us call a walk an $ij$ walk if $i$ and $j$ are its terminal vertices. Now for any fixed $v\in V(G)$ we consider \par
$V_{1}=\{u\in V:\frac{|x-y|}{2}+z$ \textit{is even for some uv walk}$\}$
\par $V_{2}=\{u\in V:\frac{|x-y-1|}{2}+z$ \textit{is even for some uv walk}$\}$
\par $V_{3}=\{u\in V:\frac{|x-y|}{2}+z$ \textit{is odd for some uv walk}$\}$
\par $V_{4}=\{u\in V:\frac{|x-y+1|}{2}+z$ \textit{is even for some uv walk}$\}$. \\
\par First we show that $V_{1}$, $V_{2}$, $V_{3}$ and $V_{4}$ actually form a partition of $V(G)$. \\ Suppose, if possible, $u\in V_{1}\cap V_{2}$. Then for $u\in V_{1}$ there exists a $uv$ walk $P_{1}$ with $x_{1}$, $y_{1}$ and $z_{1}$ as the numbers of forward, backward and unoriented edges such that \par
\beq &\frac{|x_{1}-y_{1}|}{2}+z_{1} \text{ is even. }\\ i.e. \quad & i^{x_{1}-y_{1}}=(-1)^{z_{1}}.\eeq Again for $u\in V_{2}$ there exists a $uv$ walk $P_{2}$ with $x_{2}$, $y_{2}$ and $z_{2}$ as number of forward, backward and unoriented edges so that
\beq &\frac{|x_{2}-y_{2}-1|}{2}+z_{2} \text{ is even. }\\ i.e. \quad & i^{x_{2}-y_{2}-1}=(-1)^{z_{2}}. \eeq
But $P_{1}$ and $P_{2}$ together form a cycle $C$(say). Then for this cycle $a=x_{1}+y_{2}$, $b=y_{1}+x_{2}$ and $c=z_{1}+z_{2}$. Therefore we get
\beq i^{a-b}&= (-1)^{z_{1}}i(-1)^{z_{2}+1}=i(-1)^{c+1} \\ \Rightarrow \hspace{.3cm} i^{a-b-1} &=(-1)^{c+1} \eeq
This implies that $a-b-1$ is even. Therefore $C$ must be a cycle of type I, a contradiction to our assumption. \\ Similarly we can show the disjointness of the other pairs among $V_{1}$, $V_{2}$, $V_{3}$ and $V_{4}$.
Again let $u\in V(G)$ be any vertex of $G$ and $P$ be some $uv$ walk in $G$. Then if $|x-y|$ is even then $u\in V_{1}$ or $u\in V_{3}$. Again if $|x-y|$ is odd, then $u\in V_{2}$ or $u\in V_{4}$. Therefore $V_{1}$, $V_{2}$, $V_{3}$ and $V_{4}$ forms a partition of $V(G)$.
\par Now we prove the quapartiteness property of $G$. For this we consider a fixed edge $uw$ of $G$. Without loss of generality we can take $v\in V(G)$ so that there is a $vu$ walk $P_{1}$ not containing $w.$ Let $P_{2}$ be the walk joining $v$ to $w$ which consists of $P_{1}$ and the edge $uw$. Moreover, let $x_{1}$, $y_{1}$, $z_{1}$ and $x_{2}$, $y_{2}$, $z_{2}$ in order denote the numbers of forward, backward and unoriented edges in the walks $P_{1}$ and $P_{2}$ respectively.
\par Let us first suppose that $uw$ is an unoriented edge. Then $x_{2}=x_{1}$, $y_{2}=y_{1}$ and $z_{2}=z_{1}+1$. If $u\in V_{1}$, then for $P_{1}$, $\frac{|x_{1}-y_{1}|}{2}+z_{1}$ is even. \\
Which implies that $\frac{|x_{2}-y_{2}|}{2}+z_{2}$ is odd. Therefore, considering $P_{2}$ we can conclude that $w\in V_{3}$. Similarly if we assume $u\in V_{3}$, then we get $w\in V_{1}$.
\par Now if $u\in V_{2}$, then for $P_{1}$, $\frac{|x_{1}-y_{1}-1|}{2}+z_{1}$ is even. \\
So, $\frac{|x_{2}-y_{2}+1|}{2}+z_{2}=\frac{|x_{1}-y_{1}-1|}{2}+z_{1}+2$ is even. \par Therefore, considering $P_{2}$ we can conclude that $w\in V_{4}$. \\ Similarly if we assume $u\in V_{4}$, then we get $w\in V_{2}$. \\ Thus we can conclude that unoriented edges are possible only between $V_{1}$ and $V_{3}$ or between $V_{2}$ and $V_{4}$. \par Next we suppose that $uw$ is an directed edge from vertex $u$ to vertex $w$, Then \par $x_{2}=x_{1}+1$, $y_{2}=y_{1}$ and $z_{2}=z_{1}$. \\ \\
\textbf{Case 1:} If $u\in V_{1}$, then for $P_{1}$, $\frac{|x_{1}-y_{1}|}{2}+z_{1}$ is even. \\
So, $\frac{|x_{2}-y_{2}-1|}{2}+z_{2}=\frac{|x_{1}-y_{1}|}{2}+z_{1}$ is even. Therefore $ w\in V_{2}$ \\ \\
\textbf{Case 2:} If $u\in V_{2}$, then for $P_{1}$, $\frac{|x_{1}-y_{1}-1|}{2}+z_{1}$ is even.
Therefore, $\frac{|x_{2}-y_{2}|}{2}+z_{2}=\frac{|x_{1}-y_{1}-1|}{2}+z_{1}+1$, which is odd. \par Thus considering $P_{2}$ we can conclude that $w\in V_{3}$. \\ \\ \textbf{Case 3:} If $u\in V_{3}$, then for $P_{1}$,
$\frac{|x_{1}-y_{1}|}{2}+z_{1}$ is odd.
Therefore $\frac{|x_{2}-y_{2}+1|}{2}+z_{2}=\frac{|x_{1}-y_{1}|}{2}+z_{1}+1$ is even. \par Thus considering $P_{2}$ we get $w\in V_{4}$. \\ \\
\textbf{Case 4:} If $u\in V_{4}$, then for $P_{1}$, $\frac{|x_{1}-y_{1}+1|}{2}+z_{1}$ is even. \\
Now $\frac{|x_{2}-y_{2}|}{2}+z_{2}=\frac{|x_{1}-y_{1}+1|}{2}+z_{1}$ is even. \par Thus considering $P_{2}$ we get $w\in V_{1}$. \\ Hence directed edges are possible only from $V_{1}$ to $V_{2}$ or $V_{2}$ to $V_{3}$ or $V_{3}$ to $V_{4}$ or $V_{4}$ to $V_{1}$. \\ Therefore G is quapartite. \qed
From the above theorem, we can immediately get the following well known result of graph theory as a corollary. \begin{corollary}An unoriented graph is bipartite if and only if it does not contain any odd cycle. \end{corollary}
\begin{theorem} Any odd cycle (cycle with odd number of vertices) in a quapartite graph must consist of both oriented and unoriented edges. \end{theorem}
\bc Any directed cycle in a quapartite mixed graph has even number of vertices. \ec
\bc Any unoriented cycles in a quapartite mixed graph has even number of vertices. \ec
\begin{theorem}If $G$ is a quapartite graph and $\widetilde{G}$ is a graph obtained from $G$ by reverting the direction of any one directed edge, then $\widetilde{G}$ is quapartite if and only if that edge does not lie on any cycle. \end{theorem} {\it \bf Proof. } Let $G$ be a quapartite mixed graph and $\widetilde{G}$ be the graph obtained from $G$ by reverting the direction of the directed edge $e\in E(G)$. Then by \cref{quaIII} all cycles in $G$ are of type III. Now if $e$ is not in any cycle of $G,$ then the cycles in $\widetilde{G}$ are the same as the cycles of $G$ and therefore $\widetilde{G}$ is also quapartite. Again, if $e$ is in some cycle $C$ in $G$ and $\widetilde{C}$ is the corresponding cycle in $\widetilde{G}$ obtained by reverting the direction of $e,$ then, since $C$ is of type III, the cycle $\widetilde{C}$ is of type II in $\widetilde{G}.$ Hence the result follows by \cref{quaIII}. \qed
\begin{theorem}\label{t5} A mixed graph is quapartite if and only if the corresponding Hermitian quasi-Laplacian is singular. \end{theorem}
{\it \bf Proof. } If $Q_{H}(G)$ is singular, then there exists a nonzero $x=(x_{1}, x_{2}, \ldots, x_{n})\in \mathbb{C}^{n}$ such that $x^{*}S =0.$ Then \beq &\overline{x}_{u}+\overline{x}_{v}=0 \text{ for } \overline{uv}\in E \text{ and } \overline{x}_{u}-i\overline{x}_{v}=0 \text{ for } \overrightarrow{uv}\in E. \\ \Rightarrow \hspace{.3cm} & x_{v}=-x_{u} \text{ for } \overline{uv}\in E \text{ and } x_{v}=ix_{u} \text{ for } \overrightarrow{uv}\in E. \eeq
\par Now taking,
\beq
V_{1}&=\{v_{k}: x_{k}=x_{1}\} \\
V_{2}&=\{v_{k}: x_{k}=ix_{1}\} \\
V_{3}&=\{v_{k}: x_{k}=-x_{1}\} \\
V_{4}&=\{v_{k}: x_{k}=-ix_{1}\} \eeq
\par we see that $V=V_{1}\cup V_{2}\cup V_{3}\cup V_{4}$ is a partition of $V(G)$ which fulfills the definition of quapartiteness.
\par Conversely if G is a quapartite mixed graph, then it can be observed that
$x=(1, 1, \ldots ,1, i, i, \ldots ,i, -1, -1 \ldots ,-1, -i, -i, \ldots ,-i)$ satisfies $x^{*}S=0$. Here $1, i, -1$ and $-i$ appears respectively $|V_{1}|$, $|V_{2}|$, $|V_{3}|$ and $|V_{4}|$ times as components in $x$. Thus $x$ plays the role of an eigenvector for $Q_{H}(G)$ corresponding to the eigenvalue 0. Hence $Q_{H}(G)$ is singular.
\qed
\par The above theorem was proved in \cite{bkp12} in a more general form for weighted directed graphs, but we have reproduced the proof here for completeness. Using \cref{t3} and \cref{lap} we can immediately obtain the following results.
\bc Quasi-Laplacian matrix of a mixed graph is singular if and only if all its cycles(if any) are of type III. \ec
\bc Laplacian matrix of a mixed graph is singular if and only if all its cycles(if any) are of type V. \ec
\section{Non Principal Minors}
Let $G(V,E)$ be any mixed graph. If $Q_{H}[A,B]$ is any non-principal square submatrix of $Q_{H}$, then $Q_{H}[A,B]=S(A,E)(S(B,E))^{*}$. Similarly, if $L_{H}[A,B]$ is any non-principal square submatrix of $L_{H}$, then $L_{H}[A,B]=T(A,E)(T(B,E))^{*}$. As defined in \cite{bap99}, we call $S(A \cup B, F)$ nonsingular relative to $A$ and $B$ if $S(A, F)$ and $S(B, F)$ are both nonsingular. Similarly, we call $T(A \cup B, F)$ nonsingular relative to $A$ and $B$ if $T(A, F)$ and $T(B, F)$ are both nonsingular. If $S(A \cup B, F)$ with $|A|=|B|=|F|$ is nonsingular relative to $A$ and $B$, then it will be called a quasi generalized matching between $A$ and $B$. If $T(A \cup B, F)$ with $|A|=|B|=|F|$ is nonsingular relative to $A$ and $B$, then it will be called a generalized matching between $A$ and $B$.
\begin{theorem}\cite{bap99}
Let $G(V, E)$ be a mixed graph; $A, B \subseteq V$ and $F \subseteq E$ with $|A|=|B|=|F|$. Then $S(A \cup B, F)$ is nonsingular relative to $A$ and $B$ if and only if each component is either a nonsingular substructure of $S(A \cap B, F)$ or a tree with exactly one vertex in each of $A \setminus B$ and $B \setminus A$. \end{theorem}
\begin{theorem}\cite{bap99}
Let $G(V, E)$ be a mixed graph; $A, B \subseteq V$ and $F \subseteq E$ with $|A|=|B|=|F|$. Then $T(A \cup B, F)$ is nonsingular relative to A and B if and only if each component is either a nonsingular substructure of $T(A \cap B, F)$ or a tree with exactly one vertex in each of $A \setminus B$ and $B \setminus A$. \end{theorem}
\begin{theorem} If $T$ is a tree in a quasi generalized matching R between $A$ and $B$, then it contributes $i^{b-a}$ to the determinant of $Q_{H}[A, B]$, where $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$. \end{theorem}
{\it \bf Proof. } Suppose $v_{1}\in A\setminus B$ and $v_{2}\in B\setminus A$ are two points of the tree T. So $T\setminus \{v_{1}\}$ is a rootless tree as a component of an SSS corresponding to $A \subseteq V$ and some $F \subseteq E$ with $|A|=|F|$ and $T\setminus \{v_{2}\}$ is a rootless tree as a component of an SSS corresponding to $B \subseteq V$ and $F \subseteq E$ with $|B|=|F|$. Now $T\setminus \{v_{1}\}$ and $T\setminus \{v_{2}\}$ contributes respectively $(-i)^{\alpha}$ and $(-i)^{\beta}$ to the determinants of S(A,F) and S(B,F), where $\alpha$ and $\beta$ are respectively the number of edges away from $v_{1}$ and $v_{2}$ in the tree T. So they together contribute $(-i)^{\alpha}\times \overline{((-i)^{\beta})}=i^{\beta-\alpha}$ to the determinant of $S(A\bigcup B, F)$. But if we consider the edges in T which do not lie in the path connecting $v_{1}$ and $v_{2}$, then they all are away from both $v_{1}$ and $v_{2}$, So in our calculation we can avoid them. Hence a tree T in a quasi generalized matching R between A and B contributes $i^{b-a}$ to the determinant of $Q_{H}[A, B]$, where $a$ and $b$ are the number of edges away from the point of A and B in the path connecting the two points in T. \qed
Using \cref{l2} and proceeding as in the above theorem, we get the following result.
\begin{theorem} If $T$ is a tree in a generalized matching R between $A$ and $B$, then it contributes $i^{a-b}$ to the determinant of $L_{H}[A, B]$, where $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$. \end{theorem}
\begin{theorem} If $G(V,E)$ be a mixed graph and $A, B \subseteq V$ with $|A|=|B|$, then $$det(Q_{H}[A, B])={\displaystyle\sum_{R}i^{\Sigma_{T}(b-a)}2^{x+2y}},$$ where the main summation runs over all quasi generalized matching $R$ between A and B and the exponent summation runs over all trees $T$ in $R$, $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$, $x$ and $y$ denote the number of components of $R$ which are unicyclic graphs of type I and of type II respectively. \end{theorem}
\begin{theorem} If $G(V,E)$ is a mixed graph and $A, B \subseteq V$ with $|A|=|B|$, then $$det(L_{H}[A, B])=\displaystyle\sum_{R}i^{\Sigma_{T}(b-a)}2^{p+2q},$$ where the main summation runs over all generalized matching $R$ between A and B and the exponent summation runs over all trees $T$ in $R$, $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$, $p$ and $q$ denote the number of components of $R$ which are unicyclic graphs of type I and of type IV respectively. \end{theorem}
\begin{corollary} Let G be a quapartite graph. Then the modulus of all the cofactors of the Hermitian quasi-Laplacian matrix $Q_{H}$ are equal, and their common absolute value is the number of spanning trees of the underlying graph of G. \end{corollary}
\begin{corollary} Let G be any mixed graph with all cycles(if any) of type V. Then the cofactors of the Hermitian Laplacian matrix $L_{H}$ are equal, and their common value is the number of spanning trees of the underlying graph of G. \end{corollary}
\begin{corollary}If G is non quapartite mixed graph but $G\setminus v$ is quapartite for some $v\in V(G)$, then $|det(Q_{H}(v))|$ is the number of spanning trees of G. \end{corollary}
\begin{corollary}If $G(V,E)$ is a quapartite mixed graph and $A, B \subseteq V$ with $|A|=|B|$, then $$det(Q_{H}[A, B])=\displaystyle\sum_{R}i^{\Sigma_{T}(b-a)},$$ where the main summation runs over all generalized matching $R$ between A and B and the exponent summation runs over all trees $T$ in $R$, $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$. \end{corollary}
\begin{corollary}If $G(V,E)$ be a mixed graph and $A, B \subseteq V$ with $|A|=|B|$ and $A\bigcap B=\phi$, then $$det(Q_{H}[A, B])=\displaystyle\sum_{R}i^{\Sigma_{T}(b-a)},$$ where the main summation runs over all generalized matching $R$ between A and B and the exponent summation runs over all trees $T$ in $R$, $a$ and $b$ are the number of edges away from the points of $A\setminus B$ and $B\setminus A$ of $T$ in the path connecting the two points in $T$. \end{corollary} \textbf{Acknowledgement:} The financial assistance for the author was provided by CSIR, India, through JRF.
\end{document}
Circle $\omega$ has radius 5 and is centered at $O$. Point $A$ lies outside $\omega$ such that $OA=13$. The two tangents to $\omega$ passing through $A$ are drawn, and points $B$ and $C$ are chosen on them (one on each tangent), such that line $BC$ is tangent to $\omega$ and $\omega$ lies outside triangle $ABC$. Compute $AB+AC$ given that $BC=7$.
[asy]
unitsize(0.1 inch);
draw(circle((0,0),5));
dot((-13,0));
label("$A$",(-13,0),S);
draw((-14,-0.4)--(0,5.5));
draw((-14,0.4)--(0,-5.5));
draw((-3.3,5.5)--(-7.3,-5.5));
dot((0,0));
label("$O$",(0,0),SE);
dot((-4.8,1.5));
label("$T_3$",(-4.8,1.5),E);
dot((-1.7,4.7));
label("$T_1$",(-1.7,4.7),SE);
dot((-1.7,-4.7));
label("$T_2$",(-1.7,-4.7),SW);
dot((-3.9,3.9));
label("$B$",(-3.9,3.9),NW);
dot((-6.3,-2.8));
label("$C$",(-6.3,-2.8),SW);
[/asy]
Let $T_1, T_2$, and $T_3$ denote the points of tangency of $AB, AC,$ and $BC$ with $\omega$, respectively.
[asy]
unitsize(0.1 inch);
draw(circle((0,0),5));
dot((-13,0));
label("$A$",(-13,0),S);
draw((-14,-0.4)--(0,5.5));
draw((-14,0.4)--(0,-5.5));
draw((-3.3,5.5)--(-7.3,-5.5));
dot((0,0));
label("$O$",(0,0),SE);
dot((-4.8,1.5));
label("$T_3$",(-4.8,1.5),E);
dot((-1.7,4.7));
label("$T_1$",(-1.7,4.7),SE);
dot((-1.7,-4.7));
label("$T_2$",(-1.7,-4.7),SW);
dot((-3.9,3.9));
label("$B$",(-3.9,3.9),NW);
dot((-6.3,-2.8));
label("$C$",(-6.3,-2.8),SW);
[/asy]
Since tangent segments from an external point to a circle have equal length, $BT_3 = BT_1$ and $CT_3 = CT_2$, so $7 = BC=BT_3+T_3C = BT_1 + CT_2$. By Pythagoras, $AT_1 = AT_2 = \sqrt{13^2-5^2}=12$. Now note that $24 = AT_1 + AT_2 = AB + BT_1 + AC + CT_2 = AB+AC+7$, which gives $AB + AC = \boxed{17}$.
Evenly distributing points on a sphere
By Martin Roberts
How to distribute points on the surface of a sphere as evenly as possible is an incredibly important problem in maths, science and computing, and mapping the Fibonacci lattice onto the surface of a sphere via equal-area projection is an extremely fast and effective approximate method to achieve this. I show that with only minor modifications it can be made even better.
The problem of how to evenly distribute points on a sphere has a very long history and is one of the most studied problems in the mathematical literature associated with spherical geometry. It is of critical importance in many areas of mathematics, physics and chemistry, including numerical analysis, approximation theory, coding theory, crystallography, electrostatics, computer graphics and viral morphology, to name just a few.
Unfortunately, with the exception of a few special cases (namely the platonic solids) it is not possible to exactly equally distribute points on the sphere. Furthermore, the solution to this problem is critically dependent on the criteria used to judge the uniformity. There are many criteria in use, and they include:
Packing and covering
Convex hulls, Voronoi cells and Delaunay triangles,
Riesz $s$-energy kernels
Cubature and Determinants
Repeating this point as it is crucial: there is usually no single optimal solution to this question, because an optimal solution based on one criterion is often not an optimal point distribution for another. For example, in this post, we will also find that optimising for packing does not necessarily produce an optimal convex hull, and vice-versa.
For sake of brevity, this post focuses on just two of these: the minimum packing distance and convex hull / Delaunay mesh measures (volume and area).
Section 1 will show how we can modify the canonical Fibonacci lattice to consistently produce a better packing distribution.
Section 2 will show how we can modify the canonical Fibonacci lattice that produces better convex hull measures (volume and surface area).
Section 1. Optimizing Packing Distance
This is often referred to as "Tammes's problem" due to the botanist Tammes, who searched for an explanation of the surface structure of pollen grains. The packing criterion asks us to maximize the smallest neighboring distance among the $N$ points. That is,
$$ d_N = \min_{i \neq j} | x_i - x_j | $$
This value decreases at a rate $~ 1/\sqrt{N}$, so it is useful to define the normalized distance, and also the asymptotic limit of the normalized distance as
$$ d^*_N = \sqrt{N} d_N ,\quad \quad d^* = \lim_{N \rightarrow \infty} d^*_N $$
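As a concrete illustration of these quantities, here is a minimal Python sketch (assuming NumPy; the function name is illustrative) that computes $d_N$ and the normalized $d^*_N$ for an array of unit vectors:

    import numpy as np

    def packing_distance(points):
        # points: (N, 3) array of unit vectors on the sphere.
        # Returns (d_N, d*_N), where d_N is the smallest pairwise Euclidean
        # distance and d*_N = sqrt(N) * d_N is its normalized version.
        diffs = points[:, None, :] - points[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        np.fill_diagonal(dists, np.inf)   # ignore the zero self-distances
        d_min = dists.min()
        return d_min, np.sqrt(len(points)) * d_min

For large $N$ a neighbor query (e.g. a k-d tree) would be preferable to this $O(N^2)$ computation, but the brute-force version is enough to reproduce the $d^*_N$ values quoted below.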
The Fibonacci Lattice
One very elegant solution is modeled after nodes appearing in nature such as the seed distribution on the head of a sunflower or a pine cone, a phenomenon known as spiral phyllotaxis. Coxeter demonstrated these arrangements are fundamentally related to the Fibonacci sequence, $F_k =\{1, 1, 2, 3, 5, 8, 13, …\}$ and the golden ratio $\phi = (1+\sqrt{5})/2$.
There are two similar definitions of the spherical Fibonacci lattice point set in the literature. The original one is strictly only defined for $N$ equal to one of the terms of the Fibonacci sequence, $F_m$ and is very well studied in number theory.
$$ t_i = \left( \frac{i}{F_m}, \frac{i F_{m-1}}{F_m} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 $$
Whilst the second one generalises this to arbitrary $N$, and is used more frequently in computing:
$$ t_i = \left( \frac{i}{N}, \frac{i }{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{1}$$
$$\phi = \frac{1+\sqrt{5}}{2} = \lim_{n \rightarrow \infty} \left( \frac{F_{n+1}}{F_n} \right)$$
These points sets map to the unit square $[0, 1]^2$ which can then be mapped to the sphere by the cylindrical equal area projection:
$$ (x,y) \rightarrow (\theta, \phi) : \quad \left( \cos^{-1}(2x-1) - \pi/2, 2\pi y \right) $$
$$ (\theta,\phi) \rightarrow (x,y,z) : \quad \left (\cos\theta \cos\phi, \cos \theta \sin \phi, \sin \theta \right) $$
You can find numerous versions of python code for this at "Evenly distributing points on a sphere".
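For reference, a minimal Python sketch of this construction (assuming NumPy; the function name is illustrative), following equation 1 and the equal-area projection above, is:

    import numpy as np

    def fibonacci_sphere(n):
        # Canonical Fibonacci lattice (Eqn 1) mapped to the unit sphere
        # via the cylindrical equal-area projection.
        golden_ratio = (1 + 5 ** 0.5) / 2
        i = np.arange(n)
        x = i / n                         # first lattice coordinate
        y = i / golden_ratio              # second coordinate; the 2*pi wrap
                                          # below takes it modulo 1 implicitly
        theta = np.arccos(2 * x - 1) - np.pi / 2    # latitude
        phi = 2 * np.pi * y                         # longitude
        return np.column_stack([np.cos(theta) * np.cos(phi),
                                np.cos(theta) * np.sin(phi),
                                np.sin(theta)])

Replacing the line x = i / n with x = (i + 0.5) / n gives lattice 1 (equation 2).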
Even though spherical Fibonacci point sets are not the globally best distribution of samples on a sphere, (because their solutions do not coincide with the platonic solids for $n=4,6,8,12,20$), they yield excellent sampling properties and are extremely simple to construct in contrast to other more sophisticated spherical sampling schemes.
As the mapping from the unit square to the surface of the sphere is done via an area-preserving projection, one can expect that if the original points are evenly distributed then they will also be quite evenly distributed on the sphere's surface. However, this does not mean that it is provably optimal.
This mapping suffers from two distinct but inter-related problems.
The first is that this mapping is area-preserving, not distance preserving. Given that in our case, our objective constraint is maximizing the minimum pairwise distance separation between points, then it is not guaranteed that such distance-based constraints and relationships will hold after the projection.
The second, and from a pragmatic point of view possibly the trickiest to resolve, is that the mapping has two singularity points, one at each pole. Consider two points very close to the pole but 180 degrees different in longitude. On the unit square (and also on the cylindrical projection) they would correspond to two points that are quite distant from each other, and yet when mapped to the surface of the sphere they could be joined by a very small great arc going over the north pole. It is this particular issue that makes many of the spiral mappings sub-optimal.
The spherical Fibonacci spiral produced from equation 1 results in a value of $d_N^* \simeq 2$ for all $N$, and so $d^* = 2$.
Lattice 1
The more common version – especially in computing – which produces a better $d^*=3.09$, is:
$$ t_i = \left( \frac{i+1/2}{N}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{2}$$
It places the points in the midpoints of the intervals (aka the midpoint rule in Gaussian quadrature), and so for $n=100$, the values of the first coordinate would simply be:
$$ \{ \frac{0.5}{100},\frac{1.5}{100},\frac{2.5}{100},\ldots, \frac{97.5}{100},\frac{98.5}{100},\frac{99.5}{100} \} $$
Lattice 2.
A key insight to further improving on Equation 2 is to realize that $d^*_N$ always corresponds to the distance between the points $t_0$ and $t_3$, which are the points nearest the pole. Thus, to improve $d_N$ the points near the poles should be positioned farther apart.
If we define the following distribution:
$$ t_i(\varepsilon) = \left( \frac{i+1/2+ \varepsilon}{N+2\varepsilon}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 $$
Figure 1 shows the $d^*_N$-curves for various values of $\varepsilon$: $\varepsilon=0$ (blue); $\varepsilon=\frac{1}{2}$ (orange); $\varepsilon=\frac{3}{2}$ (green); and $\varepsilon=\frac{5}{2}$ (red). One can see that $\varepsilon = \frac{5}{2}$ produces results close to the asymptotic limit. That is, for $N>20$, compared to the canonical spherical Fibonacci lattice, the following simple expression produces substantially better results of $d^* = 3.29$:
$$ t_i = \left( \frac{i+3}{N+5}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{3} $$
Thus for $n=100$, the values of the first coordinate would simply be:
$$ \{ \frac{3}{105},\frac{4}{105},\frac{5}{105},\ldots, \frac{100}{105},\frac{101}{105},\frac{102}{105} \} $$
Figure 1. Values of $d_N^*$ for various values of $\epsilon$. Note that larger values correspond to more optimal configurations. We see that $\epsilon \simeq 2.5$ provides a near optimal solution.
As stated earlier, one of the greatest challenges of distributing points evenly on the sphere is that the optimal distribution critically depends on which objective function you use. It turns out that local measures such as $d_N^*$ are at times very "unforgiving" inasmuch as a single point in a suboptimal position can catastrophically reduce the measure of the entire point distribution.
In our case, regardless of how large $N$ is, $d_N^*$ is typically determined by the four points closest to each pole, especially $t_0$ and $t_3$. However, what is also known about this lattice is that the largest Voronoi polygon is at the pole. Thus, trying to maximize $d_N$ by separating the initial polar points in the sequence actually makes the void at the pole even larger! Thus, we present an alternative to lattice 2 which is generally more preferable, as it does not exhibit such a large void near the poles.
It is almost identical to lattice 2 but with two differences. Firstly, it uses $\varepsilon = 11/2$ for $1 \leq i \leq n-2$. Secondly, in addition to these $n-2$ points, the first and final point is defined to be at each pole. That is,
$$ t_0=(0,0); \; t_{N-1} = (1,0); \quad t_i = \left( \frac{i+6}{N+11}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 < i < N-1 \tag{3} $$
The amazing thing about this method of construction is that although it was motivated by desiring to minimize the gap at the poles, it actually has the best $d_N$ and $d^*$ values of the three lattices presented here, with $d^* = 3.31$!
Thus for $n=100$, the values of the first coordinate would simply be: $$ \{ 0; \; \frac{6}{111},\frac{7}{111},\frac{8}{111},\ldots, \frac{103}{111},\frac{104}{111},\frac{105}{111} ; \; 1\} $$
Figure 2. The various lattice configurations. The canonical fibonacci lattice is on the left. Note that although the middle lattice has an improved $d_N^*$ it has a noticeable void at the pole. Lattice 3 does not have a void at the pole and has the best $d_N^*$.
For large $N$ this value of $d^*$ compares extremely well with other methods, such as geodesic domes, which are based on triangulated projections from the faces of platonic solids to the surface of the sphere. Not surprisingly, the best quality geodesic domes are those based on the icosahedron or the dodecahedron.
For various $N$-point geodesic domes based on the dodecahedron:
\begin{array}{|c|cccccccccc|} \hline N & 12 & 42 & 92 & 162 & 252& 362 & 492 & 642 & 812 & 1002 \\ \hline d^* & 3.64 & 3.54 & 3.34 & 3.22 & 3.15 & 3.09 & 3.06 & 3.03 & 3.00 & 2.99 \\ \hline
\end{array}
And for various $N$-point geodesic domes based on the icosahedron:
\begin{array}{|c|ccccccc|} \hline N & 20 & 32 & 122 & 272 & 482 & 752 & 1082\\ \hline d^* & 3.19 & 3.63 & 3.16 & 2.99 & 2.90 & 2.84 & 2.81 \\ \hline
\end{array}
Also, the truncated icosahedron, which is the shape of the $C_{60}$ buckminsterfullerene, only corresponds to $d^* = 3.125$.
Thus for $N>100$, lattices based on Eqn 3 are better than any geodesic polyhedra.
As per the first reference, some of the state-of-the-art methods which are typically complex and require recursive and/or dynamic programming are as follows.
\begin{array}{|lr|} \hline \text{Lattice 1} & 3.09\\ \hline \text{Max Determinant} & 3.19 \\ \hline \text{Lattice 2} & 3.28 \\ \hline \text{Lattice 3} & 3.31 \\ \hline \text{Zonal Equal Area} & 3.32 \\ \hline \text{Coulomb} & 3.37 \\ \hline \text{Log Energy} & 3.37\\ \hline
\end{array}
Section 1 Summary
Lattice 3, as per equation 3, is a modification of the canonical Fibonacci lattice that produces a significantly better packing point distribution. That is,
$$ t_0 = (0,0); \; t_{N-1} = (1,0); \; \; t_i = \left( \frac{i+6}{N+11}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 < i < N-1 $$
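A possible NumPy sketch of this lattice, taking the formula above at face value (the function name is illustrative), is:

    import numpy as np

    def fibonacci_sphere_lattice3(n):
        # Interior points use the offset (i + 6) / (n + 11); the first and
        # last points are pinned to the two poles.
        golden_ratio = (1 + 5 ** 0.5) / 2
        i = np.arange(1, n - 1)                      # interior indices 1 .. n-2
        x = (i + 6) / (n + 11)
        theta = np.arccos(2 * x - 1) - np.pi / 2
        phi = 2 * np.pi * i / golden_ratio
        pts = np.column_stack([np.cos(theta) * np.cos(phi),
                               np.cos(theta) * np.sin(phi),
                               np.sin(theta)])
        pole_0 = np.array([[0.0, 0.0,  1.0]])   # t_0 = (0,0): x = 0 gives theta = +pi/2
        pole_1 = np.array([[0.0, 0.0, -1.0]])   # t_{N-1} = (1,0): x = 1 gives theta = -pi/2
        return np.vstack([pole_0, pts, pole_1])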
Section 2. Optimising the Convex hull (Delaunay mesh)
Although the previous section optimized for $d^*_N$, unfortunately these modifications actually make other measures worse, such as the volume of the convex hull (Delaunay mesh). This section shows how to evenly distribute points on a sphere in a manner that optimizes (maximizes) a more global measure such as the volume of the convex hull.
Let us define $C_N$ as the convex hull of the $N$ points,
$$ \epsilon_N = N \left( \frac{4\pi }{3} - \textrm{Vol}(C_N) \right)$$
where the normalization factor of $N$ is included, as the absolute discrepancy decreases at a rate $~ 1/N$.
The behavior of $\epsilon_N$ for varying $N$ can be seen in Figure 3 (blue).
The key to improving the volume discrepancy is to note that although the use of $\phi$, the golden ratio, intuitively makes sense as $N \rightarrow \infty$, it does not necessarily follow that it is the best value for finite $N$. In science terminology, we could say that we need to consider finite-term correction effects.
Thus, let us generalize equation 1 as follows:
$$ t_i = \left( \frac{i+1/2}{N}, \frac{i}{g(n)} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{4}$$
First we define an auxiliary value,
$$ k = \left\lfloor \textrm{log}_{\phi}(\frac{n}{1.5}) \right\rfloor = \left\lfloor \frac{\ln (n/1.5)}{\ln \phi } \right\rfloor$$
where $\lfloor x \rfloor$ is the floor function.
Now define $g(n)$ as follows:
$$ g(n) = \begin{cases} 3-\phi, & \text{if $k$ is even} \\ \phi, & \text{if $k$ is odd} \end{cases}
\tag{5}$$
Figure 3 shows that this substantially improves the volume discrepancy for half the values of $N$.
The underlying reason why this works is based on the less well known fact that all numbers $x$ that satisfy the special Möbius transformation below are equivalent to $\phi$ in terms of irrationality.
$$ x = \frac{a\phi+b}{c\phi+d}, \quad \textrm{for all integers} \; \; a,b,c,d \; \textrm{such that } |ad-bc|=1 $$
And thus the reason why $\phi$ and $3-\phi$ work together so well is that
$$\frac{1}{\phi} = \frac{\phi+1}{2\phi+1}, \quad \quad \frac{1}{3-\phi }= \frac{\phi+1}{\phi+2} $$
Figure 3. The discrepancy between the volume of the convex hull of points and the volume of a unit sphere. Note that smaller is better. This shows that a hybrid model (orange) based on $\phi$ and $3-\phi$ offers a better point distribution than the canonical Fibonacci lattice (blue).
For the remaining half, we first define an auxiliary sequence $A_N$ that is a variant of the Fibonacci sequence
$$ A_1 =1, \; A_2 = 4; \; A_{n+2}= A_{n+1}+A_n \; \textrm{for } n = 1,2,3,… $$
$$ A_N = 1,4,5,9,14,23,37,60,97,157,254,411,…$$
The convergents of this sequence all have elegant continued fractions, and in the limit converge to $\phi$. For example,
$$A_6/A_5 = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{4}}}}.$$
We now fully generalize $g(n)$ as follows:
$$ g(n) = \begin{cases} 3-\phi, & \text{if $k$ is even} \\ A_{j+1}/A_j , & \text{if $k$ is odd, where $j= (k+7)/2$} \end{cases} \tag{6}$$
The following table is a summary of the value of $g(n)$ for various $n$.
\begin{array}{|c|c|c|c|c|c|c|c|c|c|} \hline N & 4-6 & 7-10 & 11-16& 17-26& 27-43& 44-70& 71-114 & 115-184 & 185-300\\ \hline g(n) &3-\phi & \frac{23}{14} & 3-\phi & \frac{37}{23} & 3-\phi & \frac{60}{37} & 3-\phi & \frac{97}{60} & 3-\phi \\ \hline
\end{array}
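A small Python sketch of equations 5 and 6 (function names illustrative), which reproduces the table above, e.g. $g(7)=23/14$ and $g(17)=37/23$:

    import math

    PHI = (1 + 5 ** 0.5) / 2

    def A(j):
        # Auxiliary sequence A_1 = 1, A_2 = 4, A_{n+2} = A_{n+1} + A_n.
        a, b = 1, 4
        for _ in range(j - 1):
            a, b = b, a + b
        return a

    def g(n):
        # Denominator used in Eqn 4, chosen according to Eqn 6.
        k = math.floor(math.log(n / 1.5) / math.log(PHI))
        if k % 2 == 0:
            return 3 - PHI
        j = (k + 7) // 2
        return A(j + 1) / A(j)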
Figure 4 shows that, in relation to convex hull volume, this new distribution is better than the canonical lattice for all values of $n$.
Figure 4. The discrepancy between the volume of the convex hull of points and the volume of a unit sphere. Note that smaller is better. This shows that the newly proposed method (green) produces consistently better distribution than the canonical Fibonacci lattice (blue).
Figure 5. Visual comparison of canonical lattice (left) with the newly modified lattice (right), for n=35 and n=150. The visual differences are almost imperceptible. However, for circumstances that require maximal efficiency the modified version (right) offers a small but quantifiable improvement in both volume and surface area of the convex hull.
Of interest, this distribution also slightly but consistently reduces the discrepancy between the surface area of the convex hull and the surface area of the unit sphere. This is shown in figure 6.
Figure 6. Normalised area discrepancy between the surface area of the convex hull (Delaunay mesh) and the surface area of the unit sphere. Lower is better. This shows that the newly proposed modification (green) shows a small but consistent improvement over the canonical Fibonacci lattice (blue) in surface area discrepancy.
The lattice as per equation 6, is a modification of the canonical Fibonacci lattice that produces a significantly better point distribution as measured by the volume and surface area of the convex hull (Delaunay Mesh).
"A Comparison of Popular Point Configurations on S^2": https://arxiv.org/pdf/1607.04590.pdf
http://web.archive.org/web/20120421191837/http://www.cgafaq.info/wiki/Evenly_distributed_points_on_sphere
https://perswww.kuleuven.be/~u0017946/publications/Papers97/art97a-Saff-Kuijlaars-MI/Saff-Kuijlaars-MathIntel97.pdf
https://projecteuclid.org/download/pdf_1/euclid.em/1067634731
https://maths-people.anu.edu.au/~leopardi/Leopardi-Sphere-PhD-Thesis.pdf
https://www.irit.fr/~David.Vanderhaeghe/M2IGAI-CO/2016-g1/docs/spherical_fibonacci_mapping.pdf
https://maths-people.anu.edu.au/~leopardi/Macquarie-sphere-talk.pdf
Most of us know about the golden ratio, \(\phi = \frac{1+\sqrt{5}}{2} \) and how it can be considered the most irrational number, but most people never ask the question, "What is the second and third most irrational number?" In this post, I discuss some special irrational numbers, and why they can be considered the 2nd and 3rd most irrational numbers.
Finding good rational approximations to reals.
In number theory, the field of Diophantine approximation deals with the approximation of real numbers by rational numbers. It is named after Diophantus of Alexandria. The question of how well a real number can be approximated by rational numbers is extremely well studied.
Let us say that the rational number $\frac{p}{q}$ is a good approximation of a real number $x$ only if the absolute value of the difference between $p/q$ and $x$ cannot be improved with any other rational number with a smaller denominator. That is,
$$ | x-\frac{p}{q} | < | x-\frac{p'}{q'} | \quad \text{for every } p',q' \; \text{such that } 0< q'<q \tag{1}$$
Often a very similar definition, which is generally more amenable to study because it does not include fractions, is used:
$$ | p-q x | < |p'- q' x | \quad \text{for every } p',q' \; \text{such that } 0< q'<q \tag{2} $$
Continued fractions can be used to compute the best rational approximations of a real number. This is because the successive convergents of the continued fraction of $x$ produce the best approximations of $x$ possible (as per the second definition above).
For example, for the constant $\pi= 3.14159265358…$, the regular continued fraction is:
$$ \pi = 3 + \cfrac{1}{7+\cfrac{1}{15+\cfrac{1}{1+\cfrac{1}{292+\cfrac{1}{1+\cfrac{1}{1+\ldots}}}}}} $$
Because the numerators for all regular continued fractions are always 1, we can succinctly describe this continued fraction simply by listing the denominators (and the initial whole number)
$$\pi = [3;7,15,1,292,1,1,1,2,1,3,1,\ldots] $$
These produce the following progressively accurate rational approximations of $\pi$.
$$ \{ 3, \frac{22}{7}, \frac{333}{106},\frac{355}{113},\frac{103993}{33102}, \frac{104348}{33215}, \frac{208341}{66317}, \frac{312689}{99532},\frac{833719}{265381}, \ldots \} $$
From this, it is clear why $\frac{22}{7}$ is frequently used as an approximation of $\pi$, and why rationals more accurate than $\frac{355}{113}$ are almost never used.
How accurate are these approximations?
The obvious measure of the accuracy of a rational approximation of a real number $x$ by a rational number $p/q$, is $\epsilon = |x-p/q| $. However, this quantity can always be made arbitrarily small by increasing the absolute values of $p$ and $q$. The previous section showed how we can efficiently find $p,q$ that produce successively more and more accurate rational approximations to $x$.
Therefore, the accuracy of the approximation is usually estimated by comparing this quantity to the size of the denominator $q$, typically a negative power of it.
The following table shows the same approximations of $\pi$ along with their error $\epsilon$ when multiplied by $q$ and by $q^2$.
$$\begin{matrix} p/q & \epsilon & \epsilon q & \epsilon q^2\\ \hline \frac{3}{1} & 1 \times 10^{-1} & 1 \times 10^{-1} & 0.142 \\ \hline \frac{22}{7} & 1 \times 10^{-3} & 9 \times 10^{-3} & 0.062 \\\hline \frac{333}{106} & 8\times 10^{-5} & 9 \times 10^{-3} & 0.935 \\\hline \frac{355}{113} & 3\times 10^{-7} & 3 \times 10^{-5} & 0.003 \\ \hline \frac{103993}{33102} & 6\times 10^{-10} & 2 \times 10^{-5} & 0.633 \\ \hline \frac{104348}{33215} & 3\times 10^{-10} & 1 \times 10^{-5} & 0.366 \\ \hline \frac{208341}{66317} & 1\times 10^{-10} & 8 \times 10^{-6} & 0.538 \\ \hline \frac{312689}{99532} & 3\times 10^{-11} & 3 \times 10^{-6} & 0.289 \\ \hline \frac{833719}{265381} & 9\times 10^{-12} & 2 \times 10^{-6} & 0.614 \\ \hline \ldots & \ldots & \ldots & \ldots \\ \hline
\end{matrix}$$
From this one can see that the values in the $\epsilon$ column, as well as the $\epsilon q$ column, become arbitrarily small, but those in the final $\epsilon q^2$ column don't. Said another way, for good approximations, the error term will go down in proportion to the square of the denominator.
Thus, it seems that by using the error term $\epsilon q^2$ we can more meaningfully compare how well each of these approximates $\pi$. That is, of those listed, $355/113$ is by far the best, followed by $22/7$, then $3/1$, then $312689/99532$, etc,…
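These values are easy to reproduce with a short Python sketch (names illustrative; floating-point precision limits how many convergents are reliable, but the first handful match the table):

    from math import pi

    def convergents(x, n_terms=6):
        # Continued-fraction convergents p/q of x, using the standard
        # recurrence p_k = a_k p_{k-1} + p_{k-2}, q_k = a_k q_{k-1} + q_{k-2}.
        p0, q0 = 1, 0            # (p_{-1}, q_{-1})
        p1, q1 = int(x), 1       # (p_0, q_0) with a_0 = floor(x)
        yield p1, q1
        frac = x - int(x)
        for _ in range(n_terms - 1):
            frac = 1.0 / frac
            a = int(frac)
            frac -= a
            p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
            yield p1, q1

    for p, q in convergents(pi):
        eps = abs(pi - p / q)
        print(f"{p}/{q}: eps*q^2 = {eps * q * q:.3f}")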
Dirichlet's Theorem
Noting that for an irrational number, the equivalent continued fractions are infinite, it means that we can produce an infinite number of convergent rational approximations. Looking at the table above, we see that for the first 10 convergents, $\epsilon q^2 < 1$. The question then naturally arises, "How often is $\epsilon q^2 < 1$?"
Dirichlet proved that for any irrational $x$, there are an infinite number of rational approximations such that $\epsilon < 1/q^2$. That is, there are an infinite number of fractions $p/q$ such that :
$$ \left| x- \frac{p}{q} \right | < \frac{1}{q^2} \tag{3} $$
Furthermore, if $p/q$ are regular continued fraction convergents, then $\epsilon q^2$ will always be less than 1.
Although this property has 'long been known from the theory of continued fractions', it is named after Dirichlet after he provided a very elegant proof of it in 1838.
Hurwitz's Theorem
In 1891, Hurwitz showed that Dirichlet's theorem could be improved. Furthermore, he proved that it was a strict upper bound. That is, it could not be improved any further. That is, there are an infinite number of fractions $p/q$ such that :
$$ \left| x- \frac{p}{q} \right | < \frac{1}{\sqrt{5} q^2} \tag{4} $$
Hurwitz showed that the reason that this is a strict upper bound and cannot be improved is that if we consider the real number $x = (1+\sqrt{5})/2$, commonly called the golden ratio, and usually denoted $\phi$, then for $C> \sqrt{5}$, there are only a finite number of rational numbers $p,q$ such that:
$$ \left| \phi – \frac{p}{q} \right | < \frac{1}{C q^2} \tag{5} $$
The continued fraction representation of $\phi$ is simply $[1;1,1,1,1,1,1,1,1,\ldots]$ and its convergent fractions are:
$$ \{ \frac{1}{1}, \frac{2}{1}, \frac{3}{2}, \frac{5}{3}, \frac{8}{5}, \frac{13}{8}, \frac{21}{13}, \frac{34}{21},\ldots \} $$
The error terms $\sqrt{5}q^2 \epsilon$ for these convergents are,
$$ 1.38,0.854,1.056,0.978,1.008,0.996,1.001,0.9995,1.0001,0.999994,1.000002,0.999999,\ldots $$
We see that the error terms tend to 1, which is an indication that $\sqrt{5}q^2$ is the bound.
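These error terms can be reproduced with a few lines of Python, using the fact that the convergents of $\phi$ are the ratios of consecutive Fibonacci numbers (a sketch, not part of the original argument):

    PHI = (1 + 5 ** 0.5) / 2

    fib = [1, 1]
    while len(fib) < 14:
        fib.append(fib[-1] + fib[-2])

    # Successive convergents p/q of phi are consecutive Fibonacci ratios.
    for q, p in zip(fib, fib[1:]):
        err = 5 ** 0.5 * q * q * abs(PHI - p / q)
        print(f"{p}/{q}: sqrt(5)*q^2*eps = {err:.4f}")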
Equivalence Relations
It is very well known that the golden ratio $\phi$, can be considered the most irrational number. From the commentary above, we can see that this is most notably because it provides the limiting bound for Hurwitz's theorem. Technically speaking, the fact that the golden ratio has the continued fraction comprising only of ones, is a consequence rather than a rigorous reason why it is the most irrational number.
One lesser known fact is that there are infinitely many other real numbers that can be considered equally irrational, because they force strictness on the upper bound of the Hurwitz theorem. For example, all of the following numbers are equally irrational and thus could claim the title of 'the most irrational number':
$$ \{ \frac{1}{ \phi+3}, \; \frac{ \phi-1}{\phi}, \; \frac{ \phi+1}{2 \phi+1} ,\; \frac{3 \phi+2}{4 \phi+3}, \; \frac{ \phi+2}{ \phi+3} ,\; \frac{3 \phi-2}{2 \phi-1},\; \frac{ \phi-4}{\phi-3},\; \ldots \} $$
There are an infinite number of them, and they are all considered equivalent, as they are all of the form
$$ \frac{a \phi+b}{c \phi+d}, \quad \textrm{for integers } a,b,c,d \;\; \textrm{such that } ad-bc=\pm 1. \tag{6}$$
The continued fractions for these values are:
$$ \frac{1}{\phi+3} \simeq 0.216542 = [0; 4,1,1,1,1,1,1,1,\ldots] $$
$$ \frac{\phi-1}{\phi} \simeq 0.381966 = [0; 2,1,1,1,1,1,1,1,\ldots] $$
$$\frac{\phi+1}{2\phi+1} \simeq 0.618034 = [0; 1,1,1,1,1,1,1,1,\ldots] $$
$$\frac{3\phi+2}{4\phi+3} \simeq 0.723607 = [0; 1,2,1,1,1,1,1,1,\ldots] $$
$$ \frac{\phi+2}{\phi+3} \simeq 0.783458 = [0; 1,3,1,1,1,1,1,1,\ldots] $$
$$ \frac{3\phi-2}{2\phi-1} \simeq 1.276390 = [1; 3,1,1,1,1,1,1,1,\ldots] $$
Serret showed that, with the exception of a finite initial sequence, equivalent numbers have identical continued fraction expansions.
So although all of the numbers in this class are equally irrational, it is reasonable to select, from all of those, the number whose continued fraction is the simplest to be the canonical number in this class. Namely, $\phi = [1;1,1,1,1,1,\ldots]$.
The second most irrational number
If we consider all the real numbers excluding those of the form described in Eqn 6, then it turns out that Hurwitz's theorem may be made even stronger still. That is, if $x$ is not equivalent to $\phi = (1+\sqrt{5})/2$, then there are an infinite number of fractions $p/q$ such that:
$$ \left| x- \frac{p}{q} \right | < \frac{1}{\sqrt{8} q^2} $$
Again, this inequality is strong, because for any number equivalent to $x = \sqrt{2}$, the value of $C=\sqrt{8}$ can not be improved.
Thus the second most irrational class of numbers are those that are equivalent to $x=\sqrt{2}$.
The continued fraction for $\sqrt{2}$ is $[1;2,2,2,2,2,2,\ldots]$,
therefore using Serret's theorem, the number $1+\sqrt{2} = [2;2,2,2,2,….]$ is equivalently irrational.
Thus, we can say that the second most irrational number is $1+\sqrt{2}$.
The third most irrational number
Repeating this line of argument again, if we exclude all the real numbers that are equivalent to $\phi$, or $1+\sqrt{2}$, then Hurwitz's theorem can be made even stronger.
That is, if $x$ is not equivalent to $\phi = (1+\sqrt{5})/2$ or $1+\sqrt{2}$, then there are an infinite number of fractions $p/q$ such that :
$$ \left| x- \frac{p}{q} \right | < \frac{1}{C q^2} \quad \textrm{where } C=\sqrt{221}/5. \tag{4} $$
Again, this inequality is strong, because for any number equivalent to $x = (9+\sqrt{221})/10$, the value of $C=\sqrt{221}/5$ can not be improved.
The continued fraction for $(9+\sqrt{221})/10$ is $[2;2,1,1,2,2,1,1,2,1,1,2,2,1,1,2,\ldots]$,
Thus we can say that the third most irrational number is $(9+\sqrt{221})/10$.
In a similar fashion, we can say:
the fourth most irrational number is $(23+\sqrt{1517})/26 = [2; 2,1,1,1,1,2,2,1,1,1,1,2 ,…] $
the fifth most irrational number is $(5+\sqrt{7565})/58 = [1; 1,1,2,2,2,2,1,1,2,2,2,2,…]$
Markov Spectrum
Recalling equation 5, the first class of irrationals are those that enforce the upper bound when $C=\sqrt{5}$, and those relating to the second class correspond to the bound where $C=\sqrt{8}$, etc.
The question arises, 'what is the pattern?'. The answer is these numbers form the Lagrange / Markov spectrum, and are all of the form
$$ L_n = \sqrt{9-\frac{4}{m_n^2}}, $$ where $m_n$ is the n-th term of the Markov sequence. The first few terms of the Markov sequence are $ \{ 1,2,5,13,29,34,89,169,…\}$.
Thus, $L_n = \{ \sqrt{5}, \sqrt{8}, \sqrt{221}/5, \sqrt{1517}/13, … \} $
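A quick Python check of this formula against the listed values (a sketch):

    markov = [1, 2, 5, 13, 29, 34, 89, 169]
    lagrange = [(9 - 4 / m ** 2) ** 0.5 for m in markov]
    # -> 2.2360..., 2.8284..., 2.9732..., 2.9960..., ...
    #    i.e. sqrt(5), sqrt(8), sqrt(221)/5, sqrt(1517)/13, ...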
Much of this post was based on a wonderful paper "The hyperbolic geometry of Markov's theorem on Diophantine approximation and quadratic forms" by Boris Springborn.
\begin{definition}[Definition:Quadrillion/Short Scale]
'''Quadrillion''' is a name for $10^{15}$ in the short scale system:
:'''One quadrillion''' $= 10^{3 \times 4 + 3}$
\end{definition} | ProofWiki |
Exploring Camera Color Space and Color Correction
Do cameras see the same color as us? Can cameras always accurately reproduce colors that our eyes see? This interactive tutorial explores these questions and many more interesting aspects of camera raw color space. In particular, we will walk you through an important concept in both color science and camera signal processing: color correction, the process of correcting the color perception of a camera such that it is as close to ours as allowed. In the end, you will get to appreciate why you should never trust the color produced by your camera and how you might build your own camera that, in theory, out-performs existing cameras in color reproduction.
Caveats. 1) This tutorial demonstrates the principle of color correction with many important, but subtle, engineering details omitted; we will mention them when appropriate. 2) Color correction is one of the two components in camera color reproduction, the other being white balance (or rather, the camera's emulation of the chromatic adaptation of the human visual system). We have a post that discusses the principles of chromatic adaptation and its application in white balance. The relationship of color correction and white balance is quite tricky, but Andrew Rowlands has a fascinating article that demystifies it for you.
Step 1: Exploring Camera Color Space
In principle, cameras work just like our eyes. Our retina has three types of cone cells (L, M, and S), each with a unique spectral sensitivity, translating light into three numbers (the L, M, S cone responses) that give us color perception. Similar to the three cone types, (most) cameras use three color filters (commonly referred to as the R, G, and B filters), each with a unique spectral sensitivity and, thus, also translate light into three numbers. In this sense, you can really think of a camera as a "weird" kind of human being with unconventional LMS cone fundamentals. Not all cameras use three filters though. Telescope imaging cameras use five filters, just like butterflies!
The left chart below shows the measured camera sensitivity functions of 48 cameras in two recent studies. The 48 cameras are classified into four categories: DSLR, point and shoot, industrial cameras, and smartphone cameras. The sensitivities are normalized such that the most sensitive filter (usually the green filter) peaks at 1. What should be noted is that the sensitivity functions measured here are not just the spectral transmittances of the color filters; rather, they are measured by treating the camera as a black box, and thus reflect the combined effects of everything in the camera that has a spectral response to light, such as the anti-aliasing filter, IR filter, micro-lenses, the photosites, etc.
The default view shows the average sensitivities across the 48 cameras, but you can also select a particular camera from the drop-down list. As a comparison, we also plot the LMS cone fundamentals in the same chart as dashed lines. As is customary, the LMS cone fundamentals are, each, normalized to peak at 1. As is usually the case in color science, these normalizations merely introduce some scaling factors that will be canceled out later if we care about just the chromaticity of a color (i.e., the relative ratio of the primaries).
You can see that the shapes of the camera sensitivity functions more or less resemble those of the cone fundamentals, which is perhaps not all that surprising. After all, cameras are built to reproduce colors that our eyes see. The shapes, however, are not an exact match. Most notably, you will see that the L and M cone responses overlap much more than the R and the G filters overlap. As you can imagine, sensitivities that are overly close to each other won't provide a great ability to distinguish colors. In the extreme case where two filters' sensitivities exactly match, the camera is dichromatic.
Although by default disabled, the chart also contains the CIE 1931 XYZ Color Matching Functions (CMFs). You can enable them by clicking on the legend to the left of the chart. In fact, clicking on a legend label toggles the visibility of the corresponding curve in the chart. The XYZ CMFs are just one linear transformation away from the LMS cone fundamentals, and is used as the "common language" in colorimetry when comparing different color spaces. So we will use XYZ as the connection color space in the rest of the tutorial.
In the same way we can plot the spectral locus in the XYZ color space, we can also plot the spectral locus on a camera's native RGB color space. The 3D plot on the right shows you the spectral locus on the XYZ, LMS, and the camera's RGB color space. The LMS locus is by default disabled, but you can enable it by clicking its legend label. Drag your mouse and spin around the 3D plot. You can see that the spectral locus in XYZ and in camera's RGB space do not match.
You can also draw your own camera sensitivity functions. Select "Custom" from the drop-down list, and start dragging your mouse. As you draw, the spectral locus in the camera RGB space will be dynamically updated on the 3D plot on the right. You can hit the Draw Reset button to clean the drawing.
Step 2: Problem Formulation
The fact that the locus in XYZ and in the RGB locus do not match means that each spectral light has different XYZ and RGB tristimulus values. That in itself is not a problem: the LMS and the XYZ tristimulus values of spectral lights don't match either. Critically, however, the LMS and XYZ color spaces are just one linear transformation away from each other. Any light, not just spectral light, can be converted between the XYZ and the LMS space using one single $3\times 3$ matrix multiplication. So it really doesn't matter in which space you express the color of a light; whether in LMS or XYZ (or any other colorimetric space), it's all the same underlying color.
Here comes the central question of this tutorial: is a camera's RGB color space also just a linear transformation from the XYZ/LMS space? If so, then the raw camera RGB values of any light can be translated to the correct XYZ values of that light, essentially recovering the color of the light from camera captures. In other words, the camera sees the same colors as us, and metamers to our eyes are also metamers to the camera. In general, if the camera raw color space is precisely a linear transformation away from the XYZ color space, the camera is said to satisfy the Luther Condition and the camera color space is said to be colorimetric (in that we can use the camera to measure color). What would happen if a camera doesn't satisfy the Luther Condition? Well, lights that are different to us (i.e., have different XYZ values) might be the same to the camera (i.e., have the same raw RGB values), and vice versa. Metamers to the camera would not be metamers to our eyes.
Stated mathematically, we want to find a $3\times 3$ transformation matrix $T$ that satifies the following equation:
$ \begin{bmatrix} X_0 & X_1 & X_2 & \dots \\ Y_0 & Y_1 & Y_2 & \dots \\ Z_0 & Z_1 & Z_2 & \dots \end{bmatrix} = \begin{bmatrix} T_{00} & T_{01} & T_{02} \\ T_{10} & T_{11} & T_{12} \\ T_{20} & T_{21} & T_{22} \end{bmatrix} \times \begin{bmatrix} R_0 & R_1 & R_2 & \dots \\ G_0 & G_1 & G_2 & \dots \\ B_0 & B_1 & B_2 & \dots \end{bmatrix}, $
where $[X_i, Y_i, Z_i]^T$ is the XYZ values of a light and $[R_i, G_i, B_i]^T$ is the camera raw RGB values of the same light. Since the transformation matrix $T$ has 9 unknowns but $T$ should ideally work for any arbitrary light, i.e., infinitely many equations, the system of equations is over-determined. Instead of finding an exact solution, we will instead settle for a best-fit $T$ that works well for a set of lights whose color reproduction we care about.
Step 3: The Correction Targets
What are the lights we care to reproduce their colors? We could certainly just use the spectral lights in the chart above, or could even just pick some random lights. But spectral lights are not common in the real world. We ideally want to correct for colors that we normally see in the real world. Here we provide two options, and you can switch between the two from the drop-down list below.
The first one is human skins. We use the human skin reflectance dataset measured and published by Cooksey, Allen, and Tsai at National Institute of Standards and Technology. The original dataset has 100 participants, and we sampled 24 participants to have a more or less even distribution of the skin tone from light to dark skins. The spectral reflectances of the 24 participants are shown in the chart below. Each sampled participant is denoted by "PN", where N is the participant ID in the original dataset. P41 has the lightest skin tone and P43 has the darkest tone. The legend on the left shows the closest sRGB color of the corresponding skin under D65 Standard Illuminant. One could argue that the measurement wasn't diverse enough — we definitely see darker skin tones in real world.
Another option, which is perhaps more commonly used in practical camera color calibration, is what's called the ColorChecker color redition chart, which contains 24 patches whose "spectral reflectances mimic those of natural objects such as human skin, foliage, and flowers." These are perfect target lights for us to calibrate the transformation matrix. While the original manufacturer does not publish the spectral data, people have measured and published the spectral reflectance of these patches. The ones we plot above are from BabelColor, but you might see slightly different versions.
Step 4: Pick Illuminant
You can select an illuminant in the chart above, which shows the SPD of a few CIE Standard Illuminants normalized to peak at unity. You can also draw your own illuminant by selecting "Custom" from the drop-down list.
How do we obtain the XYZ and RGB values of a ColorChecker patch? These patches do not emit lights themselves; they reflect lights. Therefore, we must pick an illuminant for color correction. This inherently suggests that the color correction matrix will be dependent on the illuminant.
Mathematically, given an illuminant $\Phi(\lambda)$, the XYZ and the camera raw RGB values of a patch $i$ (with a spectral reflectance of $S_i(\lambda)$) are calculated as:
$X_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)X(\lambda)$
$Y_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)Y(\lambda)$
$Z_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)Z(\lambda)$
$R_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)R(\lambda)$
$G_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)G(\lambda)$
$B_i = \sum_{400}^{720}\Phi(\lambda)S_i(\lambda)B(\lambda)$
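A minimal NumPy sketch of these summations, assuming the illuminant SPD, the patch reflectance and the sensitivity curves are all sampled on a common wavelength grid (all names here are illustrative):

    import numpy as np

    def tristimulus(spd, reflectance, sensitivities):
        # spd, reflectance: 1-D arrays on a common wavelength grid (e.g. 400-720 nm).
        # sensitivities: (3, n_wavelengths) array, e.g. the XYZ CMFs
        # or the camera RGB sensitivity curves.
        stimulus = spd * reflectance          # light actually reaching the sensor/eye
        return sensitivities @ stimulus       # sum over wavelengths for each channel

    # XYZ_i = tristimulus(d65, patch_reflectance, xyz_cmfs)      # ground-truth side
    # RGB_i = tristimulus(d65, patch_reflectance, cam_rgb_sens)  # camera side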
In practical camera color calibration, instead of applying these equations to calculate the theoretical RGB values, what people usually do is to actually take a photo of the ColorChecker patches and read the raw RGB values. This is necessary because we mostly won't be able to get the exact SPD of the capturing illuminant, and sometimes we don't want to trust the spectral reflectance data. After all, the equations above simulate how RGB values are generated by cameras anyways.
But if we don't have the illuminant SPD, how do we get the XYZ values of the patches? Fortunately, people have calculated the XYZ values of the patches under different CIE Standard Illuminants. We would then have to estimate the illuminant of the capturing scene, and find the closest illuminant for which the XYZ values of the patches are available. Estimating the scene illuminant is a difficult topic on its own, and has implications in white balance as well, which we leave for another tutorial.
Step 5: Calculate Color Correction Matrix
With the illuminant selected, we now have everything we need. Clicking "Generate Equations" concretizes the linear system:
$\begin{bmatrix} \boxed{??} \\ \boxed{??} \\ \boxed{??} \end{bmatrix}$ $\small{ = \begin{bmatrix} T_{00} & T_{01} & T_{02} \\ T_{10} & T_{11} & T_{12} \\ T_{20} & T_{21} & T_{22} \end{bmatrix} \times}$ $\begin{bmatrix} \boxed{??} \\ \boxed{??} \\ \boxed{??} \end{bmatrix}$
The first matrix has 24 columns, each representing the XYZ values of a ColorChecker patch under your chosen illuminant ${\boxed{??}}$. The last matrix also has 24 columns, each representing the RGB values of a patch in the camera RGB space under the same illuminant. We get an over-determined system with 72 equations and 9 unknowns.
Before solving the system, let's look at the patches in both the XYZ and the camera RGB spaces, which is shown in the left chart below. Evidently, the XYZ and the RGB values of the patches do not overlap, but if you spin around the chart you'll likely see that the relative distributions of the patches within the XYZ and the RGB space are roughly similar, suggesting that a linear transformation might exist.
Solving the Optimization Problem
How to find the best fit $T$? The common approach is to formulate this as a linear least squares problem, which essentially minimizes the sum of the squared differences between the reconstructed XYZ values and the "ground truth" XYZ values. Critically, squared differences represent Euclidean distances in the XYZ space. Note that Euclidean distance in the XYZ space is not proportional to the perceptual color difference. Therefore, it is usually better to convert the color from the XYZ space to a perceptually uniform color space, such as the CIELAB, before formulating the least squares, or using other perceptually-driven color different metrics. For simplicity, however, we will simply use the Euclidean distance in the XYZ space here to show you the idea.
The optimal solution to the linear least squares problem in the form of $A=TB$ is: $T=AB^T (BB^T)^{-1}$, where $A$ is our XYZ matrix and $B$ is the RGB matrix. Now click "Calculate Matrix" to find $T$! The best-fit matrix is calculated as:
${\begin{bmatrix} \boxed{??} & \boxed{??} & \boxed{??} \\ \boxed{??} & \boxed{??} & \boxed{??} \\ \boxed{??} & \boxed{??} & \boxed{??} \end{bmatrix}}$ , for camera ${\boxed{??}}$ and illuminant ${\boxed{??}}$.
The transformed/corrected XYZ values of the patches will be shown in the chart above. Look for the label "XYZ (Corrected)". You can see that the corrected XYZ values pretty much align with the "ground truth" XYZ values. But they don't exactly overlap, because the transformation matrix is just the best fit, not the perfect solution. To see the color correction error, the heatmap on the right shows the difference between the true color and the corrected color for the 24 patches using the CIE $\Delta E^*$ metric (in reality, one might want to use the $\Delta E^*$ metric as the loss function for optimization).
This color correction process described so far is pretty much what's done in real cameras, which just have a few additional customary normalizations. For instance, when the RGB values are directly read from the raw captures, they are normalized to the range of $[0, 1]$ using the maximum raw value of the sensor. Also, the transformation matrix $T$ is normalized such that the capturing illuminant (${\boxed{??}}$ here), when normalized to have a Y value of 1, saturates one of the RGB channels (usually the G channel is the first to saturate since the green filter has the highest sensitivity, as is evident in the first chart). See Section 2.5 of Andrew Rowlands' article for details.
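A minimal NumPy sketch of this closed-form solution (names illustrative; no normalization or perceptual weighting applied):

    import numpy as np

    def color_correction_matrix(xyz, rgb):
        # xyz, rgb: (3, 24) arrays of patch values under the same illuminant.
        # Least-squares best fit of xyz ~= T @ rgb, i.e. T = A B^T (B B^T)^-1.
        A, B = np.asarray(xyz, float), np.asarray(rgb, float)
        return A @ B.T @ np.linalg.inv(B @ B.T)

    # corrected_xyz = T @ rgb   # reconstructed XYZ values of the patches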
Customizing the Camera
If you change the camera and the illuminant choice, simply click the "Generate Equations" and "Calculate Matrix" buttons again (in that order). As you change the camera and illuminant, you will see that the correction matrix changes too, confirming that the correction matrix is inherently tied to a particular camera and calibrating illuminant. Here are a few cool things you are invited to try:
Draw camera sensitivity functions that mimic the LMS cone fundamentals or the XYZ CMFs. You will see that the color reproduction error will be relatively small.
Make each camera sensitivity function narrow and have little overlap across the spectrum.
Make each camera sensitivity function narrow and have significant overlap.
Make each camera sensitivity function wide and overlap across the spectrum.
Step 6: Interpreting Camera Color Space
With the transformation matrix $T$ calculated, we can use it to get more insights into a particular camera's color space. The TL;DR is that a typical camera's color space contains imaginary colors. While the observations we are about to make are generally true, let's take one specific camera just so we can be concrete. Click the "Choose iPhone 12 Pro Max" button to choose iPhone 12 Pro Max as the target camera and D65 as the illuminant. Everything on this page will be updated accordingly.
Differences Between Skin Tones
If we use the human skin tones as the correction target, we'll see that darker skins generally have worse color correction results compared to lighter skins, as is evident in the heatmap above, where bottom rows represent darker skin tones and have higher $\Delta E^*$ errors. A seemingly neutral correction algorithm, as implemented in this tutorial, results in racial bias manifested as different color reproduction accuracy measures across skin tones. There is a great talk by Theodore Kim on how graphics and camera imaging research could, independent of any individual intent, contain racial biases.
Note that this is not to say that the color correction algorithm used in a particular camera will always produce the exact bias as bad as discussed above. Our intention is to provide a concrete example where camera imaging could inadvertently introduce racial bias.
Are Spectral Lights Corrected?
The first question you can ask is, does $T$ correct colors of other lights that are not used for color correction? In particular, we are interested in how the spectral lights are corrected. Go back to the spectral locus plot in Step 1 and click the "Draw Corrected Locus" button, which will plot the transformed spectral locus (the "XYZ (Corrected)" curve). Ideally, you want that locus to precisely match the actual XYZ spectral locus, but most likely you will see that they are way off. This is perhaps not all that surprising, because the spectral lights are not used for calculating the best-fit matrix.
We can gain a better understanding of the correction results in the chromaticity plot. The chart below plots the "ground truth" spectral locus and the 24 patches in the xy-chromaticity plot, and then overlays the reconstructed locus and patches transformed from the camera RGB space using matrix $T$. Hovering on any point will show the coupled counterpart as well. Notice how the original 24 patches and the reconstructed ones almost overlap (because they are used to estimate $T$), but the true and corrected spectral locus are miles apart. Those reconstructed spectral colors, when converted to an output-referred color space, will give you incorrect colors.
What's a Camera's Gamut Like?
Recall two things here. First, gamut refers to all the colors that can be produced by a color space by mixing (non-negative amounts of) the primaries of that color space. Second, the area enclosed by the spectral locus represents all the colors that the human visual system can see.
If you click the "sRGB gamut" legend label in the plot above, you will see the gamut of the sRGB color space. It's a triangle inside the spectral locus. The three vertices of the triangle represent the three primaries of the sRGB color space. Clearly, the sRGB primaries are real colors and all the colors produced by the sRGB color space are real colors, too, but there are some real colors that can't be produced by sRGB. An interesting observation is that all but one of the ColorChecker patches lie inside the sRGB gamut. That is intentional. sRGB is the most commonly used color space, and you would want to be able to reproduce its gamut well. By choosing colors within the sRGB gamut as the correction target, it's likely that other colors in the gamut can be reasonably reconstructed as well.
Now click the "Camera RGB gamut" legend label in the plot above. You will see the gamut of ${\boxed{??}}$'s color space, which is still a triangle, because we are still mixing the RGB primaries through a linear combination. This triangle is obtained simply by multiplying $T$ with $[[0, 0, 1], [0, 1, 0], [1, 0, 0]]$ (assuming the raw values are normalized to the range of $[0, 1]$) and converting the results to chromaticities. What this means is that if you manually set the RAW pixel values (e.g., through a raw image editor) and go through the color correction pipeline you will end up at a point within that triangle.
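A small NumPy sketch of this computation (function name illustrative; using the identity matrix for the raw primaries is equivalent to the reversed-column matrix quoted above, up to the order of the rows):

    import numpy as np

    def primaries_xy(T):
        # Chromaticities of the camera RGB primaries after correction by T.
        raw_primaries = np.eye(3)             # raw (R, G, B) = (1,0,0), (0,1,0), (0,0,1)
        XYZ = T @ raw_primaries               # each column: corrected XYZ of one primary
        return (XYZ[:2] / XYZ.sum(axis=0)).T  # rows of (x, y) = (X, Y) / (X + Y + Z)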
You might be wondering, where do real colors reside then? Are they in the area enclosed by the corrected/reconstructed spectral locus? Not really. The reason is that the corrected spectral locus no longer has our familiar horseshoe shape and, more importantly, is no longer convex. So, for instance, if you connect spectral lights $520~nm$ and $550~nm$ from the corrected locus, any point on that line represents the color of a real light mixed from $520~nm$ and $550~nm$. If you use iPhone 12 Pro Max to take a photo of a light mixed from $520~nm$ and $550~nm$ and transform the raw RGB values using matrix $T$, you will end up on that line in the xy-chromaticity plot. But that line lies outside the locus. So we can no longer say that real colors lie inside the locus.
In fact, it is the convex hull of the corrected spectral locus that contains the colors of all the real lights. Click the "Convex Hull of Spectral Locus in Camera" legend label to reveal the convex hull. Any point that's outside the convex hull but inside the camera gamut will have valid (i.e., positive) raw RGB values, but they will never show up in a real ${\boxed{??}}$ capture because they represent imaginary colors. | CommonCrawl |
\begin{document}
\title[Closed sets of functions]{Closed sets of finitary functions between products of finite fields of coprime order}
\author{Stefano Fioravanti}
\subjclass{08A40}
\address{
Institut f\"ur Algebra,
Johannes Kepler Universit\"at Linz,
4040 Linz,
Austria}
\email{\tt [email protected]}
\thanks{The research was supported by the Austrian Science Fund (FWF):P29931.}
\keywords{Clonoids, Clones}
\begin{abstract}
We investigate the finitary functions from a finite product of finite fields $\prod_{j =1}^m\mathbb{F}_{q_j} = {\mathbb K}$ to a finite product of finite fields $\prod_{i =1}^n\mathbb{F}_{p_i} = {\mathbb{F}}$, where $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. An $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid is a subset of these functions which is closed under composition from the right and from the left with linear mappings.
We give a characterization of these subsets of functions through the ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_p^{{\mathbb K}}$, where ${\mathbb K}^{\times}$ is the multiplicative monoid of ${\mathbb K} = \prod_{i=1}^m {\mathbb{F}}_{q_i}$. Furthermore we prove that each of these subsets of functions is generated by a set of unary functions and we provide an upper bound for the number of distinct $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids.
\end{abstract}
\maketitle
\section{Introduction}
Since P. Hall's abstract definition of a clone, the problem of describing sets of finitary functions from a set $A$ to a set $B$ which satisfy some closure properties has been a fruitful branch of research. E. Post's characterization of all clones on a two-element set \cite{Pos.TTVI} can be considered as a foundational result in this field, which was developed further, e. g., in \cite{Ros.MCOA,PK.FUR,Sze.CIUA,Leh.CCOF}. Starting from \cite{BJK.TCOC}, clones are used to study the complexity of certain constraint satisfaction problems (CSPs).
The aim of this paper is to describe sets of functions from a finite product of finite fields $\prod_{j =1}^m\mathbb{F}_{q_j} = {\mathbb K}$ to a finite product of finite fields $\prod_{i =1}^n\mathbb{F}_{p_i} = {\mathbb{F}}$, where $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. The sets of functions we are interested in are closed under composition from the left and from the right with linear mappings. Thus we consider sets of functions with different domains and codomains; such sets are called clonoids and are investigated, e. g., in \cite{AM.FGEC}. Let $\mathbf{B}$ be an algebra, and let $A$ be a non-empty set. For a subset $C$ of $\bigcup_{n \in {\mathbb{N}}} B^{A^n}$ and $k\in {\mathbb{N}}$, we let $C^{[k]} :=C \cap B^{A^k}$. According to Definition $4.1$ of \cite{AM.FGEC} we call $C$ a \emph{clonoid} with source set $A$ and target algebra $\mathbf{B}$ if
\begin{center}
\begin{enumerate}
\item [(1)] for all $k \in {\mathbb{N}}$: $C^{[k]}$ is a subuniverse of $\mathbf{B}^{A^k}$, and
\item [(2)] for all $k,n \in {\mathbb{N}}$, for all $(i_1,\dots,i_k) \in \{1,\dots,n\}^k$, and for all $c \in C^{[k]}$, the function $c' \colon A^n \to B$ with $c'(a_1,\dots,a_n) := c(a_{i_1},\dots,a_{i_k})$ lies in $C^{[n]}$.
\end{enumerate}
\end{center}
By $(1)$ every clonoid is closed under composition with operations of $\mathbf{B}$ on the left. In particular we are dealing with those clonoids whose target algebra is the ring $\prod_{i=1}^m\mathbb{F}_{p_i}$ that are closed under composition with linear mappings from the right side.
\begin{Def}
\theoremstyle{definition}
\label{DefClo-2}
Let $m,s \in {\mathbb{N}}$ and let ${\mathbb K}= \prod_{j=1}^m\mathbb{K}_{j}$, ${\mathbb{F}}= \prod_{i=1}^s\mathbb{F}_{i}$ be products of fields. An \emph{$({\mathbb{F}},{\mathbb K})$-linearly closed clonoid} is a non-empty subset $C$ of $\bigcup_{k \in \mathbb{N}} \prod_{i=1}^s\mathbb{F}_{i}^{{\prod_{j=1}^m\mathbb{K}_{j}^k}}$ with the following properties:
\begin{enumerate}
\item[(1)] for all $n \in {\mathbb{N}}$, $\mathbfsl{a}, \mathbfsl{b} \in \prod_{i=1}^s\mathbb{F}_i$, and $f,g \in C^{[n]}$:
\begin{equation*}
\mathbfsl{a}f + \mathbfsl{b}g \in C^{[n]};
\end{equation*}
\item[(2)] for all $l,n \in {\mathbb{N}}$, $f \in C^{[n]}$, $(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) \in \prod_{j=1}^m\mathbb{K}_{j}^l$, and $A_j\in \mathbb{K}^{n \times l}_{j}$:
\begin{equation*}
g\colon (\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) \mapsto f(A_1\cdot \mathbfsl{x}_1^t,\cdots,A_m\cdot \mathbfsl{x}_m^t) \text{ is in } C^{[l]},
\end{equation*}
\end{enumerate}
where the juxtaposition $\mathbfsl{a}f$ denotes the Hadamard product of the two vectors (i.e. the component-wise product $(a_1,\dots,a_n)\cdot (b_1,\dots,b_n) = (a_1b_1,\dots,$ $a_nb_n)$).
\end{Def}
Clonoids naturally appear in the study of promise constraint satisfaction problems (PCSPs). These problems are investigated, e. g., in \cite{BG.PCSS}, and in \cite{BKO.AATP} clonoid theory has been used to provide an algebraic approach to PCSPs. In \cite{Spa.OTNO} A. Sparks investigates the number of clonoids for a finite set $A$ and finite algebra $\mathbf{B}$ closed under the operations of $\mathbf{B}$. In \cite{Kre.CFSO} S. Kreinecker characterized linearly closed clonoids on $\mathbb{Z}_p$, where $p$ is a prime. Furthermore, a description of the set of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids is a useful tool to investigate (polynomial) clones on ${\mathbb{Z}}_n$, where $n$ is a product of distinct primes, or to represent polynomial functions of semidirect products of groups.
In \cite{Fio.CSOF} there is a complete description of the structure of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids in case ${\mathbb{F}}$ and ${\mathbb K}$ are fields and the results we will present are a generalization of this description.
The main result of this paper (Theorem \ref{Thm14-2}) states that every $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid is generated by its subset of unary functions.
\begin{Thm}
\label{Thm14-2}
Let ${\mathbb K} = \prod_{i=1}^m\mathbb{F}_{q_i}$, ${\mathbb{F}}= \prod_{i=1}^s\mathbb{F}_{p_i}$ be products of fields such that $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. Then every $(\mathbb{F},\mathbb{K})$-linearly closed clonoid is generated by a set of unary functions and thus there are finitely many distinct $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids.
\end{Thm}
The proof of this result is given in Section~\ref{AllGen2}. From this follows that under the assumptions of Theorem \ref{Thm14-2} we can bound the cardinality of the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids.
Furthermore, in Section~\ref{TheLattice-2} we find a description of the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids as the direct product of the lattices of all ${\mathbb{F}}_{p_i}[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_{p_i}^{{\mathbb K}}$, where ${\mathbb K}^{\times}$ is the multiplicative monoid of ${\mathbb K} = \prod_{i=1}^m {\mathbb{F}}_{q_i}$. Moreover, we provide a concrete bound for the cardinality of the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids.
\begin{Thm}
\label{Corfinale-3}
Let ${\mathbb{F}} = \prod_{i=1}^s{\mathbb{F}}_{p_i}$ and ${\mathbb K} = \prod_{j=1}^m{\mathbb{F}}_{q_j}$ be products of finite fields such that $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. Then the cardinality of the lattice of all $(\mathbb{F},\mathbb{K})$-linearly closed clonoids $\mathcal{L}({\mathbb{F}},{\mathbb K})$ is bounded by:
\begin{equation*}
|\mathcal{L}({\mathbb{F}},{\mathbb K})| \leq \prod_{i=1}^s\sum_{1 \leq r \leq n}{{ n}\choose{r}}_{p_i},
\end{equation*}
where $n = \prod_{j = 1}^mq_j$ and
\begin{equation*}
{{n}\choose{h}}_q = \prod_{i=1}^h \frac{q^{n-h+i}-1}{q^i-1}
\end{equation*}
with $q \in {\mathbb{N}}\backslash\{1\}$.
\end{Thm}
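For instance, for ${\mathbb{F}} = {\mathbb{F}}_2$ and ${\mathbb K} = {\mathbb{F}}_3$ (so that $s = 1$, $n = 3$, and $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime) the bound of Theorem \ref{Corfinale-3} evaluates to
\begin{equation*}
|\mathcal{L}({\mathbb{F}}_2,{\mathbb{F}}_3)| \leq {{3}\choose{1}}_2 + {{3}\choose{2}}_2 + {{3}\choose{3}}_2 = 7 + 7 + 1 = 15.
\end{equation*}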
\section{Preliminaries and notation}\label{Preliminaries-2}
We use boldface letters for vectors, e. g., $\mathbfsl{u} = (u_1,\dots,u_n)$ for some $n \in {\mathbb{N}}$. Moreover, we will use $\langle\mathbfsl{v}, \mathbfsl{u}\rangle$ for the scalar product of the vectors $\mathbfsl{v}$ and $\mathbfsl{u}$.
Let $f$ be an $n$-ary function from an additive group $\mathbf{G}_1$ to a group $\mathbf{G}_2$. We say that $f$ is \emph{$0$-preserving} if $f(0_{\mathbf{G}_1},\dots,0_{\mathbf{G}_1}) = 0_{\mathbf{G}_2}$.
A non-trivial example of an $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid is the set of all $0$-preserving finitary functions from ${\mathbb K}$ to ${\mathbb{F}}$. The $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids form a lattice with the intersection as meet and the $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid generated by the union as join. The top element of the lattice is the $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid of all functions and the bottom element consists of only the constant zero functions. We write \index{$\mathrm{Clg}(S)$}$\mathrm{Clg}(S)$ for the $({\mathbb{F}},{\mathbb K})$-linearly closed clonoid generated by a set of functions $S$.
In order to prove Theorem \ref{Thm14-2} we introduce the definition of $0$-absorbing function. This concept is slightly different from the one in \cite{Aic.SSOE} since we consider the source set to be split into a product of sets. Nevertheless, some of the techniques in \cite{Aic.SSOE} can be used also with our definition of $0$-absorbing function.
Let $A_1, \dots, A_m$ be sets, let $0_{A_i} \in A_i$, and let $J \subseteq [m]$. For all $\mathbfsl{a} = (a_1,\dots,a_m) \in \prod_{i=1}^mA_i$ we define \index{$\mathbfsl{a}^{(J)}$}$\mathbfsl{a}^{(J)}\in \prod_{i=1}^mA_i$ by $(\mathbfsl{a}^{(J)})_i = a_i$ for $i \in J$ and $(\mathbfsl{a}^{(J)})_i = 0_{A_i}$ for $i \in [m] \backslash J$.
Let $A_1, \dots, A_m$ be sets, let $0_{A_i} \in A_i$, let $\mathbf{G} = \langle G, +, -, 0_G \rangle$ be an abelian group, let $f \colon \prod_{i=1}^mA_i \rightarrow G$, and let $I \subseteq [m]$. By \index{$\mathrm{Dep}(f)$}$\mathrm{Dep}(f)$ we denote $\{i \in [m] \mid f \text{ depends on its $i$th set argument}\}$. We say that $f$ is $0_{A_j}$-\emph{absorbing} in its $j$th argument if for all $\mathbfsl{a} = (a_1, \dots , a_m) \in \prod_{i=1}^mA_i$ with $a_j = 0_{A_j}$ we have $f(\mathbfsl{a}) = 0_G$. We say that $f$ is $0$-\emph{absorbing} in $I$ if $\mathrm{Dep}(f) \subseteq I$ and for every $i \in I$, $f$ is $0_{A_i}$-absorbing in its $i$th argument.
Using the same proof as for \cite[Lemma $3$]{Aic.SSOE} we can establish an interesting property of $0$-absorbing functions.
\begin{Lem}
\label{Lem0absor}
Let $A_1, \dots, A_m$ be sets, let $0_{A_i}$ be an element of $A_i$ for all $i \in [m]$. Let $\mathbf{B} = \langle B, +, -, 0_B\rangle$ be an abelian group, and let $f \colon \prod_{i=1}^mA_{i} \rightarrow B$. Then there is exactly one sequence $\{f_I\}_{I \subseteq [m]}$ of functions from $\prod_{i=1}^mA_{i}$ to $B$ such that for each $I \subseteq [m]$, $f_I$ is $0$-absorbing in $I$ and $f = \sum_{I \subseteq [m]} f_I$. Furthermore, each function $f_I$ lies in the subgroup $\mathbf{F}$ of $\mathbf{B}^{\prod_{i=1}^mA_{i}}$ that is generated by the functions $\mathbfsl{x} \rightarrow f(\mathbfsl{x}^{(J)})$, where $J\subseteq [m]$.
\end{Lem}
\begin{proof}
The proof is essentially the same as that of \cite[Lemma $3$]{Aic.SSOE}, substituting $A^m$ with $\prod_{i =1}^mA_i$. We define $f_I$ by recursion on $|I|$. We set $f_{\emptyset} (\mathbfsl{a}) := f(0_{A_1}, \dots , 0_{A_m})$ for all $\mathbfsl{a} \in \prod_{i=1}^mA_i$, and for $I \not=\emptyset$ we define $f_{I}$ by:
\begin{equation}
f_I(\mathbfsl{a}) := f(\mathbfsl{a}^{(I)}) - \sum_{J \subset I} f_J (\mathbfsl{a}),
\end{equation}
for all $\mathbfsl{a} \in \prod_{i=1}^mA_i$.
\end{proof}
Furthermore, as in \cite{AM.CWTR}, we can see that the component $f_I$ satisfies $f_I(\mathbfsl{a})$ $ = \sum_{J\subseteq I} (-1)^{|I|+|J|}$ $f(\mathbfsl{a}^{(J)} )$. From now on we will not specify the element that the functions absorb since it will always be the $0$ of a finite field.
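For instance, for $m = 2$ and $I = [2]$ this formula reads
\begin{equation*}
f_{[2]}(a_1,a_2) = f(a_1,a_2) - f(a_1,0_{A_2}) - f(0_{A_1},a_2) + f(0_{A_1},0_{A_2}),
\end{equation*}
and one checks directly that this function vanishes whenever $a_1 = 0_{A_1}$ or $a_2 = 0_{A_2}$, i.e., that it is $0$-absorbing in $[2]$.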
\section{Unary generators of $(\mathbb{F},\mathbb{K})$-linearly closed clonoid}\label{AllGen2}
In this section our aim is to find an analogue of \cite[Theorem $4.2$]{Fio.CSOF} for a generic $(\mathbb{F},\mathbb{K})$-linearly closed clonoid $C$, which will allow us to generate $C$ with a set of unary functions. In general we will see that it is the unary part of an $(\mathbb{F},\mathbb{K})$-linearly closed clonoid that determines the clonoid. To this end we shall show the following lemmata. We denote by \index{$\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_i}}$}$\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_i}} = (1,0,\dots,0)$ the first member of the canonical basis of ${\mathbb{F}}^n_{q_i}$ as a vector space over ${\mathbb{F}}_{q_i}$. Let $f: \prod_{i=1}^m {\mathbb{F}}^k_{q_i} \rightarrow {\mathbb{F}}_{p}$. Let $s \leq m$ and let ${\mathbb K} = \prod_{i=1}^s {\mathbb{F}}_{q_i}$. Then we denote by $f\mid_{{\mathbb K}}: \prod_{i=1}^s {\mathbb{F}}^k_{q_i} \rightarrow {\mathbb{F}}_{p}$ the function such that $f\mid_{{\mathbb K}}(\mathbfsl{x}_1,\dots,\mathbfsl{x}_s) = f(\mathbfsl{x}_1,\dots,\mathbfsl{x}_s,0,\dots,0)$.
\begin{Lem}
\label{Lem1-2}
Let $f,g\colon \prod_{i=1}^m\mathbb{F}_{q_i}^n \to \mathbb{F}_p$ be functions, and let $\mathbfsl{b}_1, \dots, \mathbfsl{b}_m$ be such that $\mathbfsl{b}_i \in \mathbb{F}_{q_i}^n \backslash$ $\{(0,\dots,0)\}$ for all $i \in [m]$. Assume that $f(\lambda_1\mathbfsl{b}_1,\dots,\lambda_m\mathbfsl{b}_m) = g(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_1}},\dots,\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_m}})$, for all $\lambda_1 \in \mathbb{F}_{q_1},\dots,\lambda_m \in \mathbb{F}_{q_m}$, and $f(\mathbfsl{x}) = g(\mathbfsl{y}) = 0$ for all $\mathbfsl{x} \in \prod_{i=1}^m\mathbb{F}_{q_i}^n \backslash \{(\lambda_1\mathbfsl{b}_1,\dots$ $,\lambda_m\mathbfsl{b}_m) \mid (\lambda_1,\dots,\lambda_m) \in \prod_{i=1}^m\mathbb{F}_{q_i}\}$ and $\mathbfsl{y} \in \prod_{i=1}^m\mathbb{F}_{q_i}^n \backslash \{(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_1}},\dots,\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^n_{q_m}})\mid (\lambda_1,\dots,$ $\lambda_m) \in \prod_{i=1}^m\mathbb{F}_{q_i}\}$. Then $f \in \mathrm{Clg}(\{g\})$.
\end{Lem}
\begin{proof}
For $j \leq m$ let $A_j$ be any invertible $n \times n$-matrix over ${\mathbb{F}}_{q_j}$ such that $A_j\mathbfsl{b}_j = \mathbfsl{e}^{{\mathbb{F}}^n_{q_j}}_1$. Then it is straightforward to check that $f(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) =
g(A_1\mathbfsl{x}_1,$ $\dots,A_m\mathbfsl{x}_m)$.
\end{proof}
\begin{Lem}
\label{Rem-sum-0-pres2}
Let $q_1,\dots,q_m$ and $p$ be powers of primes and let ${\mathbb K} = \prod_{i=1}^m{\mathbb{F}}_{q_i}$. Let $h \leq m$ and let ${\mathbb K}_1 = \prod_{i=1}^h{\mathbb{F}}_{q_i}$. Let $C$ be an $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid and let
\begin{equation*}
C\mid_{{\mathbb K}_1} := \{g \mid \exists g' \in C \colon g'\mid_{{\mathbb K}_1} = g\}.
\end{equation*}
Let $f: \prod_{i=1}^m{\mathbb{F}}^s_{q_i} \rightarrow {\mathbb{F}}_p$ be such that $\mathrm{Dep}(f) = [h]$. Then $f \in \mathrm{Clg}(C^{[1]})^{[s]}$ if and only if $f \mid_{{\mathbb K}_1} \in \mathrm{Clg}(C\mid_{{\mathbb K}_1}^{[1]})^{[s]}$.
\end{Lem}
\begin{proof}
It is clear that if $f \in \mathrm{Clg}(C^{[1]})$ then $f \mid_{{\mathbb K}_1} \in \mathrm{Clg}(C\mid_{{\mathbb K}_1}^{[1]})$, since it suffices to restrict all the unary generators of $f$ to ${\mathbb K}_1$. Conversely, let $S'$ be a set of unary generators of $f\mid_{{\mathbb K}_1}$. Let $S \subseteq C^{[1]}$ be defined by
\begin{align*}
S := \{&g \mid \exists g' \in S' : g(x_1,\dots,x_h,0,\dots,0) = g'(x_1,\dots,x_h), \\&\text{ for all } (x_1,\dots,x_h) \in\prod_{i=1}^h{\mathbb{F}}_{q_i} \}.
\end{align*}
From $\mathrm{Dep}(f) = [h]$ it follows that $S$ is a set of unary generators of $f$.
\end{proof}
\begin{Lem}
\label{Lemt_k}
Let $q_1,\dots,q_m$ and $p$ be powers of primes with $\prod_{i=1}^mq_i$ and $p$ coprime. Let ${\mathbb K} = \prod_{i=1}^m{\mathbb{F}}_{q_i}$. Let $C$ be an $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid, let $g \in C^{[1]}$ be $0$-absorbing in $[m]$, and let $t_k\colon \prod_{i=1}^m\mathbb{F}_{q_i}^k \to \mathbb{F}_p$ be defined by:
\begin{align*}
&t_k(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}},\dots,\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}) = g(\lambda_1,\dots,\lambda_m) \text{ for all } (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}
\\&t_k(\mathbfsl{x}) = 0
\\&\text{ for all } \mathbfsl{x} \in \prod_{i=1}^m\mathbb{F}^k_{q_i}\backslash \{( \lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}}, \dots,\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}) \mid (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}\}.
\end{align*}
Then $t_k$ is $0$-absorbing in $[m]$, with $A_i = {\mathbb{F}}_{q_i}^k$ and $0_{A_i} = (0_{{\mathbb{F}}_{q_i}},\dots,0_{{\mathbb{F}}_{q_i}})$. Furthermore, $t_k \in \mathrm{Clg}(C^{[1]})$ for all $k \in {\mathbb{N}}$.
\end{Lem}
\begin{proof}
Since $g$ is $0$-absorbing in $[m]$, also $t_k$ is $0$-absorbing in $[m]$.
Moreover, we prove that $t_k \in \mathrm{Clg}(C^{[1]})$ by induction on $k$.
Case $k =1$: if $k = 1$, then $t_1 = g$ is a unary function of $C^{[1]}$.
Case $k>1$: we assume that $t_{k-1} \in \mathrm{Clg}(C^{[1]})$.
For all $1 \leq i \leq m$ we define the two sets of mappings $T_i^{[k]} $ and $R_i^{[k]}$ from ${\mathbb{F}}_{q_i}^k$ to ${\mathbb{F}}_{q_i}^{k-1}$ by:
\begin{align*}
T_i^{[k]} &:=\{u_{a}\colon (x_1,\dots,x_k) \mapsto (x_1-ax_2,x_3,\dots,x_k)\mid a \in \mathbb{F}_{q_i}\}
\\R_i^{[k]} &:=\{w_{a}\colon (x_1,\dots,x_k) \mapsto (ax_2,x_3,\dots,x_k)\mid a \in \mathbb{F}_{q_i}\backslash\{0\}\}.
\end{align*}
Let $P_i^{[k]} := T_i^{[k]} \cup R_i^{[k]}$. Furthermore, we define the function $c ^{[k]}\colon \bigcup_{i=1}^m P_i^{[k]} \rightarrow {\mathbb{N}}$ by:
\begin{align*}
c^{[k]}(h)= \begin{cases} 0 & \text{if } h \in \bigcup_{i=1}^mT_i^{[k]} \\
1 & \text{if } h \in \bigcup_{i=1}^mR_i^{[k]}. \end{cases}
\end{align*}
Let us define the function $r_k\colon \prod_{i=1}^m\mathbb{F}_{q_i}^k \rightarrow \mathbb{F}_p$ by:
\begin{align}\label{eq4-2}
&r_k(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = \nonumber \\& = \sum_{h_1 \in P_1^{[k]},\dots,h_m \in P_m^{[k]}} (-1)^{\sum_{i=1}^m \text{c}^{[k]}(h_i)} t_{k-1}(h_1(\mathbfsl{x}_1),\dots,h_m(\mathbfsl{x}_m)),
\end{align}
for all $\mathbfsl{x}_i \in \mathbb{F}_{q_i}^k$.
\textbf{Claim:} $r_k(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = \prod_{i =1}^mq_i \cdot t_{k}(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m)$ for all $(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) \in \prod_{i=1}^m {\mathbb{F}}_{q_i}^k$.
Subcase $\exists i \in [m], 3 \leq j \leq k$ with $(\mathbfsl{x}_i)_j \not=0$:
By definition of $t_{k-1}$, we can see that in \eqref{eq4-2} every summand vanishes if there exist $i \in [m]$ and $3 \leq j \leq k$ with $(\mathbfsl{x}_i)_j \not=0$. Thus $r_k(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = \prod_{i =1}^mq_i \cdot t_{k}(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = 0$ in this case.
Subcase $\exists l \in [m] $ with $(\mathbfsl{x}_l)_2 \not=0$ and $(\mathbfsl{x}_i)_j =0$ for all $ i \in [m], 3 \leq j \leq k$:
We prove that $r_k(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = 0$. We can see that for all $(x_1,x_2) \in {\mathbb{F}}_{q_l} \times {\mathbb{F}}_{q_l}\backslash\{0\}$ and for all $b \in {\mathbb{F}}_{q_l}\backslash\{0\}$, there exists $a \in {\mathbb{F}}_{q_l}$ such that $bx_2 = x_1-ax_2$, and clearly $a = x_1x_2^{-1}-b$. Conversely, for all $(x_1,x_2) \in {\mathbb{F}}_{q_l} \times {\mathbb{F}}_{q_l}\backslash\{0\}$ and for all $a \in {\mathbb{F}}_{q_l}\backslash\{x_1x_2^{-1}\}$ there exists $b \in {\mathbb{F}}_{q_l}\backslash\{0\}$ such that $bx_2 = x_1-ax_2$, and clearly $b = x_1x_2^{-1}-a$.
With this observation we can see that for all $h_i \in P_i^{[k]}$ with $i \in [m]\backslash\{l\}$ and for all $(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) \in \prod_{i=1}^m{\mathbb{F}}^k_{q_i}$ with $(\mathbfsl{x}_l)_1 = x_1$ and $(\mathbfsl{x}_l)_2 = x_2$ we have that if $a \not= x_1x_2^{-1}$ then:
\begin{align*}
\label{eqrl1}
&t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),u_{a}(\mathbfsl{x}_l),h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m}))=
\\&=t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),w_{x_1x_2^{-1}-a}(\mathbfsl{x}_l),h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m}))
\end{align*}
where $u_{a} \in T_l^{[k]}$ and $w_{x_1x_2^{-1}-a}\in R_l^{[k]}$. Thus they produce summands with different signs in \eqref{eq4-2}. Moreover, if $a = x_1x_2^{-1}$, then
\begin{align*}
&t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),u_{a}(\mathbfsl{x}_l),h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m})) =
\\&= t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),\mathbfsl{0}_{{\mathbb{F}}^{k-1}_{q_l}},h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m})) = 0,
\end{align*}
since $t_{k-1}$ is $0$-absorbing in $[m]$. This implies that all the summands of $r_k$ cancel if $(\mathbfsl{x}_l)_2 \not= 0$. Thus $r_k(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = \prod_{i =1}^mq_i \cdot t_{k}(\mathbfsl{x}_1,\dots, \mathbfsl{x}_m) = 0$ in this case.
Subcase $(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) = (\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^{k}_{q_1}},\dots, \lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^{k}_{q_m}})$ for some $(\lambda_1,\dots,\lambda_m) \in \prod_{i=1}^m {\mathbb{F}}_{q_i}$:
We can observe that:
\begin{align*}
&t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),h_l(\lambda_l\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_l}}),h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m})) =
\\&=t_{k-1} (h_1(\mathbfsl{x}_1),\dots,h_{l-1}(\mathbfsl{x}_{l-1}),\mathbfsl{0}_{{\mathbb{F}}^{k-1}_{q_l}},h_{l+1}(\mathbfsl{x}_{l+1}),\dots,h_m(\mathbfsl{x}_{m})) = 0,
\end{align*}
for all $h_i \in P_i^{[k]}$ with $i \in [m]\backslash\{l\}$, for all $l \leq m$, $\lambda_l \in {\mathbb{F}}_{q_l}$, $\mathbfsl{x}_i \in {\mathbb{F}}^k_{q_i}$, and $h_{l} \in R^{[k]}_l$, since $t_{k-1}$ is $0$-absorbing in $[m]$. Thus we can observe that:
\begin{equation*}\label{eq6}
\begin{split}
&r_k(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}},\dots, \lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}) =
\\ =&\sum_{h_i \in P_i^{[k]}} (-1)^{\sum_{i=1}^m \text{c}^{[k]}(h_i)}t_{k-1} (h_1(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}}),\dots, h_m(\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}))
\\=& \sum_{h_i \in T_i^{[k]}} (-1)^{\sum_{i=1}^m \text{c}^{[k]}(h_i)}t_{k-1} (h_1(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}}),\dots, h_m(\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}))
\\=& \sum_{h_i \in T_i^{[k]}} t_{k-1} (h_1(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_1}}),\dots, h_m(\lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^k_{q_m}}))
\\=& \sum_{h_i \in T_i^{[k]}} t_{k-1} (\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}_{q_1}^{k-1}},\dots, \lambda_m\mathbfsl{e}_1^{{\mathbb{F}}_{q_m}^{k-1}})
\\=&\prod_{i=1}^mq_i\cdot t_{k-1} (\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}_{q_1}^{k-1}},\dots, \lambda_m\mathbfsl{e}_1^{{\mathbb{F}}_{q_m}^{k-1}})
\\ = & \prod_{i=1}^mq_i\cdot t_k(\lambda_1\mathbfsl{e}_1^{{\mathbb{F}}^{k}_{q_1}},\dots, \lambda_m\mathbfsl{e}_1^{{\mathbb{F}}^{k}_{q_m}}).
\end{split}
\end{equation*}
Thus $r_k = \prod_{i =1}^mq_i \cdot t_{k}$.
Because of \eqref{eq4-2} and the inductive hypothesis, we have $r_k \in \mathrm{Clg}(\{t_{k-1}\})$ $ \subseteq \mathrm{Clg}(C^{[1]})$. Thus $\prod_{i = 1}^mq_i\cdot t_{k} \in \mathrm{Clg}(C^{[1]})$. Since $\prod_{i = 1}^mq_i \not= 0$ modulo $p$, we have that $t_{k} \in \mathrm{Clg}(C^{[1]})$, and this concludes the induction proof.
\end{proof}
\begin{Lem}
\label{Lem-sum-0-pres}
Let $q_1,\dots,q_m$ and $p$ be powers of primes with $\prod_{i=1}^mq_i$ and $p$ coprime and let ${\mathbb K} = \prod_{i=1}^m{\mathbb{F}}_{q_i}$. Let $C$ be an $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid, let $I \subseteq [m]$ and let $f \in C$ be $0$-absorbing in $I$. Then $f \in \mathrm{Clg}(C^{[1]})$.
\end{Lem}
\begin{proof}
Let ${\mathbb K}_1 = \prod_{i \in I}{\mathbb{F}}_{q_i}$ and let $C_1 := \{g \mid \exists g' \in C \colon g'\mid_{{\mathbb K}_1} = g\}$. By Lemma \ref{Rem-sum-0-pres2} $f \in \mathrm{Clg}(C^{[1]})$ if and only if $f \mid_{{\mathbb K}_1} \in \mathrm{Clg}(C\mid_{{\mathbb K}_1}^{[1]})$, and we observe that $f\mid_{{\mathbb K}_1}$ is $0$-absorbing in $I$. Thus without loss of generality we fix $I = [m]$. The strategy is to interpolate $f$ in all the distinct products of lines of the form $\{(\lambda_{1}\mathbfsl{b}_{1},\dots,\lambda_m\mathbfsl{b}_{m}) \mid (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}\}$, where $\mathbfsl{b}_i \in {\mathbb{F}}_{q_i}^n\backslash\{(0,\dots,0)\}$. To this end let $R = \{L_j \mid 1 \leq j \leq \prod_{i = 1}^m(q_i^{n} - 1)/(q_i-1) = s\}$ be the set of all $s$ distinct products of lines of $\prod_{i = 1}^m{\mathbb{F}}_{q_i}^n$ and let $\mathbfsl{l}_{(i,j)} \in \mathbb{F}_{q_i}^n$ be such that $(\mathbfsl{l}_{(1,j)},\dots,\mathbfsl{l}_{(m,j)})$ generates the product of $m$ lines $L_j$, for $1 \leq j \leq s$, $1 \leq i \leq m$. For all $1 \leq j \leq s$, let $f_{L_j}\colon \prod_{i=1}^m\mathbb{F}_{q_i}^n \to \mathbb{F}_p$ be defined by:
\begin{equation*} f_{L_j}(\lambda_1\mathbfsl{l}_{(1,j)},\dots,\lambda_m\mathbfsl{l}_{(m,j)}) = f(\lambda_1\mathbfsl{l}_{(1,j)},\dots,\lambda_m\mathbfsl{l}_{(m,j)})
\end{equation*}
for $ (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}$ and $f_{L_j}(\mathbfsl{x}) = 0$ for all $\mathbfsl{x} \in \prod_{i=1}^m\mathbb{F}^n_{q_i}\backslash \{(\lambda_1\mathbfsl{l}_{(1,j)},$ $\dots,\lambda_m\mathbfsl{l}_{(m,j)}) \mid (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}\}$.
\textbf{Claim 1:}
\begin{equation*}
f= \sum_{j = 1}^{s} f_{L_j}.
\end{equation*}
Since $f$ is $0$-absorbing in $[m]$ we have that:
\begin{align*}
\sum_{j = 1}^{s} f_{L_j}(\lambda_1\mathbfsl{l}_{(1,z)},\dots,\lambda_m\mathbfsl{l}_{(m,z)}) &= f_{L_z}(\lambda_1\mathbfsl{l}_{(1,z)},\dots,\lambda_m\mathbfsl{l}_{(m,z)})=
\\&=f(\lambda_1\mathbfsl{l}_{(1,z)},\dots,\lambda_m\mathbfsl{l}_{(m,z)})
\end{align*}
for all $ (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}$ and $z \in [s]$, since for all $j_1,j_2 \in [s]$, $L_{j_1}$ and $L_{j_2}$ intersect only in points of the form $(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m)$ $ \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}^n$ with $\mathbfsl{x}_i = (0,\dots,0)$ for some $i \in [m]$.
Let $1 \leq j \leq s$ and let $g\colon \prod_{i=1}^m\mathbb{F}_{q_i} \to \mathbb{F}_p$ be a function such that:
\begin{equation*}
f_{L_j}(\lambda_1\mathbfsl{l}_{(1,j)},\dots,\lambda_m\mathbfsl{l}_{(m,j)}) = g(\lambda_1,\dots,\lambda_m) = f(\lambda_1\mathbfsl{l}_{(1,j)},\dots,\lambda_m\mathbfsl{l}_{(m,j)})
\end{equation*}
for all $ (\lambda_1,\dots,\lambda_m) \in \prod_{i = 1}^m{\mathbb{F}}_{q_i}$. Then $g \in C^{[1]}$, since $g$ is obtained from $f$ as in condition (2) of the definition of an $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid, taking $l = 1$ and the columns $\mathbfsl{l}_{(i,j)}^t \in {\mathbb{F}}_{q_i}^{n \times 1}$ as the matrices $A_i$.
\textbf{Claim 2:} $f_{L_j} \in \mathrm{Clg}(C^{[1]})$ for all $L_j \in R$.
We can observe that $f_{L_j}(\lambda_1\mathbfsl{l}_{(1,j)},\dots, \lambda_m\mathbfsl{l}_{(m,j)}) = g(\lambda_1,\dots,\lambda_m)$ for all $(\lambda_1,\dots,$ $\lambda_m) \in \prod_{i=1}^m\mathbb{F}_{q_i}$, and $f_{L_j}(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) = 0$ for all $(\mathbfsl{x}_1,\dots,\mathbfsl{x}_m) \in \prod_{i=1}^m\mathbb{F}_{q_i}^n \backslash \{(\lambda_1\mathbfsl{l}_{(1,j)},$ $\dots, \lambda_m\mathbfsl{l}_{(m,j)})\mid (\lambda_1,\dots,\lambda_m) \in \prod_{i=1}^m\mathbb{F}_{q_i} \}$. Furthermore, $g$ is $0$-absorbing in $[m]$. By Lemmata \ref{Lem1-2} and \ref{Lemt_k}, $f_{L_j} \in \mathrm{Clg}(C^{[1]})$, which concludes the proof of $f \in \mathrm{Clg}(C^{[1]})$.
\end{proof}
We are now ready to prove that an $(\mathbb{F},\mathbb{K})$-linearly closed clonoid $C$ is generated by its unary part.
\begin{Thm}
\label{Th1-2}
Let $q_1,\dots,q_m$ and $p$ be powers of primes with $\prod_{i=1}^mq_i$ and $p$ coprime and let ${\mathbb K} = \prod_{i=1}^m{\mathbb{F}}_{q_i}$. Then every $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid $C$ is generated by its unary functions. Thus $C = \mathrm{Clg}(C^{[1]})$.
\end{Thm}
\begin{proof}
The inclusion $\supseteq$ is obvious. For the other inclusion let $C$ be an $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoid and let $f$ be an $n$-ary function in $C$. By Lemma \ref{Lem0absor} with $A_i = {\mathbb{F}}_{q_i}^n$ and $0_{A_i} = (0_{{\mathbb{F}}_{q_i}},\dots,0_{{\mathbb{F}}_{q_i}})$, $f$ can be split into the sum of $n$-ary functions $\sum_{I \subseteq [m]}f_I$ such that for each $I \subseteq [m]$, $f_I$ is $0$-absorbing in $I$. Furthermore, each function $f_I$ lies in the subgroup $\mathbf{F}$ of $\mathbb{F}_p^{{\mathbb K}^n}$ that is generated by the functions $\mathbfsl{x} \rightarrow f(\mathbfsl{x}^{(I)})$, where $I\subseteq [m]$, and thus each summand $f_I$ is in $C$. By Lemma \ref{Lem-sum-0-pres} each of these summands is in $\mathrm{Clg}(C^{[1]})$, and thus $f \in \mathrm{Clg}(C^{[1]})$.
\end{proof}
The next corollary of Theorem \ref{Th1-2} and the following theorem tell us that there are only finitely many distinct $(\mathbb{F},\mathbb{K})$-linearly closed clonoids.
\begin{Cor}
\label{Cor2-2}
Let $q_1,\dots,q_m$ and $p$ be powers of primes with $\prod_{i=1}^mq_i$ and $p$ coprime and let ${\mathbb K}=\prod_{i=1}^m{\mathbb{F}}_{q_i}$. Let $C$ and $D$ be two $({\mathbb{F}}_p,{\mathbb K})$-linearly closed clonoids. Then $C = D$ if and only if $C^{[1]} = D^{[1]}$.
\end{Cor}
Let us denote by $\mathcal{L}({\mathbb{F}},{\mathbb K})$ the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids. We define the functions $\rho_i\colon \mathcal{L}({\mathbb{F}},{\mathbb K}) \rightarrow \mathcal{L}({\mathbb{F}}_{p_i},{\mathbb K})$ such that for all $1 \leq i \leq s$ and for all $C \in \mathcal{L}({\mathbb{F}},{\mathbb K})$:
\begin{equation}
\label{defiso}
\rho_i(C) := \{f \mid \text{ there exists } g \in C \colon f = \pi_i^{{\mathbb{F}}} \circ g\},
\end{equation}
where $\pi_i^{{\mathbb{F}}}$ denotes the projection onto the $i$-th component of the product of fields ${\mathbb{F}}$.
\begin{Thm}
\label{ThmDirProd}
Let $\mathbb{F} = \prod_{i =1}^s{\mathbb{F}}_{p_i}$ and $\mathbb{K} = \prod_{i =1}^m{\mathbb{F}}_{q_i}$ be products of finite fields. Then the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids is isomorphic to the direct product of the lattices of all $({\mathbb{F}}_{p_i},{\mathbb K})$-linearly closed clonoids with $1 \leq i \leq s$.
\end{Thm}
\begin{proof}
Let us define the function $\rho\colon \mathcal{L}({\mathbb{F}},{\mathbb K}) \rightarrow \prod_{i=1}^s\mathcal{L}({\mathbb{F}}_{p_i},{\mathbb K})$ such that $\rho(C) := (\rho_1(C),\dots,\rho_s(C))$. Clearly $\rho$ is well-defined. Conversely, let $\psi\colon $ $\prod_{i=1}^s\mathcal{L}$ $({\mathbb{F}}_{p_i},{\mathbb K}) \rightarrow \mathcal{L}({\mathbb{F}},{\mathbb K})$ be defined by:
\begin{equation*}
\psi(C_1,\dots,C_s) = \bigcup_{k \in {\mathbb{N}}}\{f \colon\mathbfsl{x} \mapsto (f_1(\mathbfsl{x}),\dots,f_s(\mathbfsl{x}))\mid f_1\in C_1^{[k]}, \dots, f_s\in C_s^{[k]}\}.
\end{equation*}
From this definition it is clear that $\psi$ is well defined. Furthermore,
\begin{equation*}
\rho\psi(C_1,\dots,C_s) = (C_1,\dots,C_s)
\end{equation*}
and $C\subseteq\psi\rho(C)$ for all $(C_1,\dots,C_s) \in \prod_{i=1}^s\mathcal{L}({\mathbb{F}}_{p_i},{\mathbb K})$ and $C \in \mathcal{L}({\mathbb{F}},{\mathbb K})$.
To prove that $C \supseteq \psi\rho(C)$ let $f\in \psi \rho (C)$. Then there exist $f_1 \in \rho_1(C),\dots,f_s \in \rho_s(C)$ such that $f_i = \pi_i^{{\mathbb{F}}} \circ f$ for all $i \in [s]$. By definition of $\rho$, there exist $g_1,\dots,g_s \in C$ such that $f_i = \pi_i^{{\mathbb{F}}} \circ g_i$ for all $i \in [s]$. Let $\mathbfsl{a}_i \in {\mathbb{F}}$ be such that $\mathbfsl{a}_i(j) = 0$ for $j \not= i$ and $\mathbfsl{a}_i(i) = 1$. It is easy to check that $\sum_{i=1}^s \mathbfsl{a}_ig_i = f$ and thus $f \in C$.
Hence $\rho$ is a lattice isomorphism.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm14-2}]
Let ${\mathbb{F}}= \prod_{i=1}^s{\mathbb{F}}_{p_i}$ and ${\mathbb K} = \prod_{i=1}^m{\mathbb{F}}_{q_i}$ be products of finite fields with $|{\mathbb K}|$ and $|{\mathbb{F}}|$ coprime. Let $C \in \mathcal{L}({\mathbb{F}},{\mathbb K})$. By Theorem \ref{ThmDirProd} $C$ is uniquely determined by its projections $C_1 = \rho_1(C),\dots, C_s = \rho_s(C)$, where $\rho_i$ is defined in \eqref{defiso}. By Theorem \ref{Th1-2} we have that for all $i \in [s]$ every $({\mathbb{F}}_{p_i},{\mathbb K})$-linearly closed clonoid $C_i$ is uniquely determined by its unary part $C_i^{[1]}$. Thus $C$ is uniquely determined by its unary part $C^{[1]}$.
\end{proof}
\section{The lattice of all $(\mathbb{F},\mathbb{K})$-linearly closed clonoids}\label{TheLattice-2}
In this section we characterize the structure of the lattice $\mathcal{L}({\mathbb{F}},{\mathbb K})$ of all $(\mathbb{F},\mathbb{K})$-linearly closed clonoids through a description of their unary parts. Let $\mathbb{F} = \prod_{i=1}^s{\mathbb{F}}_{p_i}$ and $\mathbb{K} = \prod_{j=1}^m{\mathbb{F}}_{q_j}$ be products of finite fields such that $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime numbers.
We will see that $\mathcal{L}({\mathbb{F}},{\mathbb K})$ is isomorphic to the product of the lattices of all ${\mathbb{F}}_{p_i}[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_{p_i}^{{\mathbb K}}$, where ${\mathbb K}^{\times}$ is the multiplicative monoid of ${\mathbb K} = \prod_{i=1}^m {\mathbb{F}}_{q_i}$.
In order to characterize the lattice of all $({\mathbb{F}}, {\mathbb K})$-linearly closed clonoids we need the definition of \emph{monoid ring}.
\begin{Def} Let $\langle M, \cdot\rangle$ be a commutative monoid and let $\langle R, +, \odot\rangle$ be a commutative ring with identity. Let
\begin{equation*}
S := \{f \in R^M \mid f(a) \not= 0 \text{ for only finitely many } a \in M\}.
\end{equation*}
We define the \emph{monoid ring} of $M$ over $R$ as the ring $(S, +, \cdot)$, where $+$ is the point-wise addition of functions and the multiplication is defined by letting $f\cdot g \colon M \rightarrow R$ map each $m \in M$ to:
\begin{equation*}
\sum_{m_1,m_2 \in M,m_1m_2=m}f(m_1)g(m_2).
\end{equation*}
\end{Def}
We denote by $R[M]$ the monoid ring of $M$ over $R$. Following the notation in \cite{AM.CWTR}, for all $a \in M$ we define $\tau_a$ to be the element of $R^M$ with $\tau_a(a) = 1$ and $\tau_a (M\backslash\{a\}) = \{0\}$. We observe that for all $f \in R[M]$ there is an $\mathbfsl{r} \in R^M$ such that $f = \sum_{a\in M} r_a\tau_a $ and that we can multiply such expressions with the rule $\tau_a \cdot \tau_b = \tau_{ab}$.
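For instance, let $R = {\mathbb{F}}_2$ and let $M$ be the multiplicative monoid of ${\mathbb{F}}_3$. Then in ${\mathbb{F}}_2[M]$ we have $\tau_2 \cdot \tau_2 = \tau_{2\cdot 2} = \tau_1$, and therefore
\begin{equation*}
(\tau_0 + \tau_2)\cdot \tau_2 = \tau_0 + \tau_1.
\end{equation*}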
\begin{Def}\label{DefAct2}
Let $M$ be a commutative monoid and let $R$ be a commutative ring. We denote by $R^M$ the $R[M]$-module with the action $\ast$ defined by:
\begin{equation*}
(\tau_{a} \ast f)(x) = f(ax),
\end{equation*}
for all $a \in M$ and $f \in R^M$.
\end{Def}
Let ${\mathbb K}^{\times}$ be the multiplicative monoid of ${\mathbb K} = \prod_{i=1}^m {\mathbb{F}}_{q_i}$. We can observe that $V$ is an ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodule of $\mathbb{F}_p^{{\mathbb K}}$ if and only if it is a subspace of $\mathbb{F}_p^{{\mathbb K}}$ satisfying
\begin{equation}
\label{neweq12}
(x_1,\dots,x_m) \mapsto f(a_1x_1,\dots,a_mx_m) \in V,
\end{equation}
for all $f \in V$ and $(a_1,\dots,a_m) \in \prod_{i=1}^m{\mathbb{F}}_{q_i}$. Clearly the following lemma holds.
\begin{Lem}
Let $p, q_1,\dots, q_m$ be powers of primes and let ${\mathbb K} = \prod_{i =1}^m {\mathbb{F}}_{q_i}$. Let $V \subseteq \mathbb{F}_p^{{\mathbb K}}$. Then $V$ is the unary part of an $(\mathbb{F}_p , {\mathbb K})$-linearly closed clonoid if and only if it is an ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodule of $\mathbb{F}_p^{{\mathbb K}}$.
\end{Lem}
Together with Theorem \ref{ThmDirProd} this immediately yields the following.
\begin{Cor}
\label{CorModuleIso2}
Let ${\mathbb K} = \prod_{i =1}^m {\mathbb{F}}_{q_i}$ and ${\mathbb{F}} = \prod_{i =1}^s {\mathbb{F}}_{p_i}$ be products of finite fields such that $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. Then the function $\pi^{[1]}$ that sends an $({\mathbb{F}} , {\mathbb K})$-linearly closed clonoid to its unary part is an isomorphism between the lattice of all $({\mathbb{F}}, {\mathbb K})$-linearly closed clonoids and the direct product of the lattices of all ${\mathbb{F}}_{p_i}[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_{p_i}^{{\mathbb K}}$.
\end{Cor}
With the same strategy as in \cite[Lemma $5.6$]{Fio.CSOF} we obtain the following lemma.
\begin{Lem}
Let ${\mathbb K} = \prod_{i =1}^m {\mathbb{F}}_{q_i}$ and ${\mathbb{F}} = \prod_{i =1}^s {\mathbb{F}}_{p_i}$ be products of finite fields such that $|{\mathbb K}|$ and $|{\mathbb{F}}|$ are coprime. Then every $({\mathbb{F}} , {\mathbb K})$-linearly closed clonoid is finitely related.
\end{Lem}
The next step is to characterize the lattice of all ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_p^{{\mathbb K}}$. To this end we observe that $V$ is an ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodule of $\mathbb{F}_p^{{\mathbb K}}$ if and only if it is a subspace of $\mathbb{F}_p^{{\mathbb K}}$ satisfying \eqref{neweq12}.
We can provide a bound for the cardinality of the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids in terms of the number of subspaces of the vector spaces ${\mathbb{F}}_{p_i}^{{\mathbb K}}$.
\begin{Rem}
It is a well-known fact in linear algebra that the number of $k$-dimensional subspaces of an $n$-dimensional vector space $V$ over a finite field ${\mathbb{F}}_q$ is the Gaussian binomial coefficient:
\begin{equation}
{{n}\choose{k}}_q = \prod_{i=1}^k \frac{q^{n-k+i}-1}{q^i-1}.
\end{equation}
\end{Rem}
From this remark we directly obtain the bound of Theorem \ref{Corfinale-3}. In order to determine the exact cardinality of the lattice of all $({\mathbb{F}},{\mathbb K})$-linearly closed clonoids we have to deal with the problem of finding the ${\mathbb{F}}_p[{\mathbb K}^{\times}]$-submodules of $\mathbb{F}_p^{{\mathbb K}}$. We will not study this problem here, because we think that it is an interesting problem that deserves its own line of research.
\end{document} | arXiv |
\begin{document}
\title{Indecomposable $1$-factorizations of the complete multigraph $\lambda K_{2n}$ for every $\lambda\leq 2n$\thanks{Research performed within the activity of INdAM--GNSAGA with the financial support of the Italian Ministry MIUR, project ``Strutture Geometriche, Combinatoria e loro Applicazioni''} }
\author{S. Bonvicini \thanks{Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Universit\`a di Modena e Reggio Emilia, via Campi 213/b, 41126 Modena (Italy)}\,, G. Rinaldi\thanks{Dipartimento di Scienze e Metodi dell'Ingegneria, Universit\`a di Modena e Reggio Emilia, via Amendola 2, 42122 Reggio Emilia (Italy)}}
\maketitle
\begin{abstract} \noindent A $1$-factorization of the complete multigraph $\lambda K_{2n}$ is said to be indecomposable if it cannot be represented as the union of $1$-factorizations of $\lambda_0 K_{2n}$ and $(\lambda-\lambda_0) K_{2n}$, where $\lambda_0<\lambda$. It is said to be simple if no $1$-factor is repeated. For every $n\geq 9$ and for every $(n-2)/3\leq\lambda\leq 2n$, we construct an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. These $1$-factorizations provide simple and indecomposable $1$-factorizations of $\lambda K_{2s}$ for every $s\geq 18$ and $2\leq\lambda\leq 2\lfloor s/2\rfloor-1$. We also give a generalization of a result by Colbourn et al. which provides a simple and indecomposable $1$-factorization of $\lambda K_{2n}$, where $2n=p^m+1$, $\lambda=(p^m-1)/2$, $p$ prime.
\end{abstract}
\noindent \textit{Keywords: complete multigraph, indecomposable $1$-factorizations, simple $1$-factorizations.}
\noindent\textit{MSC(2010): 05C70}
\section{Introduction}\label{sec:intro}
We refer to \cite{BonMur} for graph theory notation and terminology which are not introduced explicitly here. We recall that the complete multigraph $\lambda K_{2n}$ has $2n$ vertices and each pair of vertices is joined by exactly $\lambda$ edges. A $1$-factor of $\lambda K_{2n}$ is a spanning subgraph of $\lambda K_{2n}$ consisting of $n$ edges that are pairwise independent. If $\mathcal S$ is a set of $1$-factors of $\lambda K_{2n}$, then we will denote by $E(\mathcal S)$ the multiset containing all the edges of the $1$-factors of $\mathcal S$, namely, $E(\mathcal S)=\cup_{F\in\mathcal S}\, E(F)$. A $1$-factorization $\mathcal F$ of $\lambda K_{2n}$ is a partition of the edge-set of $\lambda K_{2n}$ into $1$-factors. A subfactorization of $\mathcal F$ is a subset $\mathcal F_0$ of $1$-factors belonging to $\mathcal F$ that constitute a $1$-factorization of $\lambda_0 K_{2n}$, where $\lambda_0\leq\lambda$. For every $\lambda\geq 1$, it is possible to find a $1$-factorization of $\lambda K_{2n}$. Lucas' construction provides a $1$-factorization for the complete graph $K_{2n}$, denoted by $GK_{2n}$ (see \cite{Lu}). By taking $\lambda$ copies of $GK_{2n}$, we find a $1$-factorization of $\lambda K_{2n}$. Obviously, it contains repeated $1$-factors. Moreover, we can consider $\lambda_0<\lambda$ copies of each $1$-factor so that it is the union of $1$-factorizations of $\lambda_0 K_{2n}$ and $(\lambda-\lambda_0)K_{2n}$. A $1$-factorization of $\lambda K_{2n}$ that contains no repeated $1$-factors is said to be \emph{simple}. A $1$-factorization of $\lambda K_{2n}$ that can be represented as the union of $1$-factorizations of $\lambda_0 K_{2n}$ and $(\lambda-\lambda_0) K_{2n}$, where $\lambda_0<\lambda$, is said to be \emph{decomposable}, otherwise it is called \emph{indecomposable}. An indecomposable $1$-factorization might be simple or not. In this paper, we consider the problem about the existence of indecomposable $1$-factorizations of $\lambda K_{2n}$. Obviously, $\lambda> 1$. In order that the complete multigraph $\lambda K_{2n}$ admits an indecomposable $1$-factorization, the parameter $\lambda$ cannot be arbitrarily large: we have necessarily $\lambda< 3\cdot 4\cdots (2n-3)$ or $\lambda<[n(2n-1)]^{n(2n-1)}\binom{2n^3+n^2-n+1}{2n^2-n}$, according to whether the $1$-factorization is simple or not (see \cite{BaarWall}). Moreover, two non-existence results are known. For every $\lambda >1$ there is no indecomposable $1$-factorization of $\lambda K_4$ (see \cite{CCR}). For every $\lambda \geq 3$ there is no indecomposable $1$-factorization of $\lambda K_6$ (see \cite{BaarWall}). We recall that in \cite{CCR} the authors construct simple and indecomposable $1$-factorizations of $\lambda K_{2n}$ for $2\leq\lambda\leq 12$, $\lambda\neq 7, 11$. They also give a simple and indecomposable $1$-factorization of $\lambda K_{p+1}$, where $p$ is an odd prime and $\lambda=(p-1)/2$. In \cite{ArchDinitz} we can find an indecomposable $1$-factorization of $(n-p)K_{2n}$, where $p$ is the smallest prime not dividing $n$. This $1$-factorization is not simple, but it is used to construct a simple and indecomposable $1$-factorization of $(n-p)K_{2s}$ for every $s\geq 2n$. This construction improves the results in \cite{CCR} for $2\leq\lambda\leq 12$ (see Theorem $2.5$ in \cite{ArchDinitz}). Simple and indecomposable $1$-factorizations of $(n-d)K_{2n}$, with $d\geq 2$, $n-d\geq 5$ and $\gcd(n, d)=1$, are constructed in \cite{Chu}. 
Other values of $\lambda$ and $n$ for which the existence of a simple and indecomposable $1$-factorization of $\lambda K_{2n}$ is known are the following: $2n=q^2+1$, $\lambda=q-1$, where $q$ is an odd prime power (see \cite{KSS}); $2n=2^h+2$, $\lambda=2$ (see \cite{Son}); $2n=q^2+1$, $\lambda=q+1$, where $q$ is an odd prime power (see \cite{Kiss}); $2n=q^2$, $\lambda=q$, where $q$ is an even prime power (see \cite{Kiss}).
In this paper we prove some theorems about the existence of simple and indecomposable $1$-factorizations of $\lambda K_{2n}$, where most of the parameters $\lambda$ and $n$ were not previously considered in literature. We show that for every $n\geq 9$ and for every $(n-2)/3\leq\lambda\leq 2n$ there exists an indecomposable $1$-factorization of $\lambda K_{2n}$ (see Theorem \ref{th1}). We can also exhibit some examples of indecomposable $1$-factorizations of $\lambda K_{2n}$ for $n\in\{7, 8\}$, $(n-2)/3\leq\lambda\leq n$ (see Proposition \ref{pro4}); and for $n\in\{5, 6\}$, $(n-2)/3\leq\lambda\leq n-2$ (see Proposition \ref{pro1} and \ref{pro2}). The $1$-factorizations in Theorem \ref{th1}, Proposition \ref{pro1}, \ref{pro2} and \ref{pro4} are not simple. By an embedding result in \cite{CCR}, we can use them to prove the existence of simple and indecomposable $1$-factorizations of $\lambda K_{2s}$ for every $s\geq 18$ and for every $2\leq\lambda\leq 2\lfloor s/2\rfloor-1$ (see Theorem \ref{th2}). We note that for odd values of $s$, the parameter $\lambda$ does not exceed the value $s-2$. Nevertheless, if $2s=p^m+1$, where $p$ is a prime, then we can find a simple and indecomposable $1$-factorization of $(s-1) K_{2s}$ (see Theorem \ref{th3}).
By our results we can improve Theorem $2.5$ in \cite{ArchDinitz} about the existence of simple and indecomposable $1$-factorizations of $\lambda K_{2n}$ for $2\leq\lambda\leq 12$. We note that in Theorem 2.5 in \cite{ArchDinitz} the existence of a simple and indecomposable $1$-factorization of $11 K_{2n}$ (respectively, $12 K_{2n}$) is known for every $2n\geq 52$ (respectively, $2n\geq 32$). By Theorem \ref{th2}, a simple and indecomposable $1$-factorization of $11 K_{2n}$ exists for every $2n\geq 36$. By Theorem \ref{th3}, there exists a simple and indecomposable $1$-factorization of $12 K_{26}$. Moreover, Theorem \ref{th3} extends Theorem $2$ in \cite{CCR} to each odd prime power.
\section{Basic lemmas.}\label{sec:set_1factors}
In Section \ref{sec:IOF_nosimple} and \ref{sec:IOF_simple} we will construct indecomposable $1$-factorizations of $\lambda K_{2n}$ for suitable values of $\lambda> 1$. These $1$-factorizations contain $1$-factor-orbits, that is, sets of $1$-factors belonging to the same orbit with respect to a group $G$ of permutations on the vertices of the complete multigraph.
If not differently specified, we use the exponential notation for the action of $G$ and its subgroups on vertices, edges and $1$-factors. So, if $e=[x, y]$ is an edge of $\lambda K_{2n}$ and $g\in G$ we set $e^g=[x^g, y^g]$. Analogously, if $F$ is a $1$-factor we set $F^g=\{e^g : e\in F\}$. Since we shall treat with sets and multisets, we specify that by an edge-orbit $e^H$, where $H\leq G$, we mean the set $e^H=\{e^h : h\in H\}$ and by a $1$-factor-orbit $F^H$ we mean the set $F^H=\{F^h : h\in H\}$. If $h\in H$ leaves $F$ invariant, that is,
$F^h=F$, then $h$ is an element of the stabilizer of $F$ in $G$, which will be denoted by $G_F$. The cardinality of $F^H$ is $|H|/|H\cap G_F|$. The following result holds.
\begin{lemma}\label{lemma1} Let $F$ be a $1$-factor of $\lambda K_{2n}$ containing exactly $\mu$ edges belonging to the same edge-orbit $e^H$, where $H$ is a subgroup of $G$ having trivial intersection with the stabilizer of $F$ in $G$ and with the stabilizer of $e$ in $G$. The multiset $\cup_{h\in H}\, E(F^h)$ contains every edge of $e^H$ exactly $\mu$ times. \end{lemma}
\begin{proof} We denote by $e_1,\ldots, e_{\mu}$ the edges in $F\cap e^H$. We show that every edge $f\in e^H$ appears $t_f\geq\mu$ times in the multiset $E(F^H)=\cup_{h\in H}\, E(F^h)$. For every edge $e_i\in\{e_1,\ldots, e_{\mu}\}$ there exists an element $h_i\in H$ such that $e^{h_i}_i=f$, since $e_i$ and $f$ belong to the same edge-orbit $e^H$. Hence the $1$-factor $F^{h_i}$ contains the edge $f$. The $1$-factors $F^{h_1}$, $F^{h_2},\ldots F^{h_{\mu}}$ are pairwise distinct, since $H$ has trivial intersection with $G_F$. Therefore, every edge $f\in e^H$ appears $t_f\geq\mu$ times in the multiset $E(F^H)$. We prove that $t_f=\mu$. In fact, $t_f>\mu$ implies the existence of $h\in H\smallsetminus\{h_1,\ldots, h_{\mu}\}$ such that $f\in F^h$ and then $e^{h_i}_i=f=e^h_i$ for some $e_i\in\{e_1,\ldots, e_{\mu}\}$. That yields a contradiction, since $e_i$, as well as $e$, has trivial stabilizer in $H$. \end{proof}
To prove the indecomposability of the $1$-factorizations in Section \ref{sec:IOF_nosimple}, we will use the following lemma.
\begin{lemma}\label{lemma2} Let $M$ be a $1$-factor of $\lambda K_{2n}$. Let $\mathcal F$ be a $1$-factorization of $\lambda K_{2n}$ containing $0\leq\lambda-t<\lambda$ copies of $M$ and a subset $\mathcal S$ of $1$-factors satisfying the following properties:
\begin{description}
\item [(i)] the multiset $E(\mathcal S)$ contains every edge of $M$ exactly $t$ times;
\item [(ii)] for every non-empty proper subset $\mathcal S'\subset\mathcal S$, the multiset $E(\mathcal S')$ contains
$\mu$ distinct edges of $M$, with $0<\mu< n$. \end{description}
If $\mathcal F_0\subseteq\mathcal F$ is a $1$-factorization of $\lambda_0 K_{2n}$, where $\lambda_0\leq\lambda$, then $\mathcal S\subseteq\mathcal F_0$ or $\mathcal F_0$ contains no $1$-factor of $\mathcal S$. \end{lemma}
\begin{proof} Assume that $\mathcal F_0$ contains $0< s< |\mathcal S|$ elements of $\mathcal S$, say $F_1,\ldots, F_s$. We denote by $M'$ the set consisting of the edges of $M$ that are contained in the multiset $\cup^s_{i=1} E(F_i)$. By property $(ii)$, the set $M'$ is a non-empty proper subset of $M$. It is clear from $(i)$ that the $1$-factors of $\mathcal F$ containing some edges of $M$ are exactly the $\lambda-t$ copies of $M$ together with the $1$-factors of $\mathcal S$. Therefore, the $1$-factorization $\mathcal F_0$ contains $\lambda_0$ copies of $M$, since the edges in $M\smallsetminus M'$ are not contained in $\cup^{s}_{i=1} E(F_i)$. Then the multiset $E(\mathcal F_0)$ contains at least $\lambda_0+1$ copies of each edge in $M'$, a contradiction. Hence $s=|\mathcal S|$ or $\mathcal F_0$ contains no $1$-factor of $\mathcal S$.\end{proof}
\section{Indecomposable $1$-factorizations which are not simple.}\label{sec:IOF_nosimple}
In what follows, we consider the group $G$ given by the direct product $\mathbb Z_n\times\mathbb Z_2$ and denote by $H$ the subgroup of $G$ isomorphic to $\mathbb Z_n$. We will identify the vertices of the complete multigraph $\lambda K_{2n}$ with the elements of $G$, thus obtaining the graph $\lambda K_{G}=\left (G, \lambda\binom G2\right)$, where $\binom G2$ is the set of all possible $2$-subsets of $G$ and $\lambda\binom G2$ is the multiset consisting of $\lambda$ copies of $\binom G2$.
In $G$ we will adopt the additive notation and observe that $G$ is a group of permutations on the vertex-set, that is, each $g\in G$ is identified with the permutation $x\to x+g$, for every $x\in G$. For the sake of simplicity, we will represent the elements of $G$ in the form $a_j$, where $a$ and $j$ are integers modulo $n$ and modulo $2$, respectively. The edges of $\lambda K_G$ are of type $[a_0, b_1]$ or $[a_j, b_j]$ and we can observe that each edge $[a_0, b_1]$ has trivial stabilizer in $H$. For every $a\in\mathbb Z_n$, we consider the edge-orbit $M_a=[0_0, a_1]^H$. Each edge-orbit $M_{a}$ is a $1$-factor of $\lambda K_G$. The $1$-factors in $\cup_{a\in\mathbb Z_n} M_a$ partition the edges of type $[a_0, b_1]$. We shall represent the vertices and the $1$-factors $M_{a}$ as in Figure \ref{fig1}. Observe that, if $M_a$ contains the edge $[x_0, (x+a)_1]$, then $M_{n-a}$ contains the edge $[(x+a)_0, x_1]$.
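For instance, for $n = 5$ the edge-orbit $M_1$ is the $1$-factor
\begin{equation*}
M_1 = \{[0_0, 1_1], [1_0, 2_1], [2_0, 3_1], [3_0, 4_1], [4_0, 0_1]\},
\end{equation*}
while $M_4$ consists of the edges $[1_0, 0_1]$, $[2_0, 1_1]$, $[3_0, 2_1]$, $[4_0, 3_1]$, $[0_0, 4_1]$.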
The edges of type $[a_j, b_j]$, with $j=0, 1$, can be partitioned by the $1$-factors (or near $1$-factors) of a $1$-factorization (or of a near $1$-factorization) of $K_n$. More specifically, for even values of $n$ we consider the well-known $1$-factorization $GK_n$ defined by Lucas \cite{Lu}. We recall that in $GK_n$ the vertex-set of $K_n$ is $\mathbb Z_{n-1}\cup\{\infty\}$ and $GK_n=\{L_i : i\in\mathbb Z_{n-1}\}$, where $L_0=\{[a, -a]: a\in\mathbb Z_{n-1}-\{0\}\}\cup\{[0,\infty]\}$ and $L_i=L_0+i=\{[a+i, -a+i]: a\in\mathbb Z_{n-1}-\{0\}\}\cup\{[i,\infty]\}$. For odd values of $n$, we consider the $1$-factorization $GK_{n+1}$ and delete the vertex $\infty$. Each $1$-factor $L_i$ yields a near $1$-factor $L^*_i$ of $K_n$ where the vertex $i\in\mathbb Z_n$ is unmatched. We denote by $GK^*_n$ the resulting near $1$-factorization of $K_n$.
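For instance, for $n = 4$ the $1$-factorization $GK_4$ of $K_4$ on the vertex-set $\mathbb Z_3\cup\{\infty\}$ consists of the $1$-factors
\begin{equation*}
L_0=\{[1, 2], [0,\infty]\},\qquad L_1=\{[2, 0], [1,\infty]\},\qquad L_2=\{[0, 1], [2,\infty]\},
\end{equation*}
while for $n = 3$ the deletion of the vertex $\infty$ from $GK_4$ yields the near $1$-factorization $GK^*_3$ with $L^*_0=\{[1, 2]\}$, $L^*_1=\{[2, 0]\}$ and $L^*_2=\{[0, 1]\}$, the vertex $i$ being unmatched in $L^*_i$.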
For even values of $n$, we partition the edges $[a_j, b_j]$ of $\lambda K_{2n}$ into $1$-factors of $\lambda K_{2n}$ as follows. For $j=0, 1$, we consider the $1$-factorization $GK_n$ of the complete graph $K_n$ with vertex-set $V_j=\{a_j: 0\leq a\leq n-1\}$. It is possible to obtain a $1$-factor of $K_{2n}$ by joining, in an arbitrary way, a $1$-factor on $V_0$ to a $1$-factor on $V_1$. We denote by $\mathcal F(GK_n)$ the resulting set of $1$-factors of $K_{2n}$. We denote by $\mathcal F(\lambda GK_n)$ the multiset consisting of $\lambda$ copies of $\mathcal F(GK_n)$.
For odd values of $n$, we partition the edges $[a_j, b_j]$ of $\lambda K_{2n}$ into $1$-factors of $\lambda K_{2n}$ as follows. For $j=0, 1$, we consider the near $1$-factorization $GK^*_n$ of the complete graph $K_n$ with vertex-set $V_j=\{a_j: 0\leq a\leq n-1\}$. We select an integer $b\in\mathbb Z_n$. For $i=0,\ldots, n-1$, we join the near $1$-factor $L^*_i$ on $V_0$ to the near $1$-factor $L^*_{i+b}$ on $V_1$ (subscripts are considered modulo $n$) and add the edge $[i_0, (i+b)_1]$. We obtain a $1$-factor of $K_{2n}$. We denote by $\mathcal F(GK^*_n, b)$ the resulting set of $1$-factors of $K_{2n}$. We denote by $\mathcal F(\lambda GK_n, b)$ the multiset consisting of $\lambda$ copies of $\mathcal F(GK^*_n, b)$. Observe that the set $\{[i_0, (i+b)_1]: 0\leq i\leq n-1\}$ corresponds to the $1$-factor $M_b$. Hence $\mathcal F(\lambda GK_n, b)$ contains every edge of $M_b$ exactly $\lambda$ times.
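For instance, for $n = 5$ and $b = 0$, the near $1$-factor $L^*_0=\{[1, 4], [2, 3]\}$ (with the vertex $0$ unmatched) gives rise to the $1$-factor
\begin{equation*}
\{[1_0, 4_0], [2_0, 3_0]\}\cup\{[1_1, 4_1], [2_1, 3_1]\}\cup\{[0_0, 0_1]\}
\end{equation*}
of $K_{10}$, which belongs to $\mathcal F(GK^*_5, 0)$.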
\begin{figure*}
\caption{The vertices $a_0$, $b_1$ of $\lambda K_G$ are represented on the
left and on the right, respectively. Each edge-orbit $M_a$ is a $1$-factor
of $\lambda K_G$. If $M_a$ contains the edge $[0_0, a_1]$, then $M_{n-a}$ contains
the edge $[a_0, 0_1]$.}
\label{fig1}
\end{figure*}
In the following propositions we will construct $1$-factorizations of $\lambda K_G$ which are not simple. They are obtained as described in Lemma \ref{lemma3}. Moreover, Lemma \ref{lemma4} will be useful to prove that these $1$-factorizations are indecomposable. It is straightforward to prove that the following holds.
\begin{lemma}\label{lemma3} Let ${\cal F}'=\{F_1,\dots ,F_m\}$ be a set of $1-$factors of $\lambda K_G$ such that each $F_i$ contains no edge of type $[a_j, b_j]$, has trivial stabilizer in $H$ and $F_r\notin F_i^H$ for each pair $(i,r)$ with $i\ne r$.
Let ${\cal M}$ be the subset of $\{M_a : a\in \mathbb Z_n\}$ containing all the
$1-$factors $M_a$ such that $t(M_a)=\sum_{i=1}^{m}|E(M_a)\cap E(F_i)| > 0$.
If $|H|=n$ is even and $t(M_a)\le \lambda$ for every $M_a\in {\cal M}$, then there exists a $1-$factorization of $\lambda K_G$ whose $1-$factors are exactly those of $F_1^H\cup \dots \cup F_m^H \cup {\cal F}(\lambda GK_n)$ together with $\lambda - t(M_a)$ copies of each $M_a\in {\cal M}$ and $\lambda$ copies of each $M_a\notin {\cal M}$.
If $|H|=n$ is odd, $t(M_a)\le \lambda$ for every $M_a\in {\cal M}$ and there exists at least one $1-$factor $M_b\in \{M_a \ : \ a\in \mathbb Z_n\} \smallsetminus {\cal M}$, then there exists a $1-$factorization of $\lambda K_G$ whose $1-$factors are exactly those of $F_1^H\cup \dots \cup F_m^H\cup {\cal F}(\lambda GK_n, b)$ together with $\lambda - t(M_a)$ copies of each $M_a\in {\cal M}$ and $\lambda$ copies of each $M_a\notin {\cal M}\cup \{M_b\}$. \qed \end{lemma}
\begin{lemma}\label{lemma4} Let ${\cal F}$ be the $1-$factorization of $\lambda K_G$ obtained in Lemma \ref{lemma3} starting from ${\cal F}'=\{F_1, \dots ,F_m\}$ and the set ${\cal M}$. Let $\mathcal F_0\subseteq\mathcal F$ be a $1$-factorization of $\lambda_0 K_G$, $\lambda_0\leq\lambda$. Let $F_i\in\mathcal F'$ and $M_a\in\mathcal M$ be such that $F_i$ contains exactly one edge of $M_a$. If one of the following conditions holds:
\begin{description}
\item [(i)] each $1$-factor in $\mathcal F'\smallsetminus\{F_i\}$ contains no edge of $M_a$;
\item [(ii)] each $1$-factor $F\in\mathcal F'\smallsetminus\{F_i\}$ containing some edge of $M_a$
is such that either $F^H \subset {\cal F}_0$ or $F^H\cap {\cal F}_0= \emptyset$. \end{description}
then it is either $F_i^H \subset {\cal F}_0$ or $F_i^H \cap {\cal F}_0=\emptyset$. \end{lemma}
\begin{proof} Assume that $F_i$ satisfies property $(i)$. By Lemma \ref{lemma1}, each edge of $M_a$ appears exactly once in the multiset $E(F^H_i)$. Since each $1$-factor in $\mathcal F'\smallsetminus\{F_i\}$ contains no edge of $M_a$, the $1$-factorization $\mathcal F$ contains exactly $\lambda-1$ copies of $M_a$. The assertion follows from Lemma \ref{lemma2} by setting $\mathcal S=F_i^H$ and $M=M_a$.
Assume that $F_i$ satisfies property $(ii)$. We can consider the subset $\mathcal F_1$ of $\mathcal F'\smallsetminus\{F_i\}$ consisting of the $1$-factors $F$ containing $s_F\geq 1$ edges of $M_a$ and whose orbit $F^H$ is contained in $\mathcal F_0$. The set $\mathcal F_1$ might be empty. By Lemma \ref{lemma1}, each edge of $M_a$ appears exactly $s_F\geq 1$ times in the multiset $E(F^H)$, where $F\in\mathcal F_1$. Hence $\lambda_0\geq\sum_{F\in\mathcal F_1} s_F\geq 0$ (if $\mathcal F_1=\emptyset$, then $\sum_{F\in\mathcal F_1} s_F=0$). Set ${\cal S}=F_i^H\cap {\cal F}_0 $ and suppose that
$0 < |\mathcal S| < n$, where $n=|F^H_i|$. Let $M'$ be the subset of $M_a$ consisting of the edges of $M_a$ that are contained in the multiset $E({\cal S})$. By the proof of Lemma \ref{lemma1}, the set $M'$ consists of $|\mathcal S|< n$ distinct edges. Each edge of $M_a\smallsetminus M'$ appears exactly $\sum_{F\in\mathcal F_1} s_F\leq\lambda_0$ times among the edges of the $1$-factors in $\mathcal S\cup (\cup_{F\in\mathcal F_1} F^H)$. Each edge of $M'$ appears exactly $1+\sum_{F\in\mathcal F_1} s_F$ times among the edges of the $1$-factors in $\mathcal S\cup (\cup_{F\in\mathcal F_1} F^H)$. Whence $\sum_{F\in\mathcal F_1} s_F<\lambda_0$, otherwise the edges of $M'$ would appear at least $\lambda_0+1$ times among the edges of the $1$-factors in $\mathcal F_0$. Since the edges of $M_a\smallsetminus M'$ appear $\sum_{F\in\mathcal F_1} s_F<\lambda_0$ times, the $1$-factorization $\mathcal F_0$ must contain $\lambda_0-\sum_{F\in\mathcal F_1} s_F>0$ copies of $M_a$. Consequently, each edge of $M'$ appears at least $\lambda_0+1$ times among the edges of the $1$-factors in $\mathcal F_0$. That yields a contradiction. Hence, either $\mathcal F_0$ contains no $1$-factor of $F^H_i$ or $F^H_i\subseteq\mathcal F_0$.
\begin{proposition}\label{pro1} Let $n\geq 5$ and $(n-2)/3\leq\lambda\leq n-2$ such that $n-\lambda$ is even. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$. If $\lambda< n-2$, then $n>5$ and we consider the $1$-factor $A$ in Figure \ref{fig2}(a). For $\lambda=n-2$ we consider the $1$-factor $A$ in Figure \ref{fig3_AB} with $\alpha=1$. If $\lambda < n-2$, then $A$ contains exactly $(n-\lambda-2)/2$ edges of $M_1$ as well as $(n-\lambda-2)/2$ edges of $M_{n-1}$. It also contains $\lambda$ edges of $M_0$, one edge of $M_2$ and one edge of $M_{n-2}$. If $\lambda = n-2$, then $A$ contains exactly $\lambda$ edges of $M_0$ as well as one edge of $M_1$ and one edge of $M_{n-1}$. In both cases the stabilizer of $A$ in $H$ is trivial and when $\lambda < n-2$, the condition $(n-2)/3 \le \lambda$ assures that $(n-\lambda -2)/2 \le \lambda$. Therefore ${\cal F}'=\{A\}$ satisfies Lemma \ref{lemma3} and a $1-$factorization ${\cal F}$ of $\lambda K_G$ is constructed as prescribed. We prove that ${\cal F}$ is indecomposable. Suppose that ${\cal F}_0\subseteq {\cal F}$ is a $1-$factorization of $\lambda_0 K_G$, $\lambda_0< \lambda$. The $1-$factor $A$ satisfies condition (i) of Lemma \ref{lemma4} (set $M_a= M_2$ or $M_a=M_1$ according to whether $\lambda < n-2$ or $\lambda = n-2$, respectively). Therefore it is either $A^H\subset {\cal F}_0$ or $A^H\cap {\cal F}_0=\emptyset$. In the former case, each edge of $M_0$ appears $\lambda$ times in the multiset $E({\cal F}_0)$, that is, $\lambda = \lambda_0$, a contradiction. In the latter case, no edge of $M_0$ appears in $E(\mathcal F_0)$, a contradiction. \end{proof}
\begin{proposition}\label{pro2} Let $n\geq 5$ and $(n+1)/3\leq\lambda\leq n-3$ such that $n-\lambda$ is odd. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} The proof is similar to the proof of Proposition \ref{pro1}.
\end{proof}
\begin{figure*}
\caption{The $1$-factor $A$ in the case: (a) $n-\lambda$ even, $\lambda< n-2$; (b) $n-\lambda$ odd}
\label{fig2}
\end{figure*}
\begin{proposition}\label{pro4} Let $n\geq 7$ and $n-1\leq\lambda\leq n$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$ and set $\lambda=n-1+r$, where $0\leq r\leq 1$. We consider the $1$-factors $A$ and $B_r$ in Figure \ref{fig3_AB}. In the definition of $A$, we set $\alpha=3$ if $r=0$; $\alpha=2$ if $r=1$. The $1$-factors $A$, $B_r$ have trivial stabilizer in $H$. Moreover, the multiset $E(A)\cup E(B_r)$ is contained in the multiset
$E(\mathcal M)$, where $\mathcal M=\{M_0, M_1, M_{\alpha}, M_{n-\alpha}, M_{r+2}\}$. We note that the $1$-factors in $\mathcal M$ are pairwise distinct, since $n\geq 7$. Whence $t(M_a)=|E(M_a)\cap E(A)|+|E(M_a)\cap E(B_r)|\leq\lambda$ for every $M_a\in\mathcal M$. More specifically, $t(M_0)=(n-2)+(r+1)=\lambda$, $t(M_1)=n-r-2=\lambda-1$, $t(M_a)=1$ for every $a\in\{\alpha, n-\alpha, r+2\}$. By Lemma \ref{lemma3}, we construct a $1$-factorization $\mathcal F$ of $\lambda K_G$ that contains $A^H\cup B^H_r$.
We prove that ${\cal F}$ is indecomposable. Firstly, note that if ${\cal F}_0\subseteq {\cal F}$ is a $1-$factorization of $\lambda_0 K_G$, $\lambda_0<\lambda$, then $F^H\subset {\cal F}_0$ or $F^H\cap {\cal F}_0=\emptyset$ for $F\in\{A, B_r\}$. This follows from Lemma \ref{lemma4} by observing that $A$ and $M_{\alpha}$ satisfy condition $(i)$. The same can be repeated for $B_r$ and $M_{r+2}$. If $A^H\subset {\cal F}_0$ and $B_r^H\subset {\cal F}_0$, then each edge of $M_0$ appears $\lambda$ times in the multiset $E(\mathcal F_0)$ and then $\lambda_0=\lambda$, a contradiction. In the same manner, if $A^H\cap {\cal F}_0= B_r^H \cap {\cal F}_0=\emptyset$, then no edge of $M_0$ appears in the multiset $E(\mathcal F_0)$, a contradiction. Therefore, exactly one of the orbits $A^H$, $B^H_r$ is contained in $\mathcal F_0$. Without loss of generality, we can assume that $A^H\subset {\cal F}_0$ and $B_r^H \cap {\cal F}_0=\emptyset$. Each edge of $M_0$ appears at least $n-2$ times in the multiset $E({\cal F}_0)$, that is, $\lambda_0\ge n-2$. Each edge of $M_1$ appears at least $n-2-r$ times in the multiset $E({\cal F}\smallsetminus {\cal F}_0)$, that is, $\lambda - \lambda_0 \ge n-2-r$. By summing up these two relations, we have $\lambda \ge 2n-4-r$ and since $\lambda \le n$, this yields $n\le 5$, a contradiction. \end{proof}
\begin{figure*}
\caption{The $1$-factors $A$ and $B_r$, $r=0, 1$, defined in the proof of Proposition \protect\ref{pro4}.}
\label{fig3_AB}
\end{figure*}
\begin{proposition}\label{pro3} Let $n\geq 9$ and $n+1\leq\lambda\leq 2n-8$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$. We distinguish the cases $n\neq 11$ and $n=11$. For $n\neq 11$, we set $\lambda=n+r$, where $1\leq r\leq n-8$, and consider the $1$-factors $A$ and $B=B_0$ in Figure \ref{fig3_AB}. In the definition of $A$ we set $\alpha=3$. We also define the $1$-factors $C$ and $D$ in Figure \ref{fig4_CD}.
For $n=11$, we set $\lambda=9+r$, where $3\leq r\leq 5$. We consider the $1$-factor $A$ in Figure \ref{fig3_AB}, where $\alpha=2$ or $\alpha=3$, according to whether $r=3, 4$ or $r=5$, respectively. For $r=3, 4$ we also consider the $1$-factor $B=\{[i_0, i_1]: 1\leq i\leq r\}\cup$ $\{[i_0, (i+1)_1]: r+1\leq i\leq 10\}\cup$$\{[0_0, (r+1)_1]\}$. For $r=5$, we consider the $1$-factor $B=B_0$ in Figure \ref{fig3_AB} and the $1$-factor $C=\{[i_0, i_1]: 1\leq i\leq 4\}\cup$$\{[i_0, (i+1)_1]: 5\leq i\leq 10, i\neq 6\}\cup$ $\{[0_0, 7_1], [6_0, 5_1]\}$. We can construct a $1$-factorization $\mathcal F$ of $\lambda K_G$ as described in Lemma \ref{lemma3}. By Lemma \ref{lemma4}, the $1$-factorization $\mathcal F$ is indecomposable. The proof is similar to that of Proposition \ref{pro4}. \end{proof}
\begin{proposition}\label{pro7} Let $n\geq 9$ and $\lambda=2n-7$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} We set $\lambda=n+r$ with $r=n-7$ and consider the $1$-factors in $\mathcal F'=\{A, B, C, D\}$, where $A$ and $B=B_0$ are described in Figure \ref{fig3_AB}. In the definition of $A$ we set $\alpha=3$. The $1$-factors $C$ and $D$ are defined in Figure \ref{fig4_CD}. The assertion follows from Lemma \ref{lemma4}. \end{proof}
\begin{figure*}
\caption{The $1$-factors $C$ and $D$ defined in the proof of Proposition \protect\ref{pro3}.}
\label{fig4_CD}
\end{figure*}
\begin{proposition}\label{pro5} Let $n\geq 9$ and $2n-6\leq\lambda\leq 2n-3$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$ and set $\lambda=2n-r$, where $3\leq r\leq 6$. We consider the $1$-factors $A$ and $B=B_1$ in Figure \ref{fig3_AB}. In the definition of the $1$-factor $A$, the parameter $\alpha$ assumes the value $\alpha=2$ if $r\in\{3, 5, 6\}$; $\alpha=4$ if $r=4$. We define the $1$-factor $C$ as in Figure \ref{fig6_AC}. We also consider the $1$-factor $D_r$ in Figure \ref{fig7_D} for $r=3, 4$ and in Figure \ref{fig8_D} for $r=5, 6$. We can apply Lemma \ref{lemma3} and construct a $1$-factorization $\mathcal F$ of $\lambda K_G$ as prescribed. By Lemma \ref{lemma4}, we can prove that $\mathcal F$ is indecomposable. \end{proof}
\begin{figure*}
\caption{The $1$-factors $C$ and $R$ defined in the proofs of Propositions \protect\ref{pro5} and \protect\ref{pro8},
respectively.}
\label{fig6_AC}
\end{figure*}
\begin{figure*}
\caption{The $1$-factor $D_r$, $r=3, 4$, defined in the proof of Proposition \protect\ref{pro5}.}
\label{fig7_D}
\end{figure*}
\begin{figure*}
\caption{The $1$-factor $D_r$, $r=5, 6$, defined in the proof of Proposition \protect\ref{pro5}.}
\label{fig8_D}
\end{figure*}
\begin{proposition}\label{pro6} Let $n\geq 9$ and $\lambda=2n-2$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$. We distinguish the cases $n\geq 11$ and $n=9, 10$. For $n\geq 11$ we consider the $1$-factor $A$ in Figure \ref{fig3_AB} with $\alpha=2$ and the $1$-factor $D=B_1$. We also consider the $1$-factors $B$, $C$ in Figure \ref{fig9_BCD}.
For $n=9$, $10$, we consider two copies of the $1$-factor $A$ in Figure \ref{fig3_AB}. We denote by $A$ the copy with $\alpha=2$ and by $B$ the copy with $\alpha=3$ or $4$, according to whether $n=10$ or $n=9$, respectively. We consider the $1$-factors $C$, $D$ and $R_n$, where $C=\{[i_0, (i+1)_1]: 2\leq i\leq n-1\}$$\cup\{[0_0, 2_1], [1_0, 1_1]\}$; $D=\{[i_0, (i+1)_1]: 2\leq i\leq n-3\}$$\cup\{[0_0, (n-1)_1], [(n-1)_0, 0_1], [(n-2)_0, 2_1]$, $[1_0, 1_1]\}$. $R_9=\{[i_0, (i+1)_1]: 0\leq i\leq 2\}$$\cup\{[i_0, (i+2)_1]: 3\leq i\leq 7\}$ $\cup\{[8_0, 4_1]\}$; $R_{10}=\{[i_0, (i+1)_1]: 0\leq i\leq 2\}$$\cup\{[i_0, (i+2)_1]: 3\leq i\leq 7\}$ $\cup\{[8_0, 0_1], [9_0, 4_1]\}$. We can construct a $1$-factorization $\mathcal F$ as described in Lemma \ref{lemma3}. By Lemma \ref{lemma4}, we can prove that $\mathcal F$ is indecomposable. \end{proof}
\begin{proposition}\label{pro8} Let $n\geq 9$ and $2n-1\leq\lambda\leq 2n$. There exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple. \end{proposition}
\begin{proof} Identify $\lambda K_{2n}$ with $\lambda K_G$. We consider two copies of the $1$-factor $A$ in Figure \ref{fig3_AB}. We denote by $A$ the copy with $\alpha=2$ ($\alpha=4$ if $n=9$ and $\lambda=18$) and by $B$ the copy with $\alpha=3$. We also consider the $1$-factors $C$, $D$, $R$. For $n\geq 9$ and $(n, \lambda)\neq (9, 18)$, the $1$-factor $C$ corresponds to the $1$-factor $B_1$ in Figure \ref{fig3_AB}. For $(n, \lambda)=(9, 18)$ it corresponds to the $1$-factor $C$ in Figure \ref{fig9_BCD}. For $n\geq 9$ and $\lambda=2n-1$, the $1$-factor $D$ corresponds to the $1$-factor $B_0$ in Figure \ref{fig3_AB}. For $n>9$ and $\lambda=2n$, the $1$-factor $D$ is defined in Figure \ref{fig9_BCD}. For $n=9$ and $\lambda=2n$, it corresponds to the $1$-factor $B_0$ in Figure \ref{fig3_AB}. For $n\geq 9$ and $(n, \lambda)\neq (9, 18)$, the $1$-factor $R$ is defined in Figure \ref{fig6_AC}. In the definition of $R$ we set $\beta=3$ or $\beta=4$ according to whether $\lambda=2n-1$ or $\lambda=2n$, respectively ($\beta=5$ if $n=10$ and $\lambda=2n$). For $(n, \lambda)=(9, 18)$, we set $R=\{[i_0, (i+1)_1]: 0\leq i\leq 4, i=8\}$$\cup\{[i_0, (i+2)_1]: 5\leq i\leq 6\}$ $\cup\{[7_0, 6_1]\}$. We construct a $1$-factorization $\mathcal F$ of $\lambda K_G$ as described in Lemma \ref{lemma3}. By Lemma \ref{lemma4}, we can prove that $\mathcal F$ is indecomposable. \end{proof}
\begin{figure*}
\caption{The $1$-factors $B$, $C$ defined in the proof of Proposition \protect\ref{pro6}
and the $1$-factor $D$ defined in the proof of Proposition \protect\ref{pro8} for $\lambda=2n$.}
\label{fig9_BCD}
\end{figure*}
By combining the constructions in the previous propositions, we obtain the following result.
\begin{theorem}\label{th1} Let $n\geq 9$. For every $(n-2)/3\leq\lambda\leq 2n$ there exists an indecomposable $1$-factorization of $\lambda K_{2n}$ which is not simple.\qed \end{theorem}
\section{Simple and indecomposable $1$-factorizations.}\label{sec:IOF_simple}
In this section we use Theorem \ref{th1} and Corollary $4.1$ in \cite{CCR} to find simple and indecomposable $1$-factorizations of $\lambda K_{2n}$. We also generalize the result in \cite{CCR} about the existence of simple and indecomposable $1$-factorizations of $\lambda K_{2n}$, where $2n-1$ is a prime and $\lambda=(n-1)/2$. We recall the statement of Corollary $4.1$.
\begin{cor4.1}\label{cor4.1}\cite{CCR} If there exists an indecomposable $1$-factorization of $\lambda K_{2n}$ with $\lambda\leq 2n-1$, then there exists a simple and indecomposable $1$-factorization of $\lambda K_{2s}$ for $s\geq 2n$. \end{cor4.1}
The following results hold.
\begin{theorem}\label{th2} Let $s\geq 18$. For every $2\leq\lambda\leq 2\lfloor s/2\rfloor-1$ there exists a simple and indecomposable $1$-factorization of $\lambda K_{2s}$. \end{theorem}
\begin{proof} For every $n\geq 9$ we set $I_n=\{\lambda\in\mathbb Z: (n-2)/3\leq\lambda\leq 2n-1\}$ and note that $I_n\cup I_{n+1}=\{\lambda\in\mathbb Z: (n-2)/3\leq\lambda\leq 2(n+1)-1\}$. Consider $s\geq 2n\geq 2\cdot 9$. By Corollary $4.1$ of \cite{CCR}, for every $\lambda\in I_n$ there exists a simple and indecomposable $1$-factorization of $\lambda K_{2s}$. Since we can consider $9\leq n\leq\lfloor s/2\rfloor$, we obtain a simple and indecomposable $1$-factorization of $\lambda K_{2s}$ for every $\lambda\in\cup^{\lfloor s/2\rfloor}_{n=9}\, I_n=\{\lambda\in\mathbb Z: 7/3\leq\lambda\leq 2\lfloor s/2\rfloor-1\}$. Since $s\geq 2\cdot 5$, from Proposition \ref{pro2} and Corollary $4.1$ we also obtain a simple and indecomposable $1$-factorization of $\lambda K_{2s}$ for $\lambda=2$. Hence the assertion follows. \end{proof}
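
The interval-gluing step in the proof above can also be confirmed by a quick machine check. The following Python sketch (ours, purely illustrative and not part of the proof) verifies, for a range of values of $s$, that the sets $I_n$ with $9\leq n\leq\lfloor s/2\rfloor$ cover every integer $\lambda$ with $3\leq\lambda\leq 2\lfloor s/2\rfloor-1$.
\begin{verbatim}
# Quick check (ours) of the interval-gluing step in the proof above:
# the sets I_n = {lambda in Z : (n-2)/3 <= lambda <= 2n-1}, for
# 9 <= n <= s//2, cover every integer lambda with 3 <= lambda <= 2*(s//2)-1.
from math import ceil

def covered(s):
    target = set(range(3, 2*(s//2)))               # 3, ..., 2*floor(s/2)-1
    union = set()
    for n in range(9, s//2 + 1):
        union |= set(range(ceil((n - 2)/3), 2*n))  # I_n
    return target <= union

print(all(covered(s) for s in range(18, 200)))     # expected: True
\end{verbatim}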
\begin{theorem}\label{th3} Let $2n-1$ be a prime power and let $\lambda=n-1$. There exists a simple and indecomposable $1$-factorization of $\lambda K_{2n}$. \end{theorem}
\begin{proof} Let $2n-1=p^m$, with $p$ an odd prime and $m\ge 1$. Let $GF(p^m)$ be the Galois field of order $p^m$ and let $v$ be a generator of the cyclic multiplicative group $GF(p^m)^*=GF(p^m)-\{0\}$. It is well known that $v$ is a root of an irreducible polynomial over $\mathbb Z_p$ of degree $m$, that the field $GF(p^m)$ is an algebraic extension of $\mathbb Z_p$, and that
$GF(p^m)=\mathbb Z_p(v)=\{a_0+a_1v+a_2v^2+\dots +a_{m-1}v^{m-1} \ | \ a_i\in \mathbb Z_p\}$. Let $V=GF(p^m)\cup \{\infty \}$, $\infty \notin GF(p^m)$, and identify the vertices of the complete multigraph $(n-1)K_{2n}$ with the elements of $V$; thus the edges are in the multiset $(n-1)\binom{V}{2}$. The affine linear group $AGL(1,p^m) = \{\phi_{b,a}: a, b \in GF(p^m), b\ne 0\}$ is a permutation group on $V$ where each $\phi_{b,a}$ fixes $\infty$ and maps $x\in V\smallsetminus \{\infty\}$ onto $xb + a$. This action extends to edges and $1$-factors. For each edge $e=[x,y]$ and for each $1$-factor $F$, we set $e^{\phi_{b,a}}=eb+a = [xb+a,yb+a]$ and $F^{\phi_{b,a}}=Fb+a$.
If $x\ne \infty$ and $y\ne \infty$ we call $\partial e = \{\pm (y-x)\}$ the {\it difference set} of $e$.
Consider the following set of edges:
\vskip0.2truecm\noindent $A_0=\{[(2i-1)+a_1v+a_2v^2+ \dots + a_{m-1}v^{m-1}, 2i+a_1v+\dots + a_{m-1}v^{m-1}],$
$\ \ \ \ 1\le i \le (p-1)/2, a_r \in \mathbb Z_p, 1\leq r\leq m-1 \}$
\vskip0.2truecm\noindent $A_1=\{[(2i-1)v+a_2v^2+ \dots + a_{m-1}v^{m-1}, (2i)v+a_2v^2+\dots + a_{m-1}v^{m-1}],$
$\ \ \ \ \ 1\le i \le (p-1)/2, \ a_r \in \mathbb Z_p, 2\leq r\leq m-1 \}$
\vskip0.2truecm\noindent $A_2=\{[(2i-1)v^2+ \dots + a_{m-1}v^{m-1}, (2i)v^2+\dots + a_{m-1}v^{m-1}],$
$\ \ \ \ \ 1\le i \le (p-1)/2, \ a_r \in \mathbb Z_p, 3\leq r\leq m-1 \}$
\vskip0.3truecm\noindent $\dots$
\noindent $A_{m-1}=\{[(2i-1)v^{m-1}, (2i)v^{m-1}], \ 1\le i \le (p-1)/2 \}$.
\vskip 0.3truecm \noindent Obviously, if $m=1$ then $\mathbb Z_p(v)=\mathbb Z_p$ and we simply take the set $A_0$. \vskip0.3truecm \noindent Observe that each set $A_j$, $j=0,\dots, m-1$, contains exactly $p^{m-j-1}(p-1)/2$ edges with difference set $\{\pm v^j\}$. Let $F$ be the $1$-factor given by $\{[0,\infty]\}\cup A_0\cup A_1 \cup \dots \cup A_{m-1}$. The set ${\cal F}=F^{AGL(1,p^m)}$ is a simple and indecomposable $1$-factorization of $(n-1)K_{2n}$. \end{proof}
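
For the reader's convenience, the construction can be tested by machine in the smallest case $p=7$, $m=1$, so that $2n=8$ and $\lambda=3$. The following Python sketch (an independent check, not part of the proof; the brute-force test at the end is ours) builds $F$ and its orbit under $AGL(1,7)$, checks that the orbit is a simple $1$-factorization of $3K_8$, and searches for a sub-$1$-factorization with $\lambda_0=1$, none of which should exist by Theorem \ref{th3}.
\begin{verbatim}
# Independent machine check (ours) of the construction above for
# p = 7, m = 1, i.e. 2n = 8 and lambda = n - 1 = 3.
from itertools import combinations
from collections import Counter

p = 7
INF = 'inf'

def edge(x, y):
    return frozenset((x, y))

# F = {[0, infinity]} u A_0 with A_0 = {[2i-1, 2i] : 1 <= i <= (p-1)/2}
F = {edge(0, INF)} | {edge(2*i - 1, 2*i) for i in range(1, (p - 1)//2 + 1)}

def act(factor, b, a):
    # phi_{b,a} fixes infinity and sends x to x*b + a (mod p)
    f = lambda x: INF if x == INF else (x*b + a) % p
    return frozenset(edge(f(x), f(y)) for x, y in map(tuple, factor))

orbit = {act(F, b, a) for b in range(1, p) for a in range(p)}
print('distinct 1-factors in the orbit:', len(orbit))     # expected: 21

cover = Counter(e for fac in orbit for e in fac)
print('edge multiplicities:', set(cover.values()),        # expected: {3}
      ' pairs covered:', len(cover))                      # expected: 28

# If the orbit were decomposable, one of the two parts would already be
# a 1-factorization of K_8 (lambda_0 = 1), since lambda = 3.
def is_1fact(sub):
    c = Counter(e for fac in sub for e in fac)
    return len(c) == 28 and set(c.values()) == {1}

print('decomposable:', any(is_1fact(s) for s in combinations(orbit, 7)))
\end{verbatim}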
\section{Conclusions.}\label{sec:final}
Our methods of construction can be used to obtain indecomposable $1$-factorizations of $\lambda K_{2n}$ for some values of $\lambda >2n$. These $1$-factorizations are not simple and do not provide simple $1$-factorizations, since for these values of $\lambda$ we cannot apply Corollary $4.1$ of \cite{CCR}.
As remarked in Section \ref{sec:intro}, a necessary condition for the existence of an indecomposable $1$-factorization of $\lambda K_{2n}$ is $\lambda<[n(2n-1)]^{n(2n-1)}\binom{2n^3+n^2-n+1}{2n^2-n}$. It would be interesting to know whether for every $n\geq 4$ there exists a parameter $\lambda(n)<[n(2n-1)]^{n(2n-1)}\binom{2n^3+n^2-n+1}{2n^2-n}$ depending on $n$ such that for every $\lambda>\lambda(n)$ there is no indecomposable $1$-factorization of $\lambda K_{2n}$.
\end{document} | arXiv |
Populations and Evolution
Submissions received from Wed 19 Jan 22 to Thu 20 Jan 22, announced Fri, 21 Jan 22
New submissions for Fri, 21 Jan 22
[1] arXiv:2201.07926 [pdf, other]
Title: Resolving conceptual issues in Modern Coexistence Theory
Authors: Evan Johnson, Alan Hastings
Subjects: Populations and Evolution (q-bio.PE)
In this paper, we discuss the conceptual underpinnings of Modern Coexistence Theory (MCT), a quantitative framework for understanding ecological coexistence. In order to use MCT to infer how species are coexisting, one must relate a complex model (which simulates coexistence in the real world) to simple models in which previously proposed explanations for coexistence have been codified. This can be accomplished in three steps: 1) relating the construct of coexistence to invasion growth rates, 2) mathematically partitioning the invasion growth rates into coexistence mechanisms (i.e., classes of explanations for coexistence), and 3) relating coexistence mechanisms to simple explanations for coexistence. Previous research has primarily focused on step 2. Here, we discuss the other crucial steps and their implications for inferring the mechanisms of coexistence in real communities.
Our discussion of step 3 -- relating coexistence mechanisms to simple explanations for coexistence -- serves as a heuristic guide for hypothesizing about the causes of coexistence in new models, but it also addresses misconceptions about coexistence mechanisms. For example, the storage effect has little to do with bet-hedging or "storage" via a robust life-history stage; relative nonlinearity is more likely to promote coexistence than originally thought; and fitness-density covariance is an amalgam of a large number of previously proposed explanations for coexistence (e.g., the competition-colonization trade-off, heteromyopia, spatially-varying resource supply ratios). Additionally, we review a number of topics in MCT, including the role of "scaling factors"; whether coexistence mechanisms are approximations; whether the magnitude or sign of invasion growth rates matters more; whether Hutchinson solved the paradox of the plankton; the scale-dependence of coexistence mechanisms; and much more.
Title: Coexistence in spatiotemporally fluctuating environments
Comments: 40 pages, 1 figure, 2 tables
Ecologists have put forward many explanations for coexistence, but these are only partial explanations; nature is complex, so it is reasonable to assume that in any given ecological community, multiple mechanisms of coexistence are operating at the same time. Here, we present a methodology for quantifying the relative importance of different explanations for coexistence, based on an extension of Modern Coexistence Theory. Current versions of Modern Coexistence Theory only allow for the analysis of communities that are affected by spatial or temporal environmental variation, but not both. We show how to analyze communities with spatiotemporal fluctuations, how to parse the importance of spatial variation and temporal variation, and how to measure everything with either mathematical expressions or simulation experiments. Our extension of Modern Coexistence Theory allows empiricists to use realistic models and more data to better infer the mechanisms of coexistence in real communities.
Title: Invasion Dynamics in the Biased Voter Process
Authors: Loke Durocher, Panagiotis Karras, Andreas Pavlogiannis, Josef Tkadlec
Comments: 8 pages, 3 figures
Subjects: Populations and Evolution (q-bio.PE); Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS); Computer Science and Game Theory (cs.GT); Social and Information Networks (cs.SI)
The voter process is a classic stochastic process that models the invasion of a mutant trait $A$ (e.g., a new opinion, belief, legend, genetic mutation, magnetic spin) in a population of agents (e.g., people, genes, particles) who share a resident trait $B$, spread over the nodes of a graph. An agent may adopt the trait of one of its neighbors at any time, while the invasion bias $r\in(0,\infty)$ quantifies the stochastic preference towards ($r>1$) or against ($r<1$) adopting $A$ over $B$. Success is measured in terms of the fixation probability, i.e., the probability that eventually all agents have adopted the mutant trait $A$. In this paper we study the problem of fixation probability maximization under this model: given a budget $k$, find a set of $k$ agents to initiate the invasion that maximizes the fixation probability. We show that the problem is NP-hard for both $r>1$ and $r<1$, while the latter case is also inapproximable within any multiplicative factor. On the positive side, we show that when $r>1$, the optimization function is submodular and thus can be greedily approximated within a factor $1-1/e$. An experimental evaluation of some proposed heuristics corroborates our results.
[4] arXiv:2201.08224 [pdf]
Title: A model of the transmission of SARS-Cov-2 (Covid-19) with an underlying condition of Diabetes
Authors: Samuel Okyere, Joseph Ackora Prah
It is well established that people with diabetes are more likely to have serious complications from COVID-19. Nearly 1 in 5 COVID-19 deaths in the African region are linked to diabetes. The World Health Organization finds that 18.3% of COVID-19 deaths in Africa are among people with diabetes. In this paper, we propose a deterministic mathematical model to study the comorbidity of diabetes and COVID-19. We consider a population with an underlying condition of diabetes. The reproductive number of the model has been determined. The steady state of the model and the stability analyses of the disease-free and endemic equilibrium were also established. The endemic equilibrium was found to be stable for . The results of the numerical simulation show more COVID-19 related deaths in the population with an underlying condition of diabetes as compared to the diabetes-free population. Optimal controls were incorporated into the model to determine the effectiveness of two preventive control measures such as lockdown and vaccination. Both measures were very effective in curbing the spread of the disease.
Replacements for Fri, 21 Jan 22
[5] arXiv:2112.02809 (replaced) [pdf, other]
Title: A thermodynamic threshold for Darwinian evolution
Authors: Artemy Kolchinsky
Subjects: Populations and Evolution (q-bio.PE); Statistical Mechanics (cond-mat.stat-mech); Biological Physics (physics.bio-ph)
Title: Persistent Strange attractors in 3D Polymatrix Replicators
Authors: Telmo Peixe, Alexandre A. Rodrigues
Subjects: Dynamical Systems (math.DS); Theoretical Economics (econ.TH); Populations and Evolution (q-bio.PE)
| CommonCrawl
\begin{document}
\maketitle
\begin{abstract} Let $k$ be a field of characteristic $\neq 2$. We give an answer to the field intersection problem of quartic generic polynomials over $k$ via formal Tschirnhausen transformation and multi-resolvent polynomials. \end{abstract}
\section{Introduction}\label{seIntro}
Let $k$ be a field of characteristic $\neq 2$ and $k(\bs)$ the rational function field over $k$ with $n$ indeterminates $\bs=(s_1,\ldots,s_n)$. Let $G$ be a finite group.
A polynomial $f_\bs(X)\in k(\bs)[X]$ is called $k$-generic for $G$ if the Galois group of $f_\bs(X)$ over $k(\bs)$ is isomorphic to $G$ and every $G$-Galois extension $L/M$ over an arbitrary infinite field $M\supset k$ can be obtained as $L=\Spl_M f_\ba(X)$, the splitting field of $f_\ba(X)$ over $M$, for some $\ba=(a_1,\ldots,a_n)\in M^n$ (cf. \cite{DeM83}, \cite{Kem01}, \cite{JLY02}).
Note that we always take an infinite field $M$ as a base field $M$, $M\supset k$, of a $G$-extension $L/M$. Examples of $k$-generic polynomials for $G$ are known for various pairs of $(k,G)$ (for example, see \cite{Kem94}, \cite{KM00}, \cite{JLY02}, \cite{Rik04}).
Let $f_\bs^G(X)\in k(\bs)[X]$ be a $k$-generic polynomial for $G$. Kemper \cite{Kem01} showed that for a subgroup $H$ of $G$ every $H$-Galois extension over an infinite field $M\supset k$ is also given by a specialization of $f_\bs^G(X)$ as in the similar manner. The aim of this paper is to study the field intersection problem $\mathbf{Int}(f_\bs^G/M)$ of $f_\bs^G(X)$ over $M$:
\begin{center} $\mathbf{Int}(f_\bs^G/M)$ : for a field $M\supset k$ and $\ba,{\ba'}\in M^n$, determine the\\ \hspace*{1.7cm} intersection of $\Spl_M f_\ba^G(X)$ and $\Spl_M f_{\ba'}^G(X)$. \end{center}
It is desirable to give an answer to the problem within the base field $M$ by using the data $\ba,{\ba'}\in M^n$. As a special case, this problem includes the field isomorphism problem {\bf $\mathbf{Isom}(f_\bs^G/M)$} of $f_\bs^G(X)$ over $M$, i.e., for $\ba,{\ba'}\in M^n$ whether $\Spl_M f_\ba^G(X)$ and $\Spl_M f_{\ba'}^G(X)$ are isomorphic over $M$ or not. Since a $k$-generic polynomial covers all $H$-Galois extensions ($H\leq G$) over $M\supset k$ by specializing parameters, the problem {\bf $\mathbf{Isom}(f_\bs^G/M)$} arises naturally. Moreover, we consider the following problem:
\begin{center} {\bf $\mathbf{Isom}^\infty(f_\bs^G/M)$} : for a given $\ba\in M^n$, are there infinitely many\\ \hspace*{3.4cm} ${\ba'}\in M^n$ such that $\Spl_M f_\ba^G(X)=\Spl_M f_{\ba'}^G(X)$\,? \end{center}
Let $\cS_n$ (resp. $\A_n$, $\D_n$, $\C_n$) be the symmetric (resp. the alternating, the dihedral, the cyclic) group of degree $n$ and $\V_4$ the Klein four group ($\V_4\cong\C_2\times\C_2)$. In \cite{HM07} and \cite{HM}, we gave answers to $\mathbf{Int}(f_\bs^G/M)$ and to $\mathbf{Isom^\infty}(f_\bs^G/M)$ for cubic $k$-generic polynomials $f_s^{\C_3}(X)=X^3-sX^2-(s+3)X-1$ and $f_s^{\cS_3}(X)=X^3+sX+s$.
In the present paper we investigate the problems $\mathbf{Int}(f_\bs^G/M)$ and $\mathbf{Isom^\infty}(f_\bs^G/M)$ for quartic generic polynomials $f_\bs^G(X)$ via formal Tschirnhausen transformation and multi-resolvent polynomials.
For $G=\cS_4$, $\D_4$, $\C_4$, $\V_4$, we take the following $k$-generic polynomials \begin{align*} f_{s,t}^{\cS_4}(X)&:=X^4+sX^2+tX+t\, \in k(s,t)[X],\\ f_{s,t}^{\D_4}(X)&:=X^4+sX^2+t\, \in k(s,t)[X],\\ f_{s,u}^{\C_4}(X)&:=X^4+sX^2+\frac{s^2}{u^2+4}\, \in k(s,u)[X],\\ f_{s,v}^{\V_4}(X)&:=X^4+sX^2+v^2\, \in k(s,v)[X], \end{align*} respectively, with two parameters (the least possible number of parameters; cf. \cite{BR97}, \cite[Chapter 8]{JLY02}).
In Section \ref{sePre}, we review some known results about resolvent polynomials and formal Tschirnhausen transformation.
In Section \ref{seS4A4}, we give an answer to $\mathbf{Int}(f_{\bs}^{\cS_4}/M)$ via multi-resolvent polynomial (Theorem \ref{thS4A4}). In Subsection \ref{seIsoS4}, we give a more explicit answer to $\mathbf{Isom}(f_{\bs}^{\cS_4}/M)$ by using formal Tschirnhausen transformation in Theorem \ref{thS4}. A proof of Theorem \ref{thS4} will be given in Subsection \ref{seProof}. A consequence of Theorem \ref{thS4} is the following theorem:
\begin{theorem-nn}[Corollary \ref{cor2}, an answer to $\mathbf{Isom}^\infty(f_{\bs}^{\cS_4}/M)$] Let $M\supset k$ be an infinite field. For $\ba=(a,b)\in M^2$, we assume that $f_\ba^{\cS_4}(X)$ is separable over $M$. Then there exist infinitely many $\ba'=(a',b')\in M^2$ such that $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$. \end{theorem-nn}
In Section \ref{seD4}, we treat the problems $\mathbf{Int}(f_\bs^{\D_4}/M)$, $\mathbf{Isom}(f_\bs^{\D_4}/M)$ and $\mathbf{Isom^\infty}(f_\bs^{\D_4}/M)$. In the case of $\D_4$, $\mathbf{Isom^\infty}(f_\bs^{\D_4}/M)$ has a trivial solution because $\Spl_M f_{a,b}^{\D_4}(X)=\Spl_M f_{ac^2,bc^4}^{\D_4}(X)$ for arbitrary $c\in M\backslash\{0\}$. Thus we consider the problem $\mathbf{Isom^\infty}(f_\bs^{\D_4}/M)$ for $\ba=(a,b)$ and $\ba'=(a',b')$ under the condition $a^2b'-{a'}^2b\neq 0$ or $b'/b\neq c^4$ for any $c\in M$.
\begin{theorem-nn}[Theorem \ref{thD4Hil}] Let $M\supset k$ be a Hilbertian field. For $\ba=(a,b)\in M^2$, we assume that $f_\ba^{\D_4}(X)$ is separable over $M$. Then there exist infinitely many $\ba'=(a',b')\in M^2$ which satisfy that $b'/b$ is not a fourth power in $M$ and $\Spl_M f_\ba^{\D_4}(X)=\Spl_M f_{\ba'}^{\D_4}(X)$. \end{theorem-nn}
In Section \ref{seC4V4}, we deal with the cases of $\C_4$ and of $\V_4$ which are treated by suitably specializing the case of $\D_4$. We also treat reducible cases in Section \ref{seRed}.
Most of the results in the present paper are given with explicit formulas which are intended to be applied elsewhere, and we also give some numerical examples by using our explicit formulas. The calculations of this paper were carried out with Mathematica \cite{Wol03}.
\section{Preliminaries}\label{sePre}
In this section we review some basic facts, and a result of \cite{HM}. \\
\subsection{Resolvent polynomial}\label{subseResolv}
~\\
One of the fundamental tools in computational Galois theory is the resolvent polynomial (cf. the textbooks \cite{Coh93}, \cite{Ade01}). Several methods to compute resolvent polynomials have been developed by many mathematicians (see, for example, \cite{Sta73}, \cite{Gir83}, \cite{SM85}, \cite{Yok97}, \cite{MM97}, \cite{AV00}, \cite{GK00} and the references therein).
Let $M\supset k$ be an infinite field and $\overline{M}$ a fixed algebraic closure of $M$. Let $f(X):=\prod_{i=1}^m(X-\alpha_i) \in M[X]$ be a separable polynomial of degree $m$ with a fixed ordering of the roots $\alpha_1,\ldots,\alpha_m\in \overline{M}$. Information about the splitting field $\Spl_M f(X)$ of $f(X)$ over $M$ and its Galois group can be obtained by using resolvent polynomials.
Let $k[\bx]:=k[x_1,\ldots,x_m]$ be the polynomial ring over $k$ with indeterminates $x_1,\ldots,x_m$. Put $R:=k[\bx, 1/\Delta_\bx]$, where $\Delta_\bx:=\prod_{1\leq i<j\leq m}(x_j-x_i)$. We take a surjective evaluation homomorphism \[ \omega_f : R \longrightarrow k(\alpha_1,\ldots,\alpha_m),\quad \Theta(x_1,\ldots,x_m)\longmapsto \Theta(\alpha_1,\ldots,\alpha_m) \] for $\Theta \in R$. We note that $\omega_f(\Delta_\bx)\neq 0$ from the assumption that $f(X)$ is separable over $M$. The kernel of the map $\omega_f$ is the ideal \[ I_f=\mathrm{ker}(\omega_f)=\{\Theta(x_1,\ldots,x_m)\in R \mid \Theta(\alpha_1,\ldots,\alpha_m)=0\}. \] For $\pi\in \cS_m$, we extend the action of $\pi$ on $m$ letters $\{1,\ldots,m\}$ to $R$ by \[ \pi(\Theta(x_1,\ldots,x_m)):=\Theta(x_{\pi(1)},\ldots,x_{\pi(m)}). \] We define the Galois group of a polynomial $f(X)\in M[X]$ over $M$ by \[ \Gal(f/M):=\{\pi\in \cS_m \mid \pi(I_f)\subseteq I_f\}. \] We write $\Gal(f):=\Gal(f/M)$ for simplicity. The Galois group of the splitting field $\Spl_M f(X)$ of a polynomial $f(X)$ over $M$ is isomorphic to $\Gal(f)$. If we take another ordering of roots $\alpha_{\pi(1)},\ldots,\alpha_{\pi(m)}$ of $f(X)$ with some $\pi\in \cS_m$, the corresponding realization of $\Gal(f)$ is the conjugate of the original one given by $\pi$ in $\cS_m$. Hence, for arbitrary ordering of the roots of $f(X)$, $\Gal(f)$ is determined up to conjugation in $\cS_m$.
\begin{definition} For $H\leq G\leq \cS_m$, an element $\Theta\in R$ is called a $G$-primitive $H$-invariant if
$H=\mathrm{Stab}_G(\Theta)$ $:=$ $\{\pi\in G\ |\ \pi(\Theta)=\Theta\}$. For a $G$-primitive $H$-invariant $\Theta$, the polynomial \[ \mathcal{RP}_{\Theta,G}(X):=\prod_{\overline{\pi}\in G/H}(X-\pi(\Theta))\in R^G[X] \] is called the {\it formal} $G$-relative $H$-invariant resolvent by $\Theta$, and a polynomial \[ \mathcal{RP}_{\Theta,G,f}(X):=\prod_{\overline{\pi}\in G/H}\bigl(X-\omega_f(\pi(\Theta))\bigr) \] is called the $G$-relative $H$-invariant resolvent of $f$ by $\Theta$. \end{definition}
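
As a small numerical illustration of this definition (ours, not taken from the references above), the following Python sketch computes an $\cS_4$-relative $\D_4$-invariant resolvent of a concrete quartic from approximate roots: for $\Theta=x_1x_3+x_2x_4$, whose stabilizer in $\cS_4$ has order $8$, one recovers the classical cubic resolvent.
\begin{verbatim}
# Illustration (ours): the S_4-relative D_4-invariant resolvent of
# f(X) = X^4 + X + 1 by Theta = x1*x3 + x2*x4, computed numerically
# from the roots.  Stab_{S_4}(Theta) has order 8, so [G:H] = 3 and the
# resolvent is the classical cubic resolvent of the quartic.
import numpy as np
from itertools import permutations

r = np.roots([1, 0, 0, 1, 1])                    # roots of X^4 + X + 1

def theta(p):
    x = r[list(p)]
    return x[0]*x[2] + x[1]*x[3]

vals = {complex(round(theta(p).real, 8), round(theta(p).imag, 8))
        for p in permutations(range(4))}         # 3 distinct values
coeffs = np.real_if_close(np.poly(sorted(vals, key=lambda z: z.real)))
print(np.round(coeffs, 5))       # approx [1, 0, -4, -1], i.e. Y^3 - 4Y - 1
\end{verbatim}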
The following is fundamental in the theory of resolvent polynomials (cf. \cite[p.95]{Ade01}).
\begin{theorem}\label{thfun} For $H\leq G\leq \cS_m$, let $\Theta$ be a $G$-primitive $H$-invariant. Assume that $\Gal(f)\leq G$. Suppose that $\mathcal{RP}_{\Theta,G,f}(X)$ is decomposed into a product of powers of distinct irreducible polynomials as $\mathcal{RP}_{\Theta,G,f}(X)=\prod_{i=1}^l h_i^{e_i}(X)$ in $M[X]$. Then we have a bijection \begin{align*} \Gal(f)\backslash G/H\quad &\longrightarrow \quad \{h_1^{e_1}(X),\ldots,h_l^{e_l}(X)\},\\ \Gal(f)\, \pi\, H\quad &\longmapsto\quad h_\pi(X) =\prod_{\tau H\subseteq \Gal(f)\,\pi\,H}\bigl(X-\omega_{f}(\tau(\Theta))\bigr) \end{align*} where the product is taken over the left cosets $\tau H$ of $H$ in $G$ contained in $\Gal(f)\, \pi\, H$, that is, over $\tau=\pi_\sigma \pi$ where $\pi_\sigma$ runs through a system of representatives of the left cosets of $\Gal(f) \cap \pi H\pi^{-1}$ in $\Gal(f)$, and each $h_\pi(X)$ is irreducible or a power of an irreducible polynomial with $\mathrm{deg}(h_\pi(X))$
$=$ $|\Gal(f)\, \pi\, H|/|H|$ $=$ $|\Gal(f)|/|\Gal(f)\cap \pi H\pi^{-1}|$. \end{theorem}
\begin{corollary} If $\Gal(f)\leq \pi H\pi^{-1}$ for some $\pi\in G$ then $\mathcal{RP}_{\Theta,G,f}(X)$ has a linear factor over $M$. Conversely, if $\mathcal{RP}_{\Theta,G,f}(X)$ has a non-repeated linear factor over $M$ then there exists $\pi\in G$ such that $\Gal(f)\leq \pi H\pi^{-1}$. \end{corollary}
\begin{remark}\label{remGir} When the resolvent polynomial $\mathcal{RP}_{\Theta,G,f}(X)$ has a repeated factor, there always exists a suitable Tschirnhausen transformation $\hat{f}$ of $f$ over $M$ (resp. $X-\hat{\Theta}$ of $X-\Theta$ over $k$) such that $\mathcal{RP}_{\Theta,G,\hat{f}}(X)$ (resp. $\mathcal{RP}_{\hat{\Theta},G,f}(X)$) has no repeated factors (cf. \cite{Gir83}, \cite[Alg. 6.3.4]{Coh93}, \cite{Col95}). \end{remark}
In the case where $\mathcal{RP}_{\Theta,G,f}(X)$ has no repeated factors, we have the following theorem:
\begin{theorem} For $H\leq G\leq \cS_m$, let $\Theta$ be a $G$-primitive $H$-invariant. We assume $\Gal(f)\leq G$ and $\mathcal{RP}_{\Theta,G,f}(X)$ has no repeated factors. Then the following two assertions hold\,{\rm :}\\ {\rm (i)} For $\pi\in G$, the fixed group of the field $M\bigl(\omega_{f}(\pi(\Theta))\bigr)$ corresponds to $\Gal(f)\cap \pi H\pi^{-1}$. Indeed the fixed group of $\Spl_M \mathcal{RP}_{\Theta,G,f}(X)$ corresponds to $\Gal(f)\cap \bigcap_{\pi\in G}\pi H\pi^{-1}$\,{\rm ;} \\ {\rm (ii)} let $\varphi : G\rightarrow \cS_{[G:H]}$ denote the permutation representation of $G$ on the left cosets of $G/H$ given by the left multiplication. Then we have a realization of the Galois group of $\Spl_M \mathcal{RP}_{\Theta,G,f}(X)$ as a subgroup of $\cS_{[G:H]}$ by $\varphi(\Gal(f))$. \\ \end{theorem}
\subsection{Formal Tschirnhausen transformation}\label{subseTschirn}
~\\
We recall the geometric interpretation of a Tschirnhausen transformation which is given in \cite{HM}. Let $f(X)$ and $g(X)$ be monic separable polynomials of degree $n$ in $M[X]$ and $\alpha_1,\ldots,\alpha_n$ the fixed ordering roots of $f(X)$ in $\overline{M}$. A Tschirnhausen transformation of $f(X)$ over $M$ is a polynomial of the form \[ g(X)=\prod_{i=1}^n \bigl(X-(c_0+c_1\alpha_i+\cdots+c_{n-1}\alpha_i^{n-1})\bigr),\ c_j \in M. \] Two polynomials $f(X)$ and $g(X)$ in $M[X]$ are Tschirnhausen equivalent over $M$ if they are Tschirnhausen transformations over $M$ of each other. For two irreducible separable polynomials $f(X)$ and $g(X)$ in $M[X]$, $f(X)$ and $g(X)$ are Tschirnhausen equivalent over $M$ if and only if the quotient fields $M[X]/(f(X))$ and $M[X]/(g(X))$ are isomorphic over $M$.
In order to obtain an answer to the field intersection problem of $k$-generic polynomials via multi-resolvent polynomials, we first treat a general polynomial whose roots are $n$ indeterminates $x_1,\ldots,x_n$: \begin{align*} f_\bs(X)\, &=\, \prod_{i=1}^n(X-x_i)\, =\, X^n-s_1X^{n-1}+s_2X^{n-2}+\cdots+(-1)^n s_n\ \in k[\bs][X] \end{align*} where $k[x_1,\ldots,x_n]^{\cS_n}=k[\bs]:=k[s_1,\ldots,s_n], \bs=(s_1, \ldots, s_n),$ and $s_i$ is the $i$-th elementary symmetric function in $n$ variables $\bx=(x_1,\ldots,x_n)$.
Let $R_\bx:=k[x_1,\ldots,x_n]$ and $R_\by:=k[y_1,\ldots,y_n]$ be polynomial rings over $k$. Put $R_{\bx,\by}:=k[\bx,\by,1/\Delta_\bx,1/\Delta_\by]$, where $\Delta_\bx:=\prod_{1\leq i<j\leq m}(x_j-x_i)$ and $\Delta_\by:=\prod_{1\leq i<j\leq m}(y_j-y_i)$. We define an involution $\iota$ which exchanges the indeterminates $x_i$'s and the $y_i$'s: \begin{align} \iota\ :\ R_{\bx,\by}\longrightarrow R_{\bx,\by},\ x_i\longmapsto y_i,\ y_i\longmapsto x_i,\quad (i=1,\ldots,n).\label{defiota} \end{align} We take another general polynomial $f_\bt(X):=\iota(f_\bs(X))\in k[\bt][X], \bt=(t_1,\ldots,t_n)$ with roots $y_1,\ldots,y_n$ where $t_i=\iota(s_i)$ is the $i$-th elementary symmetric function in $\by=(y_1,\ldots,y_n)$. We put \[ K\ :=\ k(\bs,\bt); \] it is regarded as the rational function field over $k$ with $2n$ variables. For simplicity, we put \[ f_{\bs,\bt}(X):=f_\bs(X)f_\bt(X). \] The polynomial $f_{\bs,\bt}(X)$ of degree $2n$ is defined over $K$. We denote \begin{align*} \Gs\, :=\, \Gal(f_\bs/K),\quad \Gt\, :=\, \Gal(f_\bt/K),\quad G_{\mathbf{s},\mathbf{t}}\, :=\, \Gal(f_{\bs,\bt}/K). \end{align*}
Then we have $G_{\mathbf{s},\mathbf{t}}=\Gs\times\Gt, \Gs\cong \Gt\cong \cS_n$ and $k(\bx,\by)^{G_{\mathbf{s},\mathbf{t}}}=K$.
We intend to apply the results of the previous subsection for $m=2n$, $G=G_{\mathbf{s},\mathbf{t}}\leq \cS_{2n}$ and $f=f_{\bs,\bt}$.
Note that over the field $\Spl_K f_{\bs,\bt}(X)=k(\bx,\by)$, there exist $n!$ Tschirnhausen transformations from $f_\bs(X)$ to $f_\bt(X)$ with respect to $y_{\pi(1)},\ldots,y_{\pi(n)}$ for $\pi\in \cS_n$. We study the field of definition of each Tschirnhausen transformation from $f_\bs(X)$ to $f_\bt(X)$. Let \begin{align*} D:= \left( \begin{array}{ccccc} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1}\end{array}\right) \end{align*} be the Vandermonde matrix of size $n$. The matrix $D\in M_n(k(\bx))$ is invertible because the determinant of $D$ equals ${\rm det}\, D=\Delta_\bx$. The field $k(\bs)(\Delta_\bx)$ is a quadratic extension of $k(\bs)$ which corresponds to the fixed field of the alternating group of degree $n$.
We define the $n$-tuple $(u_0(\mathbf{x},\mathbf{y}),\ldots, u_{n-1}(\mathbf{x},\mathbf{y}))\in (R_{\bx,\by})^n$ by \begin{align} \left(\begin{array}{c}u_0(\mathbf{x},\mathbf{y})\\ u_1(\mathbf{x}, \mathbf{y})\\ \vdots \\ u_{n-1}(\mathbf{x},\mathbf{y})\end{array}\right) :=D^{-1}\left(\begin{array}{c}y_1\\ y_2\\ \vdots \\ y_n\end{array}\right). \label{defu} \end{align} It follows from Cramer's rule that \begin{align*} u_i(\mathbf{x},\mathbf{y})=\Delta_\bx^{-1}\cdot\mathrm{det} \left(\begin{array}{cccccccc} 1 & x_1 & \cdots & x_1^{i-1} & y_1 & x_1^{i+1} & \cdots & x_1^{n-1}\\ 1 & x_2 & \cdots & x_2^{i-1} & y_2 & x_2^{i+1} & \cdots & x_2^{n-1}\\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\ 1 & x_n & \cdots & x_n^{i-1} & y_n & x_n^{i+1} & \cdots & x_n^{n-1} \end{array}\right). \end{align*} In order to simplify the presentation, we write \[ u_i:=u_i(\mathbf{x},\mathbf{y}),\quad (i=0,\ldots,n-1). \]
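
For instance, for $n=3$ the coefficients $u_0,u_1,u_2$ can be obtained by solving the Vandermonde system symbolically; the following SymPy sketch (an illustration only) does this and checks the defining relation.
\begin{verbatim}
# Illustration (ours) of the definition of the u_i for n = 3: solve the
# Vandermonde system D u = y symbolically and check the defining relation.
from sympy import symbols, Matrix, simplify

x1, x2, x3, y1, y2, y3 = symbols('x1 x2 x3 y1 y2 y3')
D = Matrix([[1, x1, x1**2],
            [1, x2, x2**2],
            [1, x3, x3**2]])
u = D.solve(Matrix([y1, y2, y3])).applyfunc(simplify)   # (u_0, u_1, u_2)

print(u[1])                                             # Cramer-rule expression
print(simplify(u[0] + u[1]*x1 + u[2]*x1**2 - y1))       # expected: 0
\end{verbatim}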
The Galois group $G_{\mathbf{s},\mathbf{t}}$ acts on the orbit $\{\pi(u_i)\ |\ \pi\in G_{\mathbf{s},\mathbf{t}} \}$ via regular representation from the left. However this action is not faithful.
We put \[
H_{\mathbf{s},\mathbf{t}}:=\{(\pi_\bx, \pi_\by)\in G_{\mathbf{s},\mathbf{t}}\ |\ \pi_\bx(i)=\pi_\by(i)\ \mathrm{for}\ i=1,\ldots,n \}\cong \cS_n. \] If $\pi \in H_{\mathbf{s},\mathbf{t}}$ then we have $\pi(u_i)=u_i$ for $i=0,\ldots,n-1$. Indeed we see the following lemma:
\begin{lemma}\label{stabil} For $i$, $0\leq i\leq n-1$, $u_i$ is a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. \end{lemma}
Let $\Theta:=\Theta(\bx,\by)$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. Let $\overline{\pi}=\piH_{\mathbf{s},\mathbf{t}}$ be a left coset of $H_{\mathbf{s},\mathbf{t}}$ in $G_{\mathbf{s},\mathbf{t}}$.
The group $G_{\mathbf{s},\mathbf{t}}$ acts on the set $\{ \pi(\Theta)\ |\ \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\}$ transitively from the left through the action on the set $G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$ of left cosets.
Each of the sets $\{ \overline{(1,\pi_\by)}\ |\ (1,\pi_\by)\in G_{\mathbf{s},\mathbf{t}}\}$
and $\{ \overline{(\pi_\bx,1)}\ |\ (\pi_\bx,1)\in G_{\mathbf{s},\mathbf{t}}\}$ forms a complete residue system of $G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$, and hence the subgroups $\Gs$ and $\Gt$ of $G_{\mathbf{s},\mathbf{t}}$ act on the set
$\{ \pi(\Theta)\ |\ \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\}$ transitively. For $\overline{\pi}=\overline{(1,\pi_\by)}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$, we obtain the following equality from the definition (\ref{defu}): \[ y_{\pi_\by(i)} = \pi_\by(u_0)+\pi_\by(u_1) x_i+\cdots+\pi_\by(u_{n-1})x_i^{n-1}\ \mathrm{for}\ i=1,\ldots,n. \]
The set $\{(\pi(u_0),\ldots,\pi(u_{n-1}))\ |\ \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\}$ gives coefficients of $n!$ different Tschirnhausen transformations from $f_\bs(X)$ to $f_\bt(X)$ each of which is defined over $K(\pi(u_0),\ldots,\pi(u_{n-1}))$, respectively. We call $K(\pi(u_0),\ldots,\pi(u_{n-1})), (\overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}})$ a field of formal Tschirnhausen coefficients from $f_\bs(X)$ to $f_\bt(X)$.
We put $v_i:=\iota(u_i)$, for $i=0,\ldots,n-1$. Then $v_i$ is also a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant, and $K(\pi(v_0),\ldots,\pi(v_{n-1}))$ gives a field of formal Tschirnhausen coefficients from $f_\bt(X)$ to $f_\bs(X)$.
\begin{proposition}\label{prop1} Let $\Theta$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. Then we have $k(\bx,\by)^{\piH_{\mathbf{s},\mathbf{t}} \pi^{-1}}$ $=$ $K(\pi(u_0),\ldots,\pi(u_{n-1}))$ $=$ $K(\pi(\Theta))$ and $[K(\pi(\Theta)) : K]=n!$ for each $\overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$. \end{proposition}
Hence, for each of the $n!$ fields $K(\pi(\Theta))$, we have $\Spl_{K(\pi(\Theta))} f_\bs(X)=\Spl_{K(\pi(\Theta))} f_\bt(X), (\overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}})$. We also obtain the following proposition:
\begin{proposition}\label{propLL} Let $\Theta$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. Then we have \begin{align*} &{\rm (i)}\ K(\bx)\cap K(\pi(\Theta))=K(\by)\cap K(\pi(\Theta))=K\quad \textrm{for}\quad \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\,{\rm ;}\\ &{\rm (ii)}\ K(\bx,\by)=K(\bx,\pi(\Theta))=K(\by,\pi(\Theta))\quad \textrm{for}\quad \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\,{\rm ;}\\
&{\rm (iii)}\ K(\bx,\by)=K(\pi(\Theta)\ |\ \overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}). \end{align*} \end{proposition}
We consider the formal $G_{\mathbf{s},\mathbf{t}}$-relative $H_{\mathbf{s},\mathbf{t}}$-invariant resolvent polynomial of degree $n!$ by $\Theta$: \[ \mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}}}(X)=\prod_{\overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}}(X-\pi(\Theta))\in k(\bs,\bt)[X]. \] It follows from Proposition \ref{prop1} that $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}}}(X)$ is irreducible over $k(\bs,\bt)$. From Proposition \ref{propLL} we have one of the basic results:
\begin{theorem}\label{th-gen} The polynomial $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}}}(X)$ is $k$-generic for $\cS_n\times \cS_n$. \\ \end{theorem}
\subsection{Field intersection problem $\mathrm{Int}(f_\bs/M)$}\label{subseInt}
~\\
For $\ba=(a_1,\ldots,a_n), \bb=(b_1,\ldots,b_n)\in M^n$, we fix the order of roots $\alpha_1,\ldots,\alpha_n$ (resp. $\beta_1,\ldots,\beta_n$) of $f_\ba(X)$ (resp. $f_\bb(X)$) in $\overline{M}$. Put $f_{\ba,\bb}(X):=f_\ba(X)f_\bb(X)\in M[X]$. We denote \begin{align*} L_{\ba} := M(\alpha_1,\ldots,\alpha_n),\quad L_{\bb} := M(\beta_1,\ldots,\beta_n). \end{align*} Then we have \begin{align*} L_{\ba} = \Spl_{M} f_\ba(X),\quad L_{\bb} = \Spl_{M} f_\bb(X),\quad L_{\ba}\,L_{\bb} = \Spl_{M} f_{\ba,\bb}(X). \end{align*} We define a specialization homomorphism $\omega_{f_{\ba,\bb}}$ by
\begin{align*} \omega_{f_{\ba,\bb}} : R_{\bx,\by} &\longrightarrow M(\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_n)=L_{\ba}\,L_{\bb},\\ \Theta(\bx,\by) &\longmapsto\Theta(\alpha_1,\ldots,\alpha_n,\beta_1,\ldots,\beta_n). \end{align*}
We put \[ D_\ba:=\omega_{f_{\ba,\bb}}(\Delta_\bx^2),\quad D_\bb:=\omega_{f_{\ba,\bb}}(\Delta_\by^2). \] We always assume that both of the polynomials $f_\ba(X)$ and $f_\bb(X)$ are separable over $M$, i.e. $D_\ba\cdot D_\bb\neq 0$. We also put \begin{align*} G_\ba:=\Gal(f_{\ba}/M),\quad G_\bb:=\Gal(f_{\bb}/M),\quad G_{\ba,\bb}:=\Gal(f_{\ba,\bb}/M). \end{align*} Then we may naturally regard $G_{\ba,\bb}$ as a subgroup of $G_{\mathbf{s},\mathbf{t}}$. For $\overline{\pi} \in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$, we put \begin{align} c_{i,\pi}:=\omega_{f_{\ba,\bb}}(\pi(u_i)),\quad d_{i,\pi}:=\omega_{f_{\ba,\bb}}\bigl(\pi(\iota(u_i))\bigr),\quad (i=0,\ldots,n-1).\label{defc} \end{align} Then it follows from the definition (\ref{defu}) of $u_i$ that \begin{align*} \beta_{\pi_\by(i)}\,&=\, c_{0,\pi} + c_{1,\pi}\,\alpha_{\pi_\bx(i)} + \cdots + c_{n-1,\pi}\,\alpha_{\pi_\bx(i)}^{n-1},\\ \alpha_{\pi_\bx(i)}\,&=\, d_{0,\pi} + d_{1,\pi}\,\beta_{\pi_\by(i)} + \cdots + d_{n-1,\pi}\,\beta_{\pi_\by(i)}^{n-1} \end{align*} for each $i = 1, \ldots, n$.
For each $\overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$, there exists a Tschirnhausen transformation from $f_\ba(X)$ to $f_\bb(X)$ over the field $M(c_{0,\pi},\ldots,c_{n-1,\pi})$, and the $n$-tuple $(d_{0,\pi},\ldots,d_{n-1,\pi})$ gives the coefficients of a transformation of the inverse direction. From the assumption $D_\ba\cdot D_\bb\neq 0$, we see the following elementary lemmas (see, for example, \cite[Lemma 3.1]{HM}):
\begin{lemma}\label{lemM} Let $M'/M$ be a field extension. For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, if $f_\bb(X)$ is a Tschirnhausen transformation of $f_\ba(X)$ over $M'$, then $f_\ba(X)$ is a Tschirnhausen transformation of $f_\bb(X)$ over $M'$. Indeed we have $M(c_{0,\pi},\ldots,c_{n-1,\pi})=M(d_{0,\pi},\ldots,d_{n-1,\pi})$ for every $\overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$. \end{lemma}
\begin{lemma}\label{lemMM} For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, the quotient algebras $M[X]/(f_\ba(X))$ and $M[X]/(f_\bb(X))$ are $M$-isomorphic if and only if there exists $\pi\in G_{\mathbf{s},\mathbf{t}}$ such that $M=M(c_{0,\pi},\ldots,c_{n-1,\pi})$. \end{lemma}
In order to obtain an answer to $\mathbf{Int}(f_\bs/M)$ we study the $n!$ fields $M(c_{0,\pi},\ldots,c_{n-1,\pi})$ of Tschirnhausen coefficients from $f_\ba(X)$ to $f_\bb(X)$ over $M$.
\begin{proposition}[{\cite[Proposition 3.2]{HM}}]\label{propc} Under the assumption, $D_\ba\cdot D_\bb\neq 0$, we have the following two assertions\,{\rm :} \begin{align*} &{\rm (i)}\ \ \Spl_{M(c_{0,\pi},\ldots,c_{n-1,\pi})} f_\ba(X) =\Spl_{M(c_{0,\pi},\ldots,c_{n-1,\pi})} f_\bb(X)\, \ \textit{for each}\ \,\overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}\, {\rm ;}\\ &{\rm (ii)}\ L_\ba L_\bb=L_\ba\, M(c_{0,\pi},\ldots,c_{n-1,\pi})=L_\bb\, M(c_{0,\pi},\ldots,c_{n-1,\pi})\, \ \textit{for each}\ \, \overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}. \end{align*} \end{proposition}
Let $\Theta$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. Applying the specialization $\omega_{f_{\ba,\bb}}$, we have a $G_{\mathbf{s},\mathbf{t}}$-relative $H_{\mathbf{s},\mathbf{t}}$-invariant resolvent polynomial of $f_{\ba,\bb}$ by $\Theta$: \begin{align*} \mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)\, =\, \prod_{\overline{\pi}\in G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}} \bigl(X-\omega_{f_{\ba,\bb}}(\pi(\Theta))\bigr)\in M[X]. \end{align*} A polynomial of this kind is called (absolute) multi-resolvent (cf. \cite{GLV88}, \cite{RV99}, \cite{Val}).
\begin{proposition}[{\cite[Proposition 3.7]{HM}}]\label{prop12} Let $\Theta$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, suppose that the resolvent polynomial $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)$ has no repeated factors. Then the following two assertions hold\,{\rm :}\\ $(\mathrm{i})$\ $M(c_{0,\pi},\ldots,c_{n-1,\pi})=M\bigl(\omega_{f_{\ba,\bb}}(\pi(\Theta))\bigr)$ for each $\overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$\,{\rm ;}\\ $(\mathrm{ii})$\ ${\rm Spl}_M f_{\ba,\bb}(X)
=M(\omega_{f_{\ba,\bb}}(\pi(\Theta))\ |\ \overline{\pi}\inG_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}})$. \end{proposition}
We also get the followings (see, for example, \cite[Proposition 3.12, Corollary 3.13]{HM}):
\begin{proposition}\label{propAn} For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, if $\sqrt{D_\ba\cdot D_\bb}\in M$ then the polynomial $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}$ splits into two factors of degree $n!/2$ over $M$ which are not necessarily irreducible. \end{proposition}
\begin{corollary}\label{corAn} For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, if $G_\ba$, $G_\bb\subset \A_n$ then $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}$ splits into two factors of degree $n!/2$ which are not necessarily irreducible. \end{corollary}
\begin{definition} For a separable polynomial $f(X)\in k[X]$ of degree $d$, the decomposition type of $f(X)$ over $M$, denoted by {\rm DT}$(f/M)$, is defined as the partition of $d$ induced by the degrees of the irreducible factors of $f(X)$ over $M$. We define the decomposition type {\rm DT}$(\mathcal{RP}_{\Theta,G,f}/M)$ of $\mathcal{RP}_{\Theta,G,f}(X)$ over $M$ by {\rm DT}$(\mathcal{RP}_{\Theta,G,\hat{f}}/M)$ where $\hat{f}(X)$ is a Tschirnhausen transformation of $f(X)$ over $M$ which satisfies that $\mathcal{RP}_{\Theta,G,\hat{f}}(X)$ has no repeated factors (cf. Remark \ref{remGir}). \end{definition}
We write $\mathrm{DT}(f):=\mathrm{DT}(f/M)$ for simplicity. From Theorem \ref{thfun}, the decomposition type $\mathrm{DT}(\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}})$ coincides with the partition of $n!$ induced by the lengths of the orbits of $G_{\mathbf{s},\mathbf{t}}/H_{\mathbf{s},\mathbf{t}}$ under the action of $\Gal(f_{\ba,\bb})$. Hence, by Proposition \ref{prop12}, $\mathrm{DT}(\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}})$ gives the degrees of $n!$ fields of Tschirnhausen coefficients $M(c_{0,\pi},\ldots,c_{n-1,\pi})$ from $f_\ba(X)$ to $f_\bb(X)$ over $M$; the degree of $M(c_{0,\pi},\ldots,c_{n-1,\pi})$
over $M$ is equal to $|\Gal(f_{\ba,\bb})|/|\Gal(f_{\ba,\bb})\cap\piH_{\mathbf{s},\mathbf{t}}\pi^{-1}|$.
We conclude that the decomposition type of the resolvent polynomial $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)$ over $M$ gives us information about the field intersection problem for $f_\bs(X)$ through the degrees of the fields of Tschirnhausen coefficients $M(c_{0,\pi},\ldots,c_{n-1,\pi})$ over $M$ which is determined by the degeneration of the Galois group $\Gal(f_{\ba,\bb})$ under the specialization $(\bs, \bt) \mapsto (\ba, \bb)$.
\begin{theorem}[{\cite[Theorem 3.8]{HM}}]\label{throotf} Let $\Theta$ be a $G_{\mathbf{s},\mathbf{t}}$-primitive $H_{\mathbf{s},\mathbf{t}}$-invariant. For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, the following conditions are equivalent\,{\rm :}\\ {\rm (i)} The quotient algebras $M[X]/(f_\ba(X))$ and $M[X]/(f_\bb(X))$ are $M$-isomorphic\,{\rm ;}\\ {\rm (ii)} The decomposition type ${\rm DT}(\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}})$ over $M$ includes $1$. \end{theorem}
In the case where $G_\ba$ and $G_\bb$ are isomorphic to a transitive subgroup $G$ of $\cS_n$ and all subgroups of $G$ of index $n$ are conjugate in $G$, the condition that the quotient algebras $M[X]/(f_\ba(X))$ and $M[X]/(f_\bb(X))$ are $M$-isomorphic is equivalent to the condition that $\Spl_M f_\ba(X)$ and $\Spl_M f_\bb(X)$ coincide. Hence we obtain an answer to the field isomorphism problem via the resolvent polynomial $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)$.
\begin{corollary}[An answer to $\mathbf{Isom}(f_\bs^G/M)$]\label{cor1} For $\ba,\bb \in M^n$ with $D_\ba\cdot D_\bb\neq 0$, we assume that both of $f_\ba(X)$ and $f_\bb(X)$ are irreducible over $M$, that $G_\ba$ and $G_\bb$ are isomorphic to $G$ and that all subgroups of $G$ with index $n$ are conjugate in $G$. Then ${\rm DT}(\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}})$ includes $1$ if and only if $\Spl_M f_\ba(X)$ and $\Spl_M f_\bb(X)$ coincide. \end{corollary}
For subgroups $H_1$ and $H_2$ of $\cS_n$, we obtain a $k$-generic polynomial for $H_1\times H_2$ as a generalization of Theorem \ref{th-gen}.
\begin{theorem}[{\cite[Theorem 3.10]{HM}}]\label{thgen} Let $M=k(q_1,\ldots,q_l,r_1,\ldots,r_m)$, $(1\leq l,\, m\leq n-1)$ be the rational function field over $k$ with $(l+m)$ variables. For $\ba\in {k(q_1,\ldots,q_l)}^n, \bb\in {k(r_1,\ldots,r_m)}^n$, we assume that $f_\ba(X)\in M[X]$ and $f_\bb(X)\in M[X]$ are $k$-generic polynomials for $H_1$ and $H_2$, respectively. If $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)\in M[X]$ has no repeated factors, then $\mathcal{RP}_{\Theta,G_{\mathbf{s},\mathbf{t}},f_{\ba,\bb}}(X)$ is a $k$-generic polynomial for $H_1\times H_2$ which is not necessarily irreducible. \end{theorem}
\section{The cases of $\cS_4$ and of $\A_4$}\label{seS4A4}
Let $M$ be an overfield of $k$ of characteristic $\neq 2$. We take a $k$-generic polynomial \[ f_{s,t}^{\cS_4}(X)=X^4+sX^2+tX+t\in k(s,t)[X] \] for $\cS_4$. The discriminant of $f_{s,t}^{\cS_4}(X)$ with respect to $X$ is given by \[ D_{s,t}:=t(16s^4 - 128s^2t - 4s^3t + 256t^2 + 144st^2 - 27t^3). \] For $\ba=(a,b)\in M^2$, we always assume that $f_\ba^{\cS_4}(X)$ is separable over $M$, i.e. $D_\ba\neq 0$.
From the definition, for a general quartic polynomial \[ g_4(X)=X^4+a_1X^3+a_2X^2+a_3X+a_4\in k[X],\quad (a_1,a_2,a_3,a_4\in M), \] there exist $a,b\in M$ such that $\Spl_M f_{a,b}^{\cS_4}(X)=\Spl_M g_4(X)$. Indeed we may take such $a,b\in M$ as follows: The polynomials $g_4(X)$ and \begin{align*} h_4(X):=g_4(X-a_1/4)=X^4+A_2X^2+A_3X+A_4 \end{align*} have the same splitting field over $M$, where \[ A_2=\frac{-3a_1^2 + 8a_2}{8},\ A_3=\frac{a_1^3 - 4a_1a_2 + 8a_3}{8},\ A_4=\frac{-3a_1^4 + 16a_1^2a_2 - 64a_1a_3 + 256a_4}{256}. \] If we put \[ a:=\frac{A_2A_3^2}{A_4^2}\quad \mathrm{and}\quad b:=\frac{A_3^4}{A_4^3} \] then the polynomials $h_4(X)$ and \[ X^4+aX^2+bX+b=\Bigl(\frac{A_3}{A_4}\Bigr)^4\cdot h_4\Bigl(\frac{A_4}{A_3}X\Bigr) \] have the same splitting field over $M$. Hence we see $\Spl_M f_{a,b}^{\cS_4}(X)=\Spl_M g_4(X)$.
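
The reduction above is straightforward to verify symbolically; the following SymPy sketch (ours; the original calculations were carried out with Mathematica) checks the displayed formulas for $A_2$, $A_3$, $A_4$ and the final scaling identity.
\begin{verbatim}
# Symbolic check (ours) of the reduction above with SymPy.
from sympy import symbols, expand, simplify

X, A2, A3, A4 = symbols('X A2 A3 A4')
h4 = X**4 + A2*X**2 + A3*X + A4
a, b = A2*A3**2/A4**2, A3**4/A4**3
lhs = expand((A3/A4)**4 * h4.subs(X, (A4/A3)*X))
print(simplify(lhs - (X**4 + a*X**2 + b*X + b)))              # expected: 0

a1, a2, a3, a4 = symbols('a1 a2 a3 a4')
g4 = X**4 + a1*X**3 + a2*X**2 + a3*X + a4
h  = expand(g4.subs(X, X - a1/4))
print(simplify(h.coeff(X, 3)))                                # expected: 0
print(simplify(h.coeff(X, 2) - (-3*a1**2 + 8*a2)/8))          # expected: 0
print(simplify(h.coeff(X, 1) - (a1**3 - 4*a1*a2 + 8*a3)/8))   # expected: 0
print(simplify(h.coeff(X, 0)
      - (-3*a1**4 + 16*a1**2*a2 - 64*a1*a3 + 256*a4)/256))    # expected: 0
\end{verbatim}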
Let $\cS_4$ act on $k(x_1,x_2,x_3,x_4)$ by $\pi(x_i)=x_{\pi(i)}, (\pi\in \cS_4)$. We put \[ \sigma:=(1234),\quad \rho_1:=(123),\quad \rho_2:=(234),\quad \omega:=(12)\in \cS_4. \] For the field $k(\bx,\by):=k(x_1,\ldots,x_4,y_1,\ldots,y_4)$, we take the interchanging involution \begin{align*} \iota\ :\ k(\bx,\by)\,\longrightarrow\,k(\bx,\by),\quad x_i\longmapsto y_i,\ y_i\longmapsto x_i,\ (i=1,\ldots,4) \end{align*} as in (\ref{defiota}). Put $(\sigma',\rho_1',\rho_2',\omega'):= (\iota^{-1}\sigma\iota,\iota^{-1}\rho_1\iota,\iota^{-1}\rho_2\iota,\iota^{-1}\omega\iota)$ then $\sigma',\rho_1',\rho_2',\omega'\in\mathrm{Aut}_k(k(\by))$. For simplicity we write \begin{align*} \cS_4&=\langle\sigma,\omega\rangle,& \cS_4'&=\langle\sigma',\omega'\rangle,& \cS_4''&=\langle\sigma\sigma',\omega\omega'\rangle,\\ \A_4&=\langle\rho_1,\rho_2\rangle,& \A_4'&=\langle\rho_1',\rho_2'\rangle,& \A_4''&=\langle\rho_1\rho_1',\rho_2\rho_2'\rangle. \end{align*} Note that $\cS_4''$ $(\cong\cS_4)$ and $\A_4''$ $(\cong\A_4)$ are subgroups of $\cS_4\times\cS_4'$.
We take an $\cS_4\times \cS_4'$-primitive $\cS_4''$-invariant \[ P:=x_1y_1+x_2y_2+x_3y_3+x_4y_4 \] and we put $f_{\bs,\bs'}^{\cS_4}(X):=f_\bs^{\cS_4}(X)f_{\bs'}^{\cS_4}(X)$ where $(\bs,\bs')=(s,t,s',t')$. Then we get an $\cS_4\times \cS_4'$-relative $\cS_4''$-invariant resolvent polynomial of $f_{\bs,\bs'}^{\cS_4}(X)$ by $P$ as follows: \begin{align} \R_{\bs,\bs'}(X):=\mathcal{RP}_{P,\cS_4\times \cS_4',f_{\bs,\bs'}^{\cS_4}} =\Bigl(G_{\bs,\bs'}^1(X)\Bigr)^2-D_{\bs}D_{\bs'}\Bigl(G_{\bs,\bs'}^2(X)\Bigr)^2\in k(\bs,\bs')[X]\label{polyR} \end{align} where \begin{align*} G_{\bs,\bs'}^1(X)={}&{}X^{12}-8s{s'}X^{10}-24t{t'}X^9+(11s^2{s'}^2+4t{s'}^2+4s^2{t'}-80t{t'})X^8\\ &+128st{s'}{t'}X^7+c_6X^6-64t{t'}(3s^2{s'}^2+4t{s'}^2+4s^2{t'}-16t{t'})X^5+\textstyle{\sum_{i=0}^4 c_i X^i},\\ G_{\bs,\bs'}^2(X)=&-5X^6+12s{s'}X^4+8t{t'}X^3+(-9s^2{s'}^2+20t{s'}^2+20s^2{t'}-16t{t'})X^2\\ &-32st{s'}{t'}X+2s^3{s'}^3-8st{s'}^3+9t^2{s'}^3-8s^3{s'}{t'}+32st{s'}{t'}\\ &-4t^2{s'}{t'}+9s^3{t'}^2-4st{t'}^2-(t^2{t'}^2/2) \end{align*} and $c_6,c_4,c_3,\ldots,c_0\in k(\bs,\bs')$ are given by \begin{align*} c_6=&-2\bigl{[}8st{s'}^3+13t^2{s'}^3-84t^2{s'}{t'}\bigr{]}-28s^3{s'}^3+576st{s'}{t'} +57t^2{t'}^2,\\ c_4=&\ 8\,\big{[}3s^2t{s'}^4-14t^2{s'}^4+6st^2{s'}^4+304t^2{s'}^2{t'}-4st^2{s'}^2{t'} -208st^2{t'}^2\bigl{]}\\ &+17s^4{s'}^4-1216s^2t{s'}^2{t'}-3840t^2{t'}^2-380st^2{s'}{t'}^2,\\ c_3=&-8t{t'}\bigl(-2\bigl{[}40st{s'}^3+9t^2{s'}^3+60t^2{s'}{t'}\bigr{]}-12s^3{s'}^3+832st{s'}{t'} +37t^2{t'}^2\bigr),\\ c_2=&-2\,\big{[}16s^3t{s'}^5-96st^2{s'}^5+9s^2t^2{s'}^5+108t^3{s'}^5+1280st^2{s'}^3{t'}\\ &+168s^2t^2{s'}^3{t'}-288t^3{s'}^3{t'}-1328s^2t^2{s'}{t'}^2+1472t^3{s'}{t'}^2 -270t^3{s'}^2{t'}^2]\\ &-4s^5{s'}^5+768s^3t{s'}^3{t'}+7168st^2{s'}{t'}^2+141s^2t^2{s'}^2{t'}^2+1616t^3{t'}^3,\\ c_1=&\ 8t{t'}\bigl(-8\bigl{[}3s^2t{s'}^4+18t^2{s'}^4+48t^2{s'}^2{t'}+36st^2{s'}^2{t'} -16st^2{t'}^2\bigr{]}\\ &-s^4{s'}^4+704s^2t{s'}^2{t'}-256t^2{t'}^2+84st^2{s'}{t'}^2\bigr),\\ c_0=&\ \bigl{[}16s^4t{s'}^6-128s^2t^2{s'}^6-4s^3t^2{s'}^6+256t^3{s'}^6+144st^3{s'}^6-27t^4{s'}^6 \\ &+1280s^2t^2{s'}^4{t'}+176s^3t^2{s'}^4{t'}-2048t^3{s'}^4{t'}-1728st^3{s'}^4{t'}+540t^4{s'}^4{t'} \\ &-704s^3t^2{s'}^2{t'}^2+4096t^3{s'}^2{t'}^2+4864st^3{s'}^2{t'}^2-720t^4{s'}^2{t'}^2+256t^3{s'}^3{t'}^2\\ &+1008st^3{s'}^3{t'}^2 -270t^4{s'}^3{t'}^2-1024st^3{t'}^3+64t^4{t'}^3-72t^4{s'}{t'}^3\bigr{]}\\ &-256s^4t{s'}^4{t'}-4096s^2t^2{s'}^2{t'}^2 -76s^3t^2{s'}^3{t'}^2-704st^3{s'}{t'}^3-(27t^4{t'}^4/2) \end{align*} with simplifying notation $\bigl{[}a\bigr{]}:=a+\iota(a)$. It follows from the definition of $\iota$ that $\iota (s,t,s',t')=(s',t',s,t)$.
Note that the polynomial $\R_{\bs,\bs'}(X)$ splits into two factors of degree $12$ over the field $k(\bs,\bs')(\sqrt{D_{\bs}D_{\bs'}})$ as \begin{align*} \R_{\bs,\bs'}(X)= \Bigl(G_{\bs,\bs'}^1(X)+\sqrt{D_{\bs}D_{\bs'}}\,G_{\bs,\bs'}^2(X)\Bigr) \Bigl(G_{\bs,\bs'}^1(X)-\sqrt{D_{\bs}D_{\bs'}}\,G_{\bs,\bs'}^2(X)\Bigr), \end{align*} and one of the two factors of $\R_{\bs,\bs'}(X)$ above is the $\A_4\times \A_4'$-relative $\A_4''$-invariant resolvent polynomial $\mathcal{RP}_{P,\A_4\times \A_4',f_{\bs,\bs'}^{\cS_4}}(X)$ of $f_{\bs,\bs'}^{\cS_4}(X)$ by $P$.
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we put \[ L_\ba:=\Spl_M f_\ba^{\cS_4}(X),\quad G_\ba:=\Gal(f_\ba^{\cS_4}/M),\quad G_{\ba,\ba'}:=\Gal(f_{\ba,\ba'}^{\cS_4}/M). \]
By Theorem \ref{thfun}, we get an answer to $\mathbf{Int}(f_{\bs}^{\cS_4}/M)$ via $\R_{\bs,\bs'}(X)$. Here we treat only the case where both $f_\ba^{\cS_4}(X)$ and $f_{\ba'}^{\cS_4}(X)$ are irreducible over $M$ and $G_\ba=\cS_4$ or $\A_4$. We will treat the case where $G_\ba\leq \D_4$ (resp. $f_{\ba}^{\cS_4}(X)$ is reducible) in Section \ref{seD4} (Table $3$ and Table $4$ in Theorem \ref{thD4}) (resp. Section \ref{seRed} (Table $5$ and Table $6$ in Theorem \ref{thred})).
An answer to $\mathbf{Int}(f_{\bs}^{\cS_4}/M)$ is given as follows:
\begin{theorem}\label{thS4A4} For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, assume that both of $f_\ba^{\cS_4}(X)$ and $f_{\ba'}^{\cS_4}(X)$ are irreducible over $M$, $\#G_{\ba}\geq \#G_{\ba'}$ and $G_{\ba}=\cS_4$ or $\A_4$. If $G_{\ba}=\cS_4$ $($resp. $G_{\ba}=\A_4$$)$ then an answer to the intersection problem of $f_{s,t}^{\cS_4}(X)$ is given by Table $1$ according to the decomposition types ${\rm DT}(\R_{\ba,\ba'})$. \end{theorem}
\begin{center} {\rm Table} $1$\vspace*{3mm}\\ {\small
\begin{tabular}{|c|c|l|l|c|l|l|}\hline
$G_\ba$& $G_{\ba'}$ & & GAP ID & $G_{\ba,{\ba'}}$ & & ${\rm DT}(\R_{\ba,\ba'})$ \\ \hline
& & (I-1) & $[576,8653]$ & $\cS_4\times \cS_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} & \raisebox{-1.6ex}[0cm][0cm]{$\cS_4$} & (I-2) & $[288,1026]$ & $(\A_4\times \A_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{3-7} & & (I-3) & $[96,227]$ & $(\V_4\times \V_4)\rtimes S_3$ & $[L_\ba\cap L_{\ba'}:M]=6$ & $12,8,4$\\ \cline{3-7} & & (I-4) & $[24,12]$ & $\cS_4$ & $L_\ba=L_{\ba'}$ & $8,6,6,3,1$\\ \cline{2-7}
& $\A_4$ & (I-5) & $[288,1024]$ & $\cS_4\times \A_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7}
& & (I-6) & $[192,1472]$ & $\cS_4\times \D_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} \raisebox{-1.6ex}[0cm][0cm]{$\cS_4$} & \raisebox{-1.6ex}[0cm][0cm]{$\D_4$} & (I-7) & $[96,187]$ & $(\A_4\times \C_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $24$\\ \cline{3-7} & & (I-8) & $[96,195]$ & $(\A_4\times \V_4)\rtimes \C_2$
& $[L_\ba\cap L_{\ba'}:M]=2$ & $24$\\ \cline{3-7} & & (I-9) & $[96,195]$ & $(\A_4\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{2-7}
& \raisebox{-1.6ex}[0cm][0cm]{$\C_4$} & (I-10) & $[96,186]$ & $\cS_4\times \C_4$ & $L_\ba\neq L_{\ba'}$ & $24$\\ \cline{3-7} & & (I-11) & $[48,30]$ & $\A_4\rtimes \C_4$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{2-7}
& \raisebox{-1.6ex}[0cm][0cm]{$\V_4$} & (I-12) & $[96,226]$ & $\cS_4\times \V_4$ & $L_\ba\neq L_{\ba'}$ & $24$\\ \cline{3-7} & & (I-13) & $[48,48]$ & $\cS_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $24$\\ \cline{1-7}
& & (I-14) & $[144,184]$ & $\A_4\times \A_4$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \cline{3-7} & $\A_4$ & (I-15) & $[48,50]$ & $(\V_4\times \V_4)\rtimes \C_3$ & $[L_\ba\cap L_{\ba'}:M]=3$ & $12,4,4,4$\\ \cline{3-7} \raisebox{-1.6ex}[0cm][0cm]{$\A_4$} & & (I-16) & $[12,3]$ & $\A_4$ & $L_\ba=L_{\ba'}$ & $6,6,4,4,3,1$\\ \cline{2-7}
& $\D_4$ & (I-17) & $[96,197]$ & $\A_4\times \D_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7}
& $\C_4$ & (I-18) & $[48,31]$ & $\A_4\times \C_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7}
& $\V_4$ & (I-19) & $[48,49]$ & $\A_4\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \cline{1-7}
\end{tabular} }\vspace*{5mm} \end{center}
We checked the decomposition types by using the computer algebra system GAP \cite{GAP} (with the command \texttt{DoubleCosetRepsAndSizes}). We note that the cases $\{$(I-6),(I-7),(I-8)$\}$ and $\{$(I-12),(I-13)$\}$ may be distinguished by comparing the quadratic extensions of $M$ in the splitting fields. In the case where $G_\ba=\cS_4$, the unique quadratic extension of $M$ is given by \begin{align*} M\bigl(\sqrt{b(16a^4 - 128a^2b - 4a^3b + 256b^2 + 144ab^2 - 27b^3)}\bigr) \end{align*} (see Section \ref{seD4} for the case of $G_{\ba'}=\D_4$).
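To make this verification reproducible, we recall that the degrees in ${\rm DT}(\R_{\ba,\ba'})$ can be read off from the sizes of the double cosets $\cS_4''\backslash(\cS_4\times \cS_4')/G_{\ba,\ba'}$ divided by $\#\cS_4''=24$ (cf. Theorem \ref{thfun}). The following GAP commands give a minimal sketch of this computation for the case (I-4), in which $G_{\ba,\ba'}$ is conjugate to the diagonal subgroup $\cS_4''$ itself; the variable names below are ours and only standard GAP functions are used.
\begin{verbatim}
G := SymmetricGroup(4);;
W := DirectProduct(G, G);;
e1 := Embedding(W, 1);;  e2 := Embedding(W, 2);;
diag := Subgroup(W, List(GeneratorsOfGroup(G),
                         g -> Image(e1, g) * Image(e2, g)));;
dc := DoubleCosetRepsAndSizes(W, diag, diag);;
SortedList(List(dc, r -> r[2] / Size(diag)));
# [ 1, 3, 6, 6, 8 ]  -- the decomposition type 8,6,6,3,1 of (I-4)
\end{verbatim}
Replacing the third argument \texttt{diag} of \texttt{DoubleCosetRepsAndSizes} by (a conjugate of) $G_{\ba,\ba'}$ gives the other rows of Table $1$ in the same way.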
\begin{example}\label{exS4A4} We give some numerical examples of Theorem \ref{thS4A4}. \\
(i) Take $M=\mathbb{Q}$ and $\ba=(0,1)$, $\ba'=(2,1)$. Then \[ f_\ba^{\cS_4}(X)=X^4+X+1\quad \mathrm{and}\quad f_{\ba'}^{\cS_4}(X)=X^4+2X^3+X+1. \] We see that $G_\ba=G_{\ba'}=\cS_4$ and $\R_{\ba,\ba'}(X)$ splits over $\mathbb{Q}$ as \begin{align*} \R_{\ba,\ba'}(X)=&\ (X-3)(X+1)^3(X^6-6X^5+12X^4-8X^3-64X^2+128X-64)\\ &\cdot(X^6+6X^5+24X^4+56X^3+32X^2-32X-256)\\ &\cdot(X^8+6X^6-16X^5-89X^4-48X^3+686X^2-1048X+4233). \end{align*} Hence it follows from Theorem \ref{thS4A4} that $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$.\\
(ii) Take $M=\mathbb{Q}$ and \[ \ba=(0,b),\quad \ba'=(a',b')=(2b,b^2)\quad\mathrm{with}\quad b=\frac{2^6}{3^2}. \] Then we see that $G_\ba=G_{\ba'}=\A_4$ and $\R_{\ba,\ba'}(X)$ splits over $\mathbb{Q}$ as \begin{align*} \R_{\ba,\ba'}(X)=&\ \Bigl(X-\frac{2^6}{3}\Bigr)\Bigl(X+\frac{2^6}{3^2}\Bigr)^3 \Bigl(X^4+\frac{2^{13}}{3^3}X^2-\frac{2^{21}}{3^6}X+\frac{2^{24}}{3^6}\Bigr) \Bigl(X^4-\frac{2^{21}}{3^{6}}X+\frac{2^{26}}{3^7}\Bigr)\\ &\cdot\Bigl(X^6-\frac{2^7}{3}X^5+\frac{2^{14}}{3^3}X^4-\frac{2^{21}}{3^6}X^3 -\frac{2^{24}}{3^6}X^2+\frac{2^{31}}{3^8}X-\frac{2^{36}}{3^{10}}\Bigr)\\ &\cdot\Bigl(X^6+\frac{2^7}{3}X^5+\frac{2^{15}}{3^3}X^4+\frac{2^{21}\cdot7}{3^6}X^3 +\frac{2^{24}\cdot29}{3^7}X^2+\frac{2^{31}\cdot13}{3^9}X+\frac{2^{36}\cdot19}{3^{12}}\Bigr). \end{align*} By Theorem \ref{thS4A4}, we get $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$.\\ \end{example}
\subsection{Isomorphism problem of $f_\bs^{\cS_4}(X)=X^4+sX^2+tX+t$}\label{seIsoS4}
~\\
Now we consider the problems $\mathbf{Isom}(f_\bs^{\cS_4}/M)$ and $\mathbf{Isom^\infty}(f_\bs^{\cS_4}/M)$. By Theorem \ref{thS4A4}, we have a criterion for the field isomorphism problem $\mathbf{Isom}(f_\bs^{\cS_4}/M)$ for fixed $\ba$, $\ba'\in M^2$. However, it is not clear from this criterion when $\R_{\ba,\ba'}(X)$ has a linear factor over $M$. In particular, it does not yet answer $\mathbf{Isom^\infty}(f_\bs^{\cS_4}/M)$, i.e., whether or not, for a fixed $\ba\in M^2$, there exist infinitely many $\ba'\in M^2$ such that $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$.
In \cite{HM07}, \cite{HM} we gave answers to $\mathbf{Isom^\infty}(f_s^{\cS_3}/M)$ and $\mathbf{Isom^\infty}(f_s^{\C_3}/M)$ by using the formal Tschirnhausen transformation (cf. Section \ref{sePre}). We apply the same technique to $\mathbf{Isom^\infty}(f_\bs^{\cS_4}/M)$. Here we explain an outline of the argument; the proof will be given in the next subsection. \\
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $\ba\neq\ba'$ and $D_\ba\cdot D_{\ba'}\neq 0$, we take $c_{i,\pi}=\omega_{f_{\ba,\ba'}}(\pi(u_i))$, $(i=0,\ldots,3)$, and the field of coefficients $M(c_{0,\pi},\ldots,c_{3,\pi})$ of Tschirnhausen transformations from $f_\ba^{\cS_4}(X)$ to $f_{\ba'}^{\cS_4}(X)$ as in Subsection \ref{subseTschirn}. Then we have
\begin{align}
f_{\ba'}^{\cS_4}(X)=\mathrm{Resultant}_Y
(f_{\ba}^{\cS_4}(Y),X-(c_{0,\pi}+c_{1,\pi}Y+c_{2,\pi}Y^2+c_{3,\pi}Y^3)).\label{eqRR}
\end{align}
By Lemma \ref{lemMM}, the splitting fields of $f_{\ba}^{\cS_4}(X)$ and of $f_{\ba'}^{\cS_4}(X)$ over $M$ coincide if and only if $M=M(c_{0,\pi},\ldots,c_{3,\pi})$ for some $\pi\in\cS_4$ unless $\Gal(f_{\ba}^{\cS_4}/M)=\D_4$. Thus we take such $\pi\in\cS_4$, and put
\[
(x,y,z,w):=(c_{0,\pi},\ldots,c_{3,\pi}).
\]
From the assumption $\ba\neq\ba'$, we see $(z,w)\neq (0,0)$. Hence we have to consider the two cases (i) $w=0$ and $z\neq 0$, (ii) $w\neq 0$. In the case of (i) (resp. of (ii)), we put
\[
p:=\frac{2y}{z},\quad \Bigl(\mathrm{resp.}\quad u:=\frac{4y}{w},\ v:=\frac{2z}{w}\Bigr).
\]
Then by the equality (\ref{eqRR}) we get the following result:
\begin{theorem}\label{thS4} For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $\ba\neq\ba'$ and $D_\ba\cdot D_{\ba'}\neq 0$, two $M$-algebras $M[X]/(f_\ba^{\cS_4}(X))$ and $M[X]/(f_{\ba'}^{\cS_4}(X))$ are $M$-isomorphic if and only if either $\mathrm{(i)}$ there exists $p\in M$ such that \begin{align} a'=\frac{P_{\ba,p} Q_{\ba,p}^2}{R_{\ba,p}^2},\quad b'=\frac{Q_{\ba,p}^4}{R_{\ba,p}^3}\label{eq1thS4} \end{align} where $P_{\bs,p},Q_{\bs,p},R_{\bs,p}\in M[\bs,p]$ with $\bs=(s,t)$ are given by \begin{align*} P_{\bs,p}&=-2(s^2-4t)+6tp+sp^2,\\ Q_{\bs,p}&=-8t^2-8stp-2(s^2-4t)p^2+tp^3,\\ R_{\bs,p}&=s^4-8s^2t+16t^2+8st^2+2t(s^2-4t)p+s(s^2-4t)p^2-stp^3+tp^4, \end{align*} or $\mathrm{(ii)}$ there exist $u,v\in M$ such that \begin{align} a'=\frac{U_{\ba,u,v} V_{\ba,u,v}^2}{W_{\ba,u,v}^2},\quad b'=\frac{V_{\ba,u,v}^4}{W_{\ba,u,v}^3}\label{eq2thS4} \end{align} where $U_{\bs,u,v},V_{\bs,u,v},W_{\bs,u,v}\in M[\bs,u,v]$ with $\bs=(s,t)$ are given by \begin{align*} U_{\bs,u,v}&=16s^3-48st-6t^2-8s^2u+16tu+su^2-28stv+6tuv-2s^2v^2+8tv^2,\\ V_{\bs,u,v}&=96s^3t-96st^2+8t^3-64s^2tu+16t^2u+14stu^2-tu^3+32s^4v-160s^2tv\\ &+128t^2v-40st^2v-16s^3uv+64stuv+12t^2uv+2s^2u^2v-8tu^2v-32s^2tv^2\\ &+64t^2v^2+8stuv^2+8t^2v^3,\\ W_{\bs,u,v}&=144s^3t^2+256t^3+144st^3-3t^4-128st^2u-120s^2t^2u-32t^3u+16s^2tu^2\\ &+32t^2u^2+33st^2u^2-8stu^3-3t^2u^3+tu^4+96s^4tv-288s^2t^2v+256t^3v+68st^3v\\ &-64s^3tuv+80st^2uv-18t^3uv+14s^2tu^2v-stu^3v+16s^5v^2-112s^3tv^2+192st^2v^2\\ &+2s^2t^2v^2+120t^3v^2-8s^4uv^2+48s^2tuv^2-64t^2uv^2+s^3u^2v^2-4stu^2v^2-4s^3tv^3\\ &+16st^2v^3+24t^3v^3+2s^2tuv^3-8t^2uv^3+s^4v^4-8s^2tv^4+16t^2v^4+8st^2v^4. \end{align*} \end{theorem}
\begin{corollary}[An answer to $\mathbf{Isom}(f_\bs^{\cS_4}/M)$]\label{cor12}
Let $\ba$, $\ba'\in M^2$ be as in Theorem \ref{thS4}. We also assume that $\Gal(f_\ba^{\cS_4}/M)\neq \D_4$. Then two splitting fields of $f_\ba^{\cS_4}(X)=X^4+aX^2+bX+b$ and of $f_{\ba'}^{\cS_4}(X)=X^4+a'X^2+b'X+b'$ over $M$ coincide if and only if either $\mathrm{(i)}$ there exists $p\in M$ which satisfies $(\ref{eq1thS4})$ or $\mathrm{(ii)}$ there exists a pair of $u,v\in M$ which satisfies $(\ref{eq2thS4})$.
\end{corollary}
By Theorem \ref{thS4} we obtain an answer to $\mathbf{Isom^\infty}(f_\bs^{\cS_4}/M)$ as follows: We use the case (i) of Theorem \ref{thS4} (we may also use (ii) instead of (i)). We regard $p$ as an independent parameter over $M$ formally and take $f_{\ba'}^{\cS_4}(X)\in M(p)[X]$ where
\begin{align*}
a'=\frac{P_{\ba,p} Q_{\ba,p}^2}{R_{\ba,p}^2},\quad b'=\frac{Q_{\ba,p}^4}{R_{\ba,p}^3}
\end{align*}
as in (\ref{eq1thS4}). Then we have $\Spl_{M(p)} f_\ba^{\cS_4}(X)=\Spl_{M(p)} f_{\ba'}^{\cS_4}(X)$.
The discriminant of $f_{\ba'}^{\cS_4}(X)$ with respect to $X$ is given by
\[
\frac{D_\ba Q_{\ba,p}^{12}S_{\ba,p}^2}{R_{\ba,p}^{12}}
\]
where $S_{\ba,p}=-64b^2+16(a^2-4b)p^2+8ap^4+p^6$. Thus for $p\in M$ we have $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$ unless $Q_{\ba,p}R_{\ba,p}S_{\ba,p}=0$. Since only finitely many $p\in M$ satisfy $Q_{\ba,p}R_{\ba,p}S_{\ba,p}=0$, we have the following corollary:
\begin{corollary}[An answer to $\mathbf{Isom}^\infty(f_\bs^{\cS_4}/M)$]\label{cor2} Let $M\supset k$ be an infinite field. For $\ba\in M^2$ with $D_\ba\neq 0$, there exist infinitely many $\ba'\in M^2$ such that $\Spl_M f_\ba^{\cS_4}(X)=\Spl_M f_{\ba'}^{\cS_4}(X)$. \end{corollary}
\begin{remark}
By eliminating the variable $v$ (resp. $u$) from the two equalities in (\ref{eq2thS4}) of Theorem \ref{thS4}, we get the equation $h=0$ where $h\in M(a,b,a',b')[u]$ $($resp. $h\in M(a,b,a',b')[v]$$)$ is a polynomial of degree $24$. This polynomial $h$ coincides with the $\cS_4\times \cS_4'$-relative $\cS_4''$-invariant resolvent polynomial of $f_{\ba,\ba'}^{\cS_4}(X)$ by $u$ (resp. $v$); for, from the definition of $u$ (resp. $v$), we may regard $u=4u_1/u_3$ (resp. $v=2u_2/u_3$) where $u_i$ is the formal Tschirnhausen coefficient which is defined in (\ref{defu}). Hence from Theorem \ref{thS4} we also get a solution to $\mathbf{Int}(f_{\bs}^{\cS_4}/M)$ by using Table $1$ via $\mathrm{DT}(h)$ instead of $\mathrm{DT}(\R_{\ba,\ba'})$.
\end{remark}
\begin{example} We give some numerical examples of Theorem \ref{thS4}. Note that we always assume $D_\ba\neq 0$ for $\ba=(a,b)\in M^2$. \\
(i) If we take $p=0$ then we have \begin{align*} (P_\ba,Q_\ba,R_\ba)=(-2(a^2-4b),-8b^2,a^4-8a^2b+16b^2+8ab^2). \end{align*} Hence two splitting fields of $f_\ba^{\cS_4}(X)=X^4+aX^2+bX+b$ and of \begin{align*} f_{\ba'}^{\cS_4}(X)=X^4-\frac{2^7(a^2-4b)b^4}{(a^4-8a^2b+16b^2+8ab^2)^2}X^2 +\frac{2^{12}b^8}{(a^4-8a^2b+16b^2+8ab^2)^3}(X+1) \end{align*} over $M$ coincide. The corresponding Tschirnhausen transformation from $f_{\ba}^{\cS_4}(X)$ to $f_{\ba'}^{\cS_4}(X)$ as in (\ref{eqRR}) is given by \[ f_{\ba'}^{\cS_4}(X)=\mathrm{Resultant}_Y \Bigl(f_{\ba}^{\cS_4}(Y),X-\Bigl(-\frac{8b^2(a+2Y^2)}{(a^4-8a^2b+16b^2+8ab^2)}\Bigr)\Bigr). \] In particular, if we take $a=0$ then we see that the polynomials \[ f_{0,b}^{\cS_4}(X)=X^4+bX+b\quad \mathrm{and}\quad f_{2b,b^2}^{\cS_4}(X)=X^4+2bX^2+b^2(X+1) \] have the same splitting field over $M$. We remark that this example is a generalization of Example \ref{exS4A4} (i), (ii). \\
(ii) If we take $p=2$ then we have \begin{align*} \bigl(P_\ba,Q_\ba,R_\ba\bigr)&=\bigl(-2(-2a+a^2-10b),-8(a^2-5b+2ab+b^2),\\ &\hspace*{9mm} 4a^3+a^4+16b-24ab-4a^2b+8ab^2\bigr). \end{align*} Hence two splitting fields of $f_\ba^{\cS_4}(X)=X^4+aX^2+bX+b$ and of \begin{align*} f_{\ba'}^{\cS_4}(X)=X^4&+\frac{2^7(2a-a^2+10b)(a^2-5b+2ab+b^2)^2} {(4a^3+a^4+16b-24ab-4a^2b+8ab^2)^2}X^2\\ &+\frac{2^{12}(a^2-5b+2ab+b^2)^4}{(4a^3+a^4+16b-24ab-4a^2b+8ab^2)^3}(X+1) \end{align*} over $M$ coincide. The corresponding Tschirnhausen transformation from $f_{\ba}^{\cS_4}(X)$ to $f_{\ba'}^{\cS_4}(X)$ is given by \[ f_{\ba'}^{\cS_4}(X)=\mathrm{Resultant}_Y \Bigl(f_{\ba}^{\cS_4}(Y),X-\Bigl(-\frac{8(a^2-5b+2ab+b^2)(a+2Y+2Y^2)} {4a^3+a^4+16b-24ab-4a^2b+8ab^2}\Bigr)\Bigr). \] In particular, if we take $a=0$ then we see that the polynomials \[ f_{0,b}^{\cS_4}(X)=X^4+bX+b\ \ \mathrm{and}\ \ f_{5b(b-5)^2,b(b-5)^4}^{\cS_4}(X)=X^4+5b(b-5)^2X^2+b(b-5)^4(X+1) \] have the same splitting field over $M$. \\
(iii) If we take $u=v=0$ then we have \begin{align*} \bigl(U_\ba,V_\ba,W_\ba\bigr)=\bigl(&2(8a^3-24ab-3b^2),8b(12a^3-12ab+b^2),\\ &\ b^2(144a^3+256b+144ab-3b^2)\bigr). \end{align*} Hence two splitting fields of $f_\ba^{\cS_4}(X)=X^4+aX^2+bX+b$ and of \begin{align*} f_{\ba'}^{\cS_4}(X)=X^4&+\frac{2^7(8a^3-24ab-3b^2)(12a^3-12ab+b^2)^2} {b^2(144a^3+256b+144ab-3b^2)^2}X^2\\ &+\frac{2^{12}(12a^3-12ab+b^2)^4}{b^2(144a^3+256b+144ab-3b^2)^3}(X+1) \end{align*} over $M$ coincide. The corresponding Tschirnhausen transformation from $f_{\ba}^{\cS_4}(X)$ to $f_{\ba'}^{\cS_4}(X)$ is given by \[ f_{\ba'}^{\cS_4}(X)=\mathrm{Resultant}_Y \Bigl(f_{\ba}^{\cS_4}(Y),X-\Bigl(-\frac{8(12a^3-12ab+b^2)(3b+4Y^3)} {b(144a^3+256b+144ab-3b^2)}\Bigr)\Bigr). \] In particular, if we take $a=0$ then we see that the polynomials \[ f_{0,b}^{\cS_4}(X)=X^4+bX+b\ \ \mathrm{and}\ \ f_{-6B^2,-8B^3}^{\cS_4}(X)=X^4-6B^2X^2-8B^3(X+1)\ \ \mathrm{with}\ \ B=\frac{8b}{3b-256} \] have the same splitting field over $M$. We give examples in the case of $b,B\in\mathbb{Z}$ in Table $2$.
\begin{center} {\rm Table} $2$\vspace*{3mm}\\
\begin{tabular}{|c||c|c|c|c|c|c|}\hline $b$ & $-256$ & $64$ & $80$ & $84$ & $85$ & $86$\\ \hline $B$ & $2$ & $-8$ & $-40$ & $-168$ & $-680$ & $344$\\ \hline $-6B^2$ & $-24$ & $-384$ & $-9600$ & $-169344$ & $-2774400$ & $-710016$\\ \hline $-8B^3$ & $-64$ & $4096$ & $512000$ & $37933056$ & $2515456000$ & $-325660672$\\ \hline
\end{tabular} \vspace*{2mm}\\
\begin{tabular}{|c||c|c|c|c|c|}\hline $b$ & $88$ & $96$ & $128$ & $256$ & $768$ \\ \hline $B$ & $88$ & $24$ & $8$ & $4$ & $3$\\ \hline $-6B^2$ & $-46464$ & $-3456$ & $-384$ & $-96$ & $-54$\\ \hline $-8B^3$ & $-5451776$ & $-110592$ & $-4096$ & $-512$ & $-216$\\ \hline
\end{tabular}\hspace*{2.8cm} \vspace*{5mm} \end{center}
We note that $\Gal(f_{0,b}^{\cS_4}/\mathbb{Q})=\cS_4$ for the $b$'s in Table $2$ except for $b=-256$, $128$, $768$ and that $\Gal(f_{0,b}^{\cS_4}/\mathbb{Q})=\D_4$ for the exceptional cases $b=-256$, $128$, $768$. \\ \end{example}
\subsection{Proof of Theorem \ref{thS4}}\label{seProof}
~\\
By Lemma \ref{lemMM}, two splitting fields of $f_\ba^{\cS_4}(X)$ and of $f_{\ba'}^{\cS_4}(X)$ over $M$ coincide if and only if there exist $x,y,z,w\in M$ such that \begin{align} f_{\ba'}^{\cS_4}(X)=R'(x,y,z,w,a,b;X)\label{eqR} \end{align} where \begin{align*} &R'(x,y,z,w,s,t;X)\\ &:=\mathrm{Resultant}_Y(f_{\ba}^{\cS_4}(Y),X-(x+yY+zY^2+wY^3))\\ &=t^3w^4+3st^2w^3x-t^3w^3x+s^3w^2x^2-3stw^2x^2+3t^2w^2x^2-3twx^3+x^4-2st^2w^3y\\ &+t^3w^3y-s^2tw^2xy-5t^2w^2xy-2s^2wx^2y+4twx^2y+s^2tw^2y^2+2t^2w^2y^2+2stwxy^2\\ &+sx^2y^2-2stwy^3-txy^3+ty^4-t^3w^3z-2s^2tw^2xz+4t^2w^2xz+st^2w^2xz+stwx^2z\\ &-2sx^3z-st^2w^2yz+4stwxyz-3t^2wxyz+3tx^2yz+3t^2wy^2z-4txy^2z+st^2w^2z^2\\ &+t^2wxz^2+s^2x^2z^2+2tx^2z^2-4t^2wyz^2-stxyz^2+sty^2z^2-2stxz^3+t^2xz^3-t^2yz^3\\ &+t^2z^4+\bigl(-3st^2w^3+t^3w^3-2s^3w^2x+6stw^2x-6t^2w^2x+9twx^2-4x^3+s^2tw^2y\\ &+5t^2w^2y+4s^2wxy-8twxy-2stwy^2-2sxy^2+ty^3+2s^2tw^2z-4t^2w^2z-st^2w^2z\\ &-2stwxz+6sx^2z-4stwyz+3t^2wyz-6txyz+4ty^2z-t^2wz^2-2s^2xz^2-4txz^2\\ &+styz^2+2stz^3-t^2z^3\bigr)X+\bigl(s^3w^2-3stw^2+3t^2w^2-9twx+6x^2-2s^2wy+4twy\\ &+sy^2+stwz-6sxz+3tyz+s^2z^2+2tz^2\bigr)X^2+\bigl(3tw-4x+2sz\bigr)X^3+X^4.\\ \end{align*}
We first see that $(z,w)\neq (0,0)$ as follows: If we assume $(z,w)=(0,0)$ then we would have
\begin{align*}
R'(x,y,0,0,s,t;X)=x^4&+sx^2y^2-txy^3+ty^4+(-4x^3-2sxy^2+ty^3)X\\
&+(6x^2+sy^2)X^2-4xX^3+X^4.
\end{align*}
By comparing the coefficients of $X^3$ in (\ref{eqR}), we obtain $x=0$. It also follows that $y=1$: indeed $R'(0,y,0,0,s,t;X)=X^4+sy^2X^2+ty^3X+ty^4$, and since the coefficient of $X$ and the constant term of $f_{\ba'}^{\cS_4}(X)$ coincide, we get $ty^3=ty^4$ with $t\neq 0$ and $y\neq 0$ (otherwise $b'=0$), whence $y=1$. Thus we obtain $\ba=\ba'$, which contradicts the assumption. \\
{\bf (i) The case of $w=0$ and $z\neq 0$.} By comparing the coefficients of $X^3$ in (\ref{eqR}), we see $-4x+2sz=0$; hence we have $x=sz/2$. By direct computation, we then have
\begin{align*}
R'(sz/2,y,z,0,s,t;X)=c_0+c_1X+c_2X^2+X^4
\end{align*}
where
\begin{align*}
c_0&=\bigl(16ty^4-8sty^3z+4s^3y^2z^2-16sty^2z^2+4s^2tyz^3\\
&\quad\ -16t^2yz^3+s^4z^4-8s^2tz^4+16t^2z^4+8st^2z^4\bigr)\big/16,\\
c_1&=ty^3-s^2y^2z+4ty^2z-2styz^2-t^2z^3,\\
c_2&=\bigl(2sy^2+6tyz-s^2z^2+4tz^2\bigr)\big/2.
\end{align*}
Now it follows from (\ref{eqR}) that $c_0=c_1$. We put
\[
p:=\frac{2y}{z}.
\]
Then, by $c_0=c_1$, we get an equation which is linear in $z$. From this equation we have
\begin{align*}
z&=\frac{2(-2p^2s^2+8p^2t+p^3t-8pst-8t^2)}{p^2s^3+s^4+p^4t-4p^2st-p^3st-8s^2t
+2ps^2t+16t^2-8pt^2+8st^2}=:z'.
\end{align*}
Thus we get $\widetilde\bx:=(x,y,z)=(sz'/2,pz'/2,z')$ and
\begin{align*}
R'(\widetilde\bx,0,s,t;X)=\frac{Q_{\bs,p}^4}{R_{\bs,p}^3}+\frac{Q_{\bs,p}^4}{R_{\bs,p}^3}X
+\frac{P_{\bs,p} Q_{\bs,p}^2}{R_{\bs,p}^2}X^2+X^4.
\end{align*}
{\bf (ii) The case of $w\neq 0$.} By comparing the coefficients of $X^3$ in (\ref{eqR}), we see $3tw-4x+2sz=0$. Hence follows $x=(3tw+2sz)/4$. By direct computation, we have \begin{align*} R'((3tw+2sz)/4,y,z,w,s,t;X)=C_0+C_1X+C_2X^2+X^4 \end{align*} where \begin{align*} C_0&=\bigl(144s^3t^2w^4+256t^3w^4+144st^3w^4-3t^4w^4-512st^2w^3y-480s^2t^2w^3y\\ &\quad\ -128t^3w^3y+256s^2tw^2y^2+512t^2w^2y^2+528st^2w^2y^2-512stwy^3-192t^2wy^3\\ &\quad\ +256ty^4+192s^4tw^3z-576s^2t^2w^3z+512t^3w^3z+136st^3w^3z-512s^3tw^2yz\\ &\quad\ +640st^2w^2yz-144t^3w^2yz+448s^2twy^2z-128sty^3z+64s^5w^2z^2-448s^3tw^2z^2\\ &\quad\ +768st^2w^2z^2+8s^2t^2w^2z^2+480t^3w^2z^2-128s^4wyz^2+768s^2twyz^2\\ &\quad\ -1024t^2wyz^2+64s^3y^2z^2-256sty^2z^2-32s^3twz^3+128st^2wz^3+192t^3wz^3\\ &\quad\ +64s^2tyz^3-256t^2yz^3+16s^4z^4-128s^2tz^4+256t^2z^4+128st^2z^4\bigr)\big/256,\\ C_1&=\bigl(-12s^3tw^3+12st^2w^3-t^3w^3+32s^2tw^2y-8t^2w^2y-28stwy^2+8ty^3\\ &\quad\ -8s^4w^2z+40s^2tw^2z-32t^2w^2z+10st^2w^2z+16s^3wyz-64stwyz\\ &\quad\ -12t^2wyz -8s^2y^2z+32ty^2z+16s^2twz^2-32t^2wz^2-16styz^2-8t^2z^3\bigr)\big/8,\\ C_2&=\bigl(8s^3w^2-24stw^2-3t^2w^2-16s^2wy+32twy+8sy^2-28stwz+24tyz\\ &\quad\ -4s^2z^2+16tz^2\bigr)\big/8. \end{align*}
Now it follows from (\ref{eqR}) that $C_0=C_1$. We put
\[
u:=\frac{4y}{w},\quad v:=\frac{2z}{w}.
\]
Then, by $C_0=C_1$, we get an equation which is linear in $w$. From this equation we have
\begin{align*}
w&=-4\bigl(96s^3t-96st^2+8t^3-64s^2tu+16t^2u+14stu^2-tu^3+32s^4v-160s^2tv\\
&+128t^2v-40st^2v-16s^3uv+64stuv+12t^2uv+2s^2u^2v-8tu^2v-32s^2tv^2\\
&+64t^2v^2+8stuv^2+8t^2v^3\bigr)\big/\bigl(144s^3t^2+256t^3+144st^3-3t^4-128st^2u\\
&-120s^2t^2u-32t^3u+16s^2tu^2+32t^2u^2+33st^2u^2-8stu^3-3t^2u^3+tu^4\\
&+96s^4tv-288s^2t^2v+256t^3v+68st^3v-64s^3tuv+80st^2uv-18t^3uv\\
&+14s^2tu^2v-stu^3v+16s^5v^2-112s^3tv^2+192st^2v^2+2s^2t^2v^2+120t^3v^2\\
&-8s^4uv^2+48s^2tuv^2-64t^2uv^2+s^3u^2v^2-4stu^2v^2-4s^3tv^3+16st^2v^3\\
&+24t^3v^3+2s^2tuv^3-8t^2uv^3+s^4v^4-8s^2tv^4+16t^2v^4+8st^2v^4\bigr)=:w'.
\end{align*}
We finally have $\widetilde\bx:=(x,y,z,w)=((3t+sv)w'/4,uw'/4,vw'/2,w')$ and
\begin{align*}
R'(\widetilde\bx,s,t;X)=\frac{V_{\bs,u,v}^4}{W_{\bs,u,v}^3}+\frac{V_{\bs,u,v}^4}{W_{\bs,u,v}^3}X
+\frac{U_{\bs,u,v} V_{\bs,u,v}^2}{W_{\bs,u,v}^2}X^2+X^4.\quad \qed
\end{align*}
\section{The case of $\D_4$}\label{seD4}
Let $M$ be an infinite overfield of $k$ with char $k\neq 2$. We take a $k$-generic polynomial \[ f_{s,t}^{\D_4}(X)=X^4+sX^2+t\in k(s,t)[X]. \] The discriminant of $f_{s,t}^{\D_4}(X)$ with respect to $X$ is given by \[ D_{s,t}:=16t(s^2-4t)^2. \] We always assume that for $\ba=(a,b)\in M^2$, $f_\ba^{\D_4}(X)$ is separable over $M$, i.e. $D_\ba\neq 0$. \\
\subsection{Transformation to $X^4+sX^2+t$}\label{seTran}
~\\
From the definition of generic polynomial, for a separable quartic polynomial \[ g_4(X)=X^4+a_1X^3+a_2X^2+a_3X+a_4\in M[X],\quad (a_1,a_2,a_3,a_4\in M), \] with $\Gal(g_4/M)\leq \D_4$, there exist $a,b\in M$ such that $\Spl_M f_{a,b}^{\D_4}(X)=\Spl_M g_4(X)$. \\
Indeed, in 1928, Garver \cite{Gar28-1} proved that $g_4(X)$ with $a_1=0$ can be transformed into the form $f_{a,b}^{\D_4}(X)$ by a certain Tschirnhausen transformation for $k=\mathbb{Q}$. The aim of this subsection is to give an explicit formula for such a transformation for general $g_4(X)$ (including the case $a_1\neq 0$) via a resolvent polynomial.
Let $k(\bx):=k(x_1,\ldots,x_4)$ be the rational function field over $k$ with variables $x_1,\ldots,x_4$ as in Section \ref{sePre}. We put
\[
z_1:=x_1-x_3,\qquad z_2:=x_2-x_4;
\]
then the group $\D_4=\langle\sigma,\tau_1\rangle$ where $\sigma=(1234)$ and $\tau_1=(13)$ acts on $k(z_1,z_2)\subset k(x_1,\ldots,x_4)$ as
\[
\sigma\,:\, z_1\mapsto z_2,\quad z_2\mapsto -z_1,\quad
\tau_1\,:\, z_1\mapsto -z_1,\quad z_2\mapsto z_2.
\]
Take an $\cS_4$-primitive $\D_4$-invariant $\theta:=x_1x_3+x_2x_4$. Then we have $k(\bx)^{\D_4}=k(s_1,\ldots,s_4)(\theta)$ where $s_i$ is the $i$-th elementary symmetric function in $\bx$. We consider the minimal polynomial of $z_1$ over $k(s_1,\ldots,s_4)(\theta)$:
\begin{align*}
&(X-z_1)(X+z_1)(X-z_2)(X+z_2)\\
&=X^4+\bigl(-(x_1^2+x_2^2+x_3^2+x_4^2)+2(x_1x_3+x_2x_4)\bigr)X^2+(x_1-x_3)^2(x_2-x_4)^2\\
&=X^4+(-s_1^2+2s_2+2\theta)X^2+(s_2^2-4s_1s_3+16s_4+2s_2\theta-3\theta^2).
\end{align*}
Then we have
\begin{lemma}\label{lemh4} The polynomials $f_\bs(X)=X^4-s_1X^3+s_2X^2-s_3X+s_4$ and \[ X^4+(-s_1^2+2s_2+2\theta)X^2+(s_2^2-4s_1s_3+16s_4+2s_2\theta-3\theta^2) \] are Tschirnhausen equivalent over $k(s_1,\ldots,s_4)(\theta)$. \end{lemma}
\begin{proof} It can be checked directly that \begin{align*} x_1=\frac{s_1^2s_2-4s_2^2+4s_1s_3-s_1^2\theta+4\theta^2 +(s_1^3-4s_1s_2+8s_3)z_1+(s_1^2-4s_2+4\theta)z_1^2}{2(s_1^3-4s_1s_2+8s_3)}. \end{align*} By the successive actions of $\sigma$ on both sides of this equality, we obtain the assertion. \end{proof}
We take an absolute (i.e. $\cS_4$-primitive) $\D_4$-invariant resolvent polynomial of $g_4(X)$ by $\theta$:
\begin{align}
\mathcal{RP}_{\theta,\cS_4,g_4}(X)
&=(X-(x_1x_3+x_2x_4))(X-(x_1x_2+x_3x_4))(X-(x_1x_4+x_2x_3))\label{resS4}\\
&=X^3-a_2X^2+(a_1a_3-4a_4)X-a_3^2-a_1^2a_4+4a_2a_4.\nonumber
\end{align}
We note that if $g_4(X)$ is separable over $M$ then $\mathcal{RP}_{\theta,\cS_4,g_4}(X)$ is also separable over $M$, because their discriminants exactly coincide.
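For instance, for $g_4(X)=X^4-X-1$ we have $\mathcal{RP}_{\theta,\cS_4,g_4}(X)=X^3+4X-1$, and both discriminants are equal to $-283$.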
From the assumption $\Gal(g_4/M)\leq \D_4$, the resolvent polynomial $\mathcal{RP}_{\theta,\cS_4,g_4}(X)$ has a root $c\in M$. By specializing parameters $(s_1,s_2,s_3,s_4)\mapsto (-a_1,a_2,-a_3,a_4)\in M^4$ in Lemma \ref{lemh4}, we get
\begin{lemma}\label{lemTT}
For $(a_1,a_2,a_3,a_4)\in M^4$, we assume that $a_1^3-4a_1a_2+8a_3\neq 0$. Then the two polynomials $g_4(X)=X^4+a_1X^3+a_2X^2+a_3X+a_4$ with $\Gal(g_4/M)\leq \D_4$ and $f_{a,b}^{\D_4}(X)=X^4+aX^2+b$ are Tschirnhausen equivalent over $M$ $($in particular, $\Spl_M g_4(X)=\Spl_M f_{a,b}^{\D_4}(X)$$)$ where
\begin{align*}
a=-a_1^2+2a_2+2c,\quad b=a_2^2-4a_1a_3+16a_4+2a_2c-3c^2
\end{align*}
and $c\in M$ is a root of $\mathcal{RP}_{\theta,\cS_4,g_4}(X)$ as in $(\ref{resS4})$.
\end{lemma}
\begin{remark} We may assume $a_1^3-4a_1a_2+8a_3\neq 0$ for our purpose because we should treat only the case of $a_1=0$ and $a_3\neq 0$. \end{remark}
\begin{example}
Take $M=\mathbb{Q}$ and $(a_1,a_2,a_3,a_4)=(1,1,1,1)$. Then we obtain the $5$-th cyclotomic polynomial $g_4(X)=X^4+X^3+X^2+X+1$ and the corresponding resolvent polynomial $\mathcal{RP}_{\theta,\cS_4,g_4}(X)=X^3-X^2-3X+2$ which splits as $(X-2)(X^2+X-1)$ over $\mathbb{Q}$. Thus we take $c=2$ to have $(a,b)=(5,5)$. Hence it follows that $g_4(X)=X^4+X^3+X^2+X+1$ and $X^4+5X^2+5$ have the same splitting field over $\mathbb{Q}$. \\
\end{example}
\subsection{Intersection problem of $f_\bs^{\D_4}(X)=X^4+sX^2+t$}\label{seIntD4}
~\\
We take the rational function field $k(\bx):=k(x_1,\ldots,x_4)$ over $k$ with variables $x_1,\ldots,x_4$ as in Section \ref{sePre}. In the case of $\D_4$, by a result of the previous subsection, we may specialize $x_3:=-x_1$, $x_4:=-x_2$ and consider the field $k(x_1,x_2)=k(\bx)$. Put \[ \sigma:=(1234),\quad \tau_1:=(13),\quad \tau_2:=(24),\quad \tau_3:=(12)(34),\quad \tau_4:=(14)(23). \] Then the group $\D_4=\langle\sigma,\tau_i\rangle$, $(i=1,\ldots,4)$ acts on $k(x_1,x_2)$ as in the previous subsection by \begin{align*} \sigma\,&:\, x_1\mapsto x_2,\, x_2\mapsto -x_1,\\ \tau_1\,&:\,x_1\mapsto -x_1,\, x_2\mapsto x_2,\quad \tau_2\,:\, x_1\mapsto x_1,\, x_2\mapsto -x_2,\\ \tau_3\,&:\, x_1\mapsto x_2\mapsto x_1,\hspace*{14mm} \tau_4\,:\, x_1\mapsto -x_2,\, x_2\mapsto -x_1. \end{align*} We first see that $k(x_1,x_2)^{\D_4}=k(s,t)=:k(\bs)$ where \[ s:=-x_1^2-x_2^2,\quad t:=x_1^2x_2^2. \] The element $x_1$ (resp. $x_2$) is a $\D_4$-primitive $\langle\tau_2\rangle$-invariant (resp. $\langle\tau_1\rangle$-invariant). Thus two fields $k(x_1,x_2)^{\langle\tau_2\rangle}=k(\bs)(x_1)$ and $k(x_1,x_2)^{\langle\tau_1\rangle}=k(\bs)(x_2)$ are non-Galois quartic fields over $k(\bs)=k(s,t)$.
By Kemper-Mattig's theorem \cite{KM00}, we see that the $\D_4$-primitive $\langle\tau_1\rangle$-invariant resolvent polynomial \begin{align*} f_{s,t}^{\D_4}(X)&:=\mathcal{RP}_{x_2,\D_4}(X)=(X^2-x_1^2)(X^2-x_2^2)\\ &\ =X^4+sX^2+t\in k(s,t)[X] \end{align*} by $x_2$ is a $k$-generic polynomial for $\D_4$. In this section, we treat only the case where $f_\ba^{\D_4}(X)$ is irreducible over $M$. (See Section \ref{seRed} for reducible cases.)
The group $\D_4$ has five elements of order two and they form three $\D_4$-conjugacy classes $\{\tau_1,\tau_2\}$, $\{\tau_3,\tau_4\}$, $\{\sigma^2=\tau_1\tau_2=\tau_3\tau_4\}$, and the group $\langle\sigma^2\rangle$ is the center of $\D_4$.
The element $x_1+x_2$ (resp. $x_1-x_2$) is a $\D_4$-primitive $\langle\tau_3\rangle$-invariant (resp. $\langle\tau_4\rangle$-invariant). Hence the fields $k(\bx)^{\langle\tau_3\rangle}=k(\bs)(x_1+x_2)$ and $k(\bx)^{\langle\tau_4\rangle}=k(\bs)(x_1-x_2)$ are also non-Galois quartic fields over $k(\bs)$. Thus the $\D_4$-primitive $\langle\tau_3\rangle$-invariant resolvent polynomial \begin{align*} g_{s,t}^{\D_4}(X)&:=\mathcal{RP}_{x_1+x_2,\D_4}(X) =\bigl(X^2-(x_1+x_2)^2\bigr)\bigl(X^2-(x_1-x_2)^2\bigr)\\ &\ =X^4+2sX^2+(s^2-4t)\in k(s,t)[X] \end{align*} by $x_1+x_2$ is also a $k$-generic polynomial for $\D_4$. We see $g_{s,t}^{\D_4}(X)=f_{2s,s^2-4t}^{\D_4}(X)$ and that the discriminant of $g_{s,t}^{\D_4}(X)$ with respect to $X$ equals $2^{12}t^2(s^2-4t)$.
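Indeed, $D_{2s,s^2-4t}=16(s^2-4t)\bigl((2s)^2-4(s^2-4t)\bigr)^2=16(s^2-4t)(16t)^2=2^{12}t^2(s^2-4t)$.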
We note that $k(\bs)[X]/(f_{\bs}^{\D_4}(X))$ and $k(\bs)[X]/(g_{\bs}^{\D_4}(X))$ are not isomorphic over $k(\bs)$ although $\Spl_{k(\bs)} f_{\bs}^{\D_4}(X) =\Spl_{k(\bs)} g_{\bs}^{\D_4}(X)=k(x_1,x_2)$. From above we see
\begin{lemma}\label{lemRF} Assume that $\Gal(f_{\ba}^{\D_4}/M)=\D_4$ for $\ba=(a,b)\in M^2$. For $\ba'=(a',b')\in M^2$, the following two conditions are equivalent\,$:$\\ {\rm (i)}\ \ $\Spl_M f_{\ba'}^{\D_4}(X) =\Spl_M f_{\ba}^{\D_4}(X)\,;$\\ {\rm (ii)} $M[X]/(f_{\ba'}^{\D_4}(X))$ is $M$-isomorphic to either $M[X]/(f_{\ba}^{\D_4}(X))$ or $M[X]/(f_{2a,a^2-4b}^{\D_4}(X))$. \end{lemma}
In the case of $\Gal(f_{\ba}^{\D_4}/M)=\C_4$ or $\V_4$, we see that $\Spl_M f_{\ba'}^{\D_4}(X)=\Spl_M f_{\ba}^{\D_4}(X)$ if and only if $M[X]/(f_{\ba'}^{\D_4}(X))\cong M[X]/(f_{\ba}^{\D_4}(X))$ (cf. Corollary \ref{cor1}).
The Galois biquadratic field $k(\bx)^{\langle\sigma^2\rangle}$ of $k(\bs)$ is given as $k(\bx)^{\langle\sigma^2\rangle}=k(\bs)(x_1/x_2)$ which is obtained as the minimal splitting field of \begin{align*} \mathcal{RP}_{x_1/x_2,\D_4}(X) =\Bigl(X^2-\Bigl(\frac{x_1}{x_2}\Bigr)^2\Bigr)\Bigl(X^2-\Bigl(\frac{x_2}{x_1}\Bigr)^2\Bigr) =X^4-\frac{s^2-2t}{t}X^2+1 \end{align*} over $k(\bs)$. The group $\D_4$ has three subgroups $\langle\tau_1,\tau_2\rangle$, $\C_4={\langle\sigma\rangle}$ and $\langle\tau_3,\tau_4\rangle$ of index two. The cyclic group $\C_4=\langle\sigma\rangle$ acts on $k(\bx)^{\langle\sigma^2\rangle}=k(\bs)(x_1/x_2)$ by $\sigma\,:\, x_1/x_2\mapsto -x_2/x_1$. Hence we take \begin{align*} u:=\frac{x_1}{x_2}-\frac{x_2}{x_1}=\frac{x_1^2-x_2^2}{x_1x_2}=\sqrt{(s^2-4t)/t},\quad v:=x_1x_2=\sqrt{t}. \end{align*} Then three quadratic fields $k(\bx)^{\langle\sigma\rangle}$, $k(\bx)^{\langle\tau_1,\tau_2\rangle}$ and $k(\bx)^{\langle\tau_3,\tau_4\rangle}$ of $k(\bs)$ are given as \begin{align*} k(\bx)^{\langle\sigma\rangle}&=k(\bs)(u)=k(\bs)(\sqrt{(s^2-4t)/t}),\\ k(\bx)^{\langle\tau_1,\tau_2\rangle}&=k(\bs)(v)=k(\bs)(\sqrt{t}),\\ k(\bx)^{\langle\tau_3,\tau_4\rangle}&=k(\bs)\bigl((x_1+x_2)(x_1-x_2)\bigr)=k(\bs)(\sqrt{s^2-4t}). \end{align*} Note that $t=s^2/(u^2+4)$. From the above observation, we see the following three elementary lemmas (cf. \cite{Buc1910}, \cite{Gar28-2}, \cite{Les38}, \cite{Plo87}, \cite{KW89}, \cite[Chapter 2]{JLY02}):
\begin{lemma}\label{lemgenC4V4} Let $k$ be a field of char $k\neq 2$. Then we have\\ {\rm (i)} $f_{s,u}^{\C_4}(X)=X^4+sX^2+s^2/(u^2+4)\in k(s,u)[X]$ is $k$-generic for $\C_4\,;$\\ {\rm (ii)} $f_{s,v}^{\V_4}(X)=X^4+sX^2+v^2\in k(s,v)[X]$ is $k$-generic for $\V_4$. \end{lemma}
\begin{lemma} For $\ba=(a,b)\in M^2$ with $D_\ba\neq 0$, the polynomial $f_\ba^{\D_4}(X)=X^4+aX^2+b$ is reducible over $M$ if and only if either $\sqrt{a^2-4b}\in M$, $\sqrt{-a+2\sqrt{b}}\in M$ or $\sqrt{-a-2\sqrt{b}}\in M$. \end{lemma}
Note that \begin{align*} g_\ba^{\D_4}(X)&=\mathcal{RP}_{x_1+x_2,\D_4,f_\ba^{\D_4}}(X)=X^4+2aX^2+(a^2-4b)\\ &=\textstyle{\bigl(X-\sqrt{-a+2\sqrt{b}}\bigr)\bigl(X+\sqrt{-a+2\sqrt{b}}\bigr) \bigl(X-\sqrt{-a-2\sqrt{b}}\bigr)\bigl(X+\sqrt{-a-2\sqrt{b}}\bigr)}. \end{align*}
\begin{lemma}\label{lem2} For $\ba=(a,b)\in M^2$ with $D_\ba\neq 0$, assume that $f_\ba^{\D_4}(X)=X^4+aX^2+b$ is irreducible over $M$. Then the following assertions hold\,{\rm :}\\ $(\mathrm{i})$ $\sqrt{b}\in M$ if and only if $\Gal(f_\ba^{\D_4}/M)=\V_4$\,{\rm; }\\ $(\mathrm{ii})$ $\sqrt{(a^2-4b)/b}\in M$ if and only if $\Gal(f_\ba^{\D_4}/M)=\C_4$\,{\rm ;}\\ $(\mathrm{iii})$ $\sqrt{b}\not\in M$ and $\sqrt{(a^2-4b)/b}\not\in M$ if and only if $\Gal(f_\ba^{\D_4}/M)=\D_4$.\\ \end{lemma}
In the case of $\Gal(f_\ba^{\D_4}/M)=\D_4$, three quadratic extensions of $M$ are given as \begin{align} M(\sqrt{b}),\qquad M(\sqrt{(a^2-4b)/b}),\qquad M(\sqrt{a^2-4b}).\label{3quad} \end{align}
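For instance, over $M=\mathbb{Q}$, the irreducible polynomials $f_{0,1}^{\D_4}(X)=X^4+1$, $f_{5,5}^{\D_4}(X)=X^4+5X^2+5$ and $f_{0,2}^{\D_4}(X)=X^4+2$ have Galois groups $\V_4$, $\C_4$ and $\D_4$ respectively: indeed $\sqrt{b}=1\in\mathbb{Q}$ for the first, $(a^2-4b)/b=1$ for the second (its splitting field is $\mathbb{Q}(\zeta_5)$, cf. the example in Subsection \ref{seTran}), while neither condition holds for the third, whose three quadratic subfields as in (\ref{3quad}) are $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\sqrt{-1})$ and $\mathbb{Q}(\sqrt{-2})$.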
For the field $k(\bx,\by):=k(x_1,x_2,y_1,y_2)$, we take the interchanging involution \begin{align*} \iota\ :\ k(\bx,\by)\,\longrightarrow\,k(\bx,\by),\quad x_1\longmapsto y_1,\ y_1\longmapsto x_1,\ x_2\longmapsto y_2,\ y_2\longmapsto x_2 \end{align*} as in Section \ref{seS4A4}. For $\sigma=(1234)$, $\tau_1=(13)\in \cS_4$, we put $(\sigma',\tau_1'):=(\iota^{-1}\sigma\iota,\iota^{-1}\tau_1\iota)$; then $\sigma',\tau_1'\in\mathrm{Aut}_k(\by)$; and we write \begin{align*} \D_4=\langle\sigma,\tau_1\rangle,\qquad \D_4'=\langle\sigma',\tau_1'\rangle,\qquad \D_4''=\langle\sigma\sigma',\tau_1\tau_1'\rangle. \end{align*} Note that $\D_4''$ $(\cong \D_4)$ is a subgroup of $\D_4\times\D_4'$.
Take an $\cS_4\times \cS_4'$-primitive $\cS_4''$-invariant $P:=x_1y_1+x_2y_2+x_3y_3+x_4y_4$ as in Section \ref{seS4A4}. Put $f_{\bs,\bs'}^{\D_4}(X):=f_\bs^{\D_4}(X)f_{\bs'}^{\D_4}(X)$ where $(\bs,\bs')=(s,t,s',t')$. Then the $\cS_4\times \cS_4'$-relative $\cS_4''$-invariant resolvent polynomial $\R_{\bs,\bs'}(X)$ of $f_{\bs,\bs'}^{\D_4}$ by $P$ splits as \begin{align*} \R_{\bs,\bs'}(X):= \mathcal{RP}_{P,\cS_4\times \cS_4',f_{\bs,\bs'}^{\D_4}}(X) =\R_{\bs,\bs'}^{1}(X)\cdot\bigl(\R_{\bs,\bs'}^{2}(X)\bigr)^2 \end{align*} where \begin{align} \R_{\bs,\bs'}^{1}(X):\! &=\mathcal{RP}_{P,\D_4\times \D_4',f_{\bs,\bs'}^{\D_4}}(X)\nonumber\\ &=X^8-8s{s'}X^6+16(s^2{s'}^2+2t{s'}^2+2s^2{t'}-16t{t'})X^4\label{eqR1}\\ &\quad\ -128s{s'}(t{s'}^2+s^2{t'}-8t{t'})X^2+256(t{s'}^2-s^2{t'})^2,\nonumber\\ \R_{\bs,\bs'}^{2}(X):\! &=X^8-4s{s'}X^6+2(3s^2{s'}^2-4t{s'}^2-4s^2{t'}-16t{t'})X^4\nonumber\\ &\quad\ -4s{s'}(s^2-4t)({s'}^2-4{t'})X^2+(s^2-4t)^2({s'}^2-4{t'})^2.\nonumber \end{align}
Note that $P$ is regarded as $\D_4\times \D_4'$-primitive $\D_4''$-invariant in (\ref{eqR1}). The discriminant of $\R_{\bs,\bs'}^{1}(X)$ with respect to $X$ is given by \begin{align*} 2^{80}t^4{t'}^4(s^2-4t)^4({s'}^2-4{t'})^4(s^2{t'}-{s'}^2t)^2(s^2{s'}^2-4{s'}^2t-4s^2{t'})^4. \end{align*}
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we put \[ L_\ba:=\Spl_M f_\ba^{\D_4}(X),\quad G_\ba:=\Gal(f_\ba^{\D_4}/M),\quad G_{\ba,\ba'}:=\Gal(f_{\ba,\ba'}^{\D_4}/M). \] Using Theorem \ref{thfun}, we obtain an answer to $\mathbf{Int}(f_{\bs}^{\D_4}/M)$ via resolvent polynomial $\R_{\bs,\bs'}^1(X)$ as follows:
\begin{theorem}\label{thD4}
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, assume that both $f_\ba^{\D_4}(X)$ and $f_{\ba'}^{\D_4}(X)$ are irreducible over $M$ and $\#G_{\ba}\geq \#G_{\ba'}$. If $G_{\ba}=\D_4$ $($resp. $G_{\ba}=\C_4$ or $\V_4)$ then an answer to the intersection problem of $f_{s,t}^{\D_4}(X)$ is given by Table $3$ $($resp. by Table $4$$)$ according to ${\rm DT}(\R_{\ba,\ba'}^{1})$ and ${\rm DT}(\R_{\ba,\ba'})$.
\end{theorem}
\begin{center} {\rm Table} $3$\vspace*{3mm}\\ {\small
\begin{tabular}{|c|c|l|l|c|c|l|l|}\hline $G_\ba$& $G_{\ba'}$ & & GAP ID & $G_{\ba,{\ba'}}$ & & ${\rm DT}(\R_{\ba,\ba'}^{1})$ & ${\rm DT}(\R_{\ba,\ba'})$\\ \hline
& & (II-1) & $[64,226]$ & $\D_4\times \D_4$ & $L_\ba\cap L_{\ba'}=M$ & $8$ & $16,8$\\ \cline{3-8} & & (II-2) & $[32,27]$ & $(\V_4\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $16,8$\\ \cline{3-8} & & (II-3) & $[32,27]$ & $(\V_4\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $16,4,4$\\ \cline{3-8} & & (II-4) & $[32,27]$ & $(\V_4\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $8,8,4,4$\\ \cline{3-8} & & (II-5) & $[32,28]$ & $(\C_4\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $16,8$\\ \cline{3-8} & & (II-6) & $[32,34]$ & $(\C_4\times \C_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $16,4,4$\\ \cline{3-8}
& $\D_4$ & (II-7) & $[16,3]$ & $(\C_4\times \C_2)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=4$ & $8$ & $16,8$\\ \cline{3-8} & & (II-8) & $[16,3]$ & $(\C_4\times \C_2)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=4$ & $4,4$ & $16,4,4$\\ \cline{3-8} & & (II-9) & $[16,3]$ & $(\C_4\times \C_2)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=4$ & $4,4$ & $8,8,4,4$\\ \cline{3-8} & & (II-10) & $[16,11]$ & $\D_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=4$ & $4,4$ & $16,4,4$\\ \cline{3-8} & & (II-11) & $[16,11]$ & $\D_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=4$ & $2,2,2,2$ & $8^2,2^4$\\ \cline{3-8}
$\D_4$ & & (II-12) & $[8,3]$ & $\D_4$ & $L_\ba=L_{\ba'}$ & $4,2,2$ & $8,8,4,2,2$\\ \cline{3-8} & & (II-13) & $[8,3]$ & $\D_4$ & $L_\ba=L_{\ba'}$ & $2,2,2,1,1$ & $8,4^2,2^3,1^2$\\ \cline{2-8}
& & (II-14) & $[32,25]$ & $\D_4\times \C_4$ & $L_\ba\cap L_{\ba'}=M$ & $8$ & $16,8$\\ \cline{3-8} & \raisebox{-1.6ex}[0cm][0cm]{$\C_4$}
& (II-15) & $[16,3]$ & $(\C_4\times \C_2)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $16,4,4$\\ \cline{3-8} & & (II-16) & $[16,3]$ & $(\C_4\times \C_2)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $8,8,4,4$\\ \cline{3-8} & & (II-17) & $[16,4]$ & $\C_4\rtimes \C_4$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $16,8$\\ \cline{2-8}
& & (II-18) & $[32,46]$ & $\D_4\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $8$ & $8,8,8$\\ \cline{3-8} & & (II-19) & $[16,11]$ & $\D_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $8,8,8$\\ \cline{3-8} & \raisebox{-1.6ex}[0cm][0cm]{$\V_4$}
& (II-20) & $[16,11]$ & $\D_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $8,8,4,4$\\ \cline{3-8} & & (II-21) & $[16,11]$ & $\D_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $8,8,4,4$\\ \cline{3-8} & & (II-22) & $[8,3]$ & $\D_4$ & $L_\ba\supset L_{\ba'}$ & $8$ & $8,4,4,4,4$\\ \cline{3-8} & & (II-23) & $[8,3]$ & $\D_4$ & $L_\ba\supset L_{\ba'}$ & $4,4$ & $8,4,4,4,4$\\ \cline{1-8} \end{tabular} }\vspace*{5mm} \end{center}
\begin{center} {\rm Table} $4$\vspace*{3mm}\\ {\small
\begin{tabular}{|c|c|l|l|c|c|l|l|}\hline $G_\ba$& $G_{\ba'}$ & & GAP ID & $G_{\ba,{\ba'}}$ & $[L_\ba\cap L_{\ba'}:M]$ & ${\rm DT}(\R_{\ba,\ba'}^{1})$ & ${\rm DT}(\R_{\ba,\ba'})$\\ \hline
& & (III-1) & $[16,2]$ & $\C_4\times \C_4$ & $L_\ba\cap L_{\ba'}=M$ & $4,4$ & $16,4,4$\\ \cline{3-8} & $\C_4$ & (III-2) & $[8,2]$ & $\C_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $2,2,2,2$ & $8^2,2^4$\\ \cline{3-8} \raisebox{-1.6ex}[0cm][0cm]{$\C_4$} & & (III-3) & $[4,1]$ & $\C_4$ & $L_\ba=L_{\ba'}$ & $2^2,1^4$ & $4^4,2^2,1^4$\\ \cline{2-8}
& & (III-4) & $[16,10]$ & $\C_4\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $8$ & $8,8,8$\\ \cline{3-8} & $\V_4$ & (III-5) & $[8,2]$ & $\C_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $8,8,4,4$\\ \cline{3-8} & & (III-6) & $[8,2]$ & $\C_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $8,8,4,4$\\ \cline{1-8}
& & (III-7) & $[16,10]$ & $\V_4\times \C_4$ & $L_\ba\cap L_{\ba'}=M$ & $8$ & $8,8,8$\\ \cline{3-8} & $\C_4$
& (III-8) & $[8,2]$ & $\C_2\times \C_4$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $8$ & $8,8,4,4$\\ \cline{3-8} & & (III-9) & $[8,2]$ & $\C_2\times \C_4$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $8,8,4,4$\\ \cline{2-8}
& & (III-10) & $[16,14]$ & $\V_4\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $4,4$ & $4^6$\\ \cline{3-8} $\V_4$ & & (III-11) & $[8,5]$ & $\V_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,4$ & $4^4,2^4$\\ \cline{3-8} & \raisebox{-1.6ex}[0cm][0cm]{$\V_4$}
& (III-12) & $[8,5]$ & $\V_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $4,2,2$ & $4^4,2^4$\\ \cline{3-8} & & (III-13) & $[8,5]$ & $\V_4\times \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $2,2,2,2$ & $4^4,2^4$\\ \cline{3-8} & & (III-14) & $[4,2]$ & $\V_4$ & $L_\ba=L_{\ba'}$ & $4,2,2$ & $4^2,2^6,1^4$\\ \cline{3-8} & & (III-15) & $[4,2]$ & $\V_4$ & $L_\ba=L_{\ba'}$ & $2^2,1^4$ & $4^2,2^6,1^4$\\ \cline{1-8}
\end{tabular} }\vspace*{5mm} \end{center}
\begin{remark}\label{rem6} By comparing six fields $M(\sqrt{b})$, $M(\sqrt{(a^2-4b)/b})$, $M(\sqrt{a^2-4b})$, $M(\sqrt{b'})$, $M(\sqrt{(a'^2-4b')/b'})$ and $M(\sqrt{a'^2-4b'})$ each of which is a quadratic extension of $M$ or coincides with $M$ as in (\ref{3quad}) (cf. Lemma \ref{lem2}), we may distinguish all cases in Table $3$ and Table $4$ except for $\{$(II-7),\ldots,(II-13)$\}$ and $\{$(III-2),(III-3)$\}$. \end{remark}
\begin{remark} In the case of $G_\ba=G_{\ba'}=\D_4$, the decomposition type $4$, $2$, $2$ of $\R_{\ba,\ba'}^1(X)$ over $M$ means that the splitting fields $L_\ba$ and $L_{\ba'}$ coincide and the quotient fields $M[X]/(f_\ba^{\D_4}(X))$ and $M[X]/(f_{\ba'}^{\D_4}(X))$ are not $M$-isomorphic (cf. Theorem \ref{throotf} and Lemma \ref{lemRF}). \end{remark}
\subsection{Isomorphism problems of $f_\bs^{\D_4}(X)=X^4+sX^2+t$}\label{seIsoD4}
~\\
We treat the problems $\mathbf{Isom}(f_\bs^{\D_4}/M)$ and $\mathbf{Isom}^\infty(f_\bs^{\D_4}/M)$ more explicitly, because Theorem \ref{thD4} does not show directly whether there exists $\ba'\in M^2$ with $\ba'\neq\ba$ which satisfies $\mathrm{Spl}_M(f_\ba^{\D_4}(X))=\mathrm{Spl}_M(f_{\ba'}^{\D_4}(X))$, i.e., such that $\R_{\ba,\ba'}(X)$ has a linear factor or ${\rm DT}(\R_{\ba,\ba'}^1(X))$ is $4,2,2$.
The problem $\mathbf{Isom}(f_\bs^{\D_4}/M)$ was investigated by van der Ploeg \cite{Plo87} in the case $M=\mathbb{Q}$ and $\Gal(f_{\ba}^{\D_4}/\mathbb{Q})=\D_4$ (see Lemma \ref{lemD4rf} below) to explain Shanks' incredible identities \cite{Sha74}. We study $\mathbf{Isom}(f_\bs^{\D_4}/M)$ for general $M\supset k$ and $\Gal(f_{\ba}^{\D_4}/M)\leq \D_4$ via formal Tschirnhausen transformation which is given in Section \ref{sePre}.
For $f_\bs^{\D_4}(X)=X^4+sX^2+t$, the problem $\mathbf{Isom}^\infty(f_\bs^{\D_4}/M)$ has a trivial solution because for an arbitrary $c\in M$, $f_{a,b}^{\D_4}(X)$ and $f_{a',b'}^{\D_4}(X)=f_{ac^2,bc^4}^{\D_4}(X)=f_{a,b}^{\D_4}(X/c)\cdot c^4$ have the same splitting field over the infinite field $M$. Indeed, we have $\Spl_M f_{a,b}^{\D_4}(X)=\Spl_M f_{a',b'}^{\D_4}(X)$ for $a',b'\in M$ with $a^2b'={a'}^2b$ and $b'/b=c^4, c\in M$. Thus for $f_\bs^{\D_4}(X)$ we consider the refined question: \begin{center} {\bf ${\mathbf{Isom}^{\infty}}'(f_\bs^{\D_4}/M)$} : for a given $a,b\in M$, are there infinitely many\\ \hspace*{4.4cm} $a',b'\in M$ with $a^2b'-{a'}^2b\neq0$ or $b'/b\neq c^4$, $(c\in M)$\\ \hspace*{2.4cm} such that $\Spl_M f_{a,b}^{\D_4}(X)=\Spl_M f_{a',b'}^{\D_4}(X)$\,? \end{center}
We take the formal Tschirnhausen coefficients $u_i=u_i(\bx,\by)\in k(\bx,\by)$, $(i=0,1,2,3)$ which are defined in $(\ref{defu})$. Then the element $u_i$, $(i=0,1,2,3)$, becomes an $\cS_4\times\cS_4'$-primitive $\cS_4''$-invariant and we may take the corresponding resolvent polynomials
\[
F_{\bs,\bs'}^i(X):=\mathcal{RP}_{u_i,\cS_4\times\cS_4',f_{\bs,\bs'}^{\D_4}}(X),\quad
F_{\bs,\bs'}^{i,1}(X):=\mathcal{RP}_{u_i,\D_4\times\D_4',f_{\bs,\bs'}^{\D_4}}(X),\quad (i=0,1,2,3).
\]
In the latter case, we regard the $u_i$'s as $\D_4\times\D_4'$-primitive $\D_4''$-invariants. Now we put
\[
(d,d'):=(s^2-4t,{s'}^2-4t').
\]
Then by the definition we may evaluate the resolvent polynomials $F_{\bs,\bs'}^i(X)$, $(i=0,1,2,3)$, which split as
\begin{align*}
F_{\bs,\bs'}^0(X)&=X^8\Bigl(X^4+\frac{s^2{s'}}{2d}X^2+\frac{s^4d'}{16d^2}\Bigr)^4,&
F_{\bs,\bs'}^1(X)&=F_{\bs,\bs'}^{1,1}(X)\cdot (F_{\bs,\bs'}^{1,2}(X))^2,\\
F_{\bs,\bs'}^2(X)&=X^8\Bigl(X^4+\frac{2{s'}}{d}X^2+\frac{d'}{d^2}\Bigr)^4,&
F_{\bs,\bs'}^3(X)&=F_{\bs,\bs'}^{3,1}(X)\cdot (F_{\bs,\bs'}^{3,2}(X))^2
\end{align*}
where
\begin{align}
F_{\bs,\bs'}^{0,1}(X)&=F_{\bs,\bs'}^{2,1}(X)=X^8,\\
F_{\bs,\bs'}^{1,1}(X)&=X^8-\frac{2s{s'}(s^2-3t)}{dt}X^6\nonumber\\
&+\frac{s^6{s'}^2-6s^4{s'}^2t+9s^2{s'}^2t^2+2{s'}^2t^3+2s^6{t'}-12s^4t{t'}+18s^2t^2{t'}
-16t^3{t'}}{d^2t^2}X^4\nonumber\\
&-\frac{2s{s'}(s^2-3t)({s'}^2t^3+s^6{t'}-6s^4t{t'}+9s^2t^2{t'}-8t^3{t'})}{d^3t^3}X^2\nonumber\\
&+\frac{(-{s'}^2t^3+s^6{t'}-6s^4t{t'}+9s^2t^2{t'})^2}{d^4t^4},\nonumber\\
F_{\bs,\bs'}^{1,2}(X)&=X^8-\frac{s{s'}(s^2-3t)}{dt}X^6\nonumber\\
&+\frac{3s^6{s'}^2-18s^4{s'}^2t+27s^2{s'}^2t^2-4{s'}^2t^3-4s^6{t'}+24s^4t{t'}-36s^2t^2{t'}-
16t^3{t'}}{8d^2t^2}X^4\nonumber\\
&-\frac{s{s'}(s^2-3t)(s^2-t)^2d'}{16d^2t^3}X^2+\frac{(s^2-t)^4d'^2}{256d^2t^4},\nonumber\\
F_{\bs,\bs'}^{3,1}(X)&=X^8-\frac{2s{s'}}{dt}X^6
+\frac{s^2{s'}^2+2{s'}^2t+2s^2{t'}-16t{t'}}{d^2t^2}X^4
-\frac{2s{s'}({s'}^2t+s^2{t'}-8t{t'})}{d^3t^3}X^2\nonumber\\
&+\frac{(s^2{t'}-{s'}^2t)^2}{d^4t^4},\nonumber\\
F_{\bs,\bs'}^{3,2}(X)&=X^8-\frac{s{s'}}{dt}X^6
+\frac{3s^2{s'}^2-4{s'}^2t-4s^2{t'}-16t{t'}}{8d^2t^2}X^4
-\frac{s{s'}d'}{16d^2t^3}X^2+\frac{{d'}^2}{256d^2t^4}.\nonumber
\end{align}
The discriminants of $F_{\bs,\bs'}^{1,1}(X)$ and of $F_{\bs,\bs'}^{3,1}(X)$ with respect to $X$ are given by
\begin{align*}
\mathrm{disc}(F_{\bs,\bs'}^{1,1}(X))&=\frac{2^{24}{t'}^4(s^2-t)^8({s'}^2-4{t'})^4
(s^6{t'}-6s^4t{t'}+9s^2t^2{t'}-{s'}^2t^3)^2}
{(s^2-4t)^{24}t^{16}}\\
&\quad \cdot (s^6{s'}^2-6s^4{s'}^2t+9s^2{s'}^2t^2-4{s'}^2t^3-4s^6{t'}
+24s^4t{t'}-36s^2t^2{t'})^4,\\
\mathrm{disc}(F_{\bs,\bs'}^{3,1}(X))&=\frac{2^{24}{t'}^4({s'}^2-4{t'})^4
(s^2{t'}-{s'}^2t)^2(s^2{s'}^2-4{s'}^2t-4s^2{t'})^4}{(s^2-4t)^{24}t^{24}}.
\end{align*}
For $\ba,\ba'\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we assume that $f_{\ba}^{\D_4}(X)$ and $f_{\ba'}^{\D_4}(X)$ are irreducible over $M$ and we write
\[
G_\ba:=\Gal(f_\ba^{\D_4}/M),\quad G_{\ba'}:=\Gal(f_{\ba'}^{\D_4}/M).
\]
From Lemma \ref{lemMM}, two fields $M[X]/(f_\ba^{\D_4}(X))$ and $M[X]/(f_{\ba'}^{\D_4}(X))$ are $M$-isomorphic if and only if there exist $x,y,z,w\in M$ such that
\begin{align}
f_{\ba'}^{\D_4}(X)&=R'(x,y,z,w,a,b;X)\label{eqD4R}\\
&:=\mathrm{Resultant}_Y(f_{\ba}^{\D_4}(Y),X-(x+yY+zY^2+wY^3))\nonumber
\end{align}
where
\begin{align}
(x,y,z,w)=\omega_{f_{\ba,\ba'}}(\pi(u_0),\pi(u_1),\pi(u_2),\pi(u_3))\quad
\mathrm{for\ some}\ \ \pi\in\cS_4\times\cS_4'.
\end{align}
\begin{lemma}\label{lemTTD4}
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we assume that $\C_4\leq G_\ba, G_{\ba'}$. If $M[X]/(f_{\ba}^{\D_4}(X))\cong_M M[X]/(f_{\ba'}^{\D_4}(X))$ then $(x,z)=(0,0)$. Namely $f_{\ba'}^{\D_4}(X)$ is obtained from $f_{\ba}^{\D_4}(X)$ by a Tschirnhausen transformation of the form $yX+wX^3$.
\end{lemma}
\begin{proof}
By Theorem \ref{throotf}, two fields $M[X]/(f_\ba^{\D_4}(X))$ and $M[X]/(f_{\ba'}^{\D_4}(X))$ are $M$-isomorphic if and only if $\mathrm{DT}(F_{\ba,\ba'}^i(X))$ includes $1$. It follows from the assumption $\C_4\leq G_\ba, G_{\ba'}$ and Tables $3$ and $4$ of Theorem \ref{thD4} that $\mathrm{DT}(F_{\ba,\ba'}^i(X))$ includes $1$ if and only if $\mathrm{DT}(F_{\ba,\ba'}^{i,1}(X))$ includes $1$. Thus we see that $\pi\in\D_4\times\D_4'$ and $(x,z)=(0,0)$ by $F_{\ba,\ba'}^{0,1}(X)=F_{\ba,\ba'}^{2,1}(X)=X^8$.
We see that $(x,z)=(0,0)$ directly as follows: The coefficient of $X^3$ of $R'(x,y,z,w,a,b;X)$ equals $-2(2x-az)$. Hence by comparing the coefficient of $X^3$ in (\ref{eqD4R}), we see $x=az/2$. We also get \begin{align*} &R'(az/2,y,z,w,a,b;X)=\\ &X^4+\bigl((2a^3w^2-6abw^2-4a^2wy+8bwy+2ay^2-a^2z^2+4bz^2)/2\bigr)X^2\\ &\hspace*{5.5mm}-(a^2-4b)(a^2w^2-bw^2-2awy+y^2)zX+(16b^3w^4-32ab^2w^3y+16a^2bw^2y^2\\ &\hspace*{5.5mm}+32b^2w^2y^2-32abwy^3+16by^4+4a^5w^2z^2-28a^3bw^2z^2+48ab^2w^2z^2-8a^4wyz^2\\ &\hspace*{5.5mm}+48a^2bwyz^2-64b^2wyz^2+4a^3y^2z^2-16aby^2z^2+a^4z^4-8a^2bz^4+16b^2z^4)/16. \end{align*} Hence, by comparing the coefficient of $X$ in (\ref{eqD4R}), we have \[ (a^2-4b)(a^2w^2-bw^2-2awy+y^2)z=0. \] It follows from the assumption $D_\ba=16b(a^2-4b)^2\neq 0$ that $a^2-4b\neq 0$. If $(a^2w^2-bw^2-2awy+y^2)=0$ and $w\neq 0$, then $b=\bigl((aw-y)/w\bigr)^2$ is square in $M$. This contradicts $\C_4\leq G_\ba$. If $(a^2w^2-bw^2-2awy+y^2)=0$ and $w=0$ then we have $y=0$. This contradicts $\C_4\leq G_{\ba'}$ because $f_{\ba'}^{\D_4}(X)=R'(az/2,0,z,0,a,b;X)=(X^2+bz^2-(a^2z^2/4))^2$.
Hence we conclude that $(x,z)=(0,0)$ from the assumption $\C_4\leq G_\ba, G_{\ba'}$. \end{proof}
\begin{remark} From the proof, we see that Lemma \ref{lemTTD4} is not true in general for $G_\ba=\V_4$, because the case where $a^2w^2-bw^2-2awy+y^2=0$ and $w\neq 0$ occurs. This case corresponds to the case of (III-14) on Table $4$ of Theorem \ref{thD4}. \end{remark}
Van der Ploeg \cite{Plo87} showed the following result when $M=\mathbb{Q}$ and $G_\ba=\D_4$.
\begin{lemma}\label{lemD4rf} For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we assume that $\C_4\leq G_\ba, G_{\ba'}$. Then the following conditions are equivalent\,{\rm :}\\ {\rm (i)} $M[X]/(X^4+aX^2+b)\cong_M M[X]/(X^4+a'X^2+b')\,;$\\ {\rm (ii)} there exist $y,w\in M$ such that \[ a'= a^3w^2-3abw^2-2a^2wy+4bwy+ay^2,\quad b'=b(bw^2-awy+y^2)^2\,; \] {\rm (iii)} there exist $y',w\in M$ such that \[ a'=abw^2-4bw{y'}+a{y'}^2,\quad b'=b(bw^2-aw{y'}+{y'}^2)^2. \] Moreover, if $a^2b'-{a'}^2b\neq0$ or $b'/b$ is not a fourth power in $M$, then the conditions above are equivalent to\\ {\rm (iv)} there exist $u,w\in M$ such that \[ a'= (a^3-3ab-2a^2u+4bu+au^2)w^2,\quad b'=b(b-au+u^2)^2w^4. \] \end{lemma}
\begin{proof} The equivalence of (i) and (ii) follows from Lemma \ref{lemTTD4} and \begin{align*} &R'(0,y,0,w,a,b;X)\\ &=X^4+(a^3w^2-3abw^2-2a^2wy+4bwy+ay^2)X^2+b(bw^2-awy+y^2)^2. \end{align*} By putting $y':=aw-y$, the equivalence of (ii) and (iii) follows. If $a^2b'-{a'}^2b\neq0$ or $b'/b\neq c^4$, $(c\in M)$ then $w\neq 0$. Hence the condition (iv) is obtained by putting $u:=y/w$ in (ii). \end{proof}
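For instance, take $M=\mathbb{Q}$ and $\ba=(0,2)$, so that $f_\ba^{\D_4}(X)=X^4+2$ and $G_\ba=\D_4$. The choice $(y,w)=(1,1)$ in (ii) gives $(a',b')=(8,18)$: for any root $\theta$ of $X^4+2$, the element $\theta+\theta^3$ is a root of $X^4+8X^2+18$, and both polynomials have the splitting field $\mathbb{Q}(2^{1/4},\sqrt{-1})$ over $\mathbb{Q}$.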
By Lemma \ref{lemRF} and Lemma \ref{lemD4rf} we obtain an answer to $\mathbf{Isom}(f_\bs^{\D_4}/M)$:
\begin{proposition}[An answer to $\mathbf{Isom}(f_\bs^{\D_4}/M)$]\label{lemD4spl} For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we assume that $G_\ba=\D_4$. Then $\Spl_M (X^4+aX^2+b)=\Spl_M (X^4+a'X^2+b') $ if and only if there exist $p,q\in M$ such that either \begin{align*} a'&=ap^2-4bpq+abq^2,\quad b'=b(p^2-apq+bq^2)^2\quad \mathrm{or}\\ a'&=2(ap^2-4bpq+abq^2),\quad b'=(a^2-4b)(p^2-bq^2)^2. \end{align*} \end{proposition}
\begin{proof} It follows from Lemma \ref{lemRF} that $\Spl_M f_{\ba}^{\D_4}(X)=\Spl_M f_{\ba'}^{\D_4}(X)$ if and only if either $M[X]/(f_{\ba'}^{\D_4}(X))\cong_M M[X]/(f_{\ba}^{\D_4}(X))$ or $M[X]/(f_{\ba'}^{\D_4}(X))\cong_M M[X]/(f_{2a,a^2-4b}^{\D_4}(X))$. The former case is obtained by putting $(p,q):=(y',w)$ in Lemma \ref{lemD4rf} (iii) and the latter case is obtained by putting $(a,b):=(2a,a^2-4b)$ and $(p,q):=(y-aw,2w)$ in Lemma \ref{lemD4rf} (ii). \end{proof}
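For instance, $(p,q)=(1,0)$ in the second family gives $(a',b')=(2a,a^2-4b)$ and recovers the relation between $f_\ba^{\D_4}(X)$ and $g_\ba^{\D_4}(X)$ in Lemma \ref{lemRF}, while $(p,q)=(0,1)$ in the first family shows that $X^4+aX^2+b$ and $X^4+abX^2+b^3$ have the same splitting field over $M$ whenever $G_\ba=\D_4$.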
Finally by using Lemma \ref{lemD4rf}, we get an answer to ${\mathbf{Isom}^{\infty}}'(f_\bs^{\D_4}/M)$ over Hilbertian field $M$ as follows:
\begin{theorem}[An answer to ${\mathbf{Isom}^{\infty}}'(f_\bs^{\D_4}/M)$]\label{thD4Hil} Let $M\supset k$ be a Hilbertian field. For $\ba=(a,b)\in M^2$ with $D_\ba\neq 0$, there exist infinitely many $\ba'=(a',b')\in M^2$ which satisfy the condition that $b'/b$ is not a fourth power in $M$ and $\Spl_M f_\ba^{\D_4}(X)=\Spl_M f_{\ba'}^{\D_4}(X)$. \end{theorem}
\begin{proof}
By Lemma \ref{lemD4rf} (iv), for an arbitrary $u\in M$, $f_\ba^{\D_4}(X)$ and $f_{\ba'}^{\D_4}(X)$ with
\[
a'=(a^3-3ab-2a^2u+4bu+au^2),\quad b'=b(b-au+u^2)^2
\]
have the same splitting field over $M$. By Hilbert's irreducibility theorem, there exist infinitely many $u\in M$ such that both
\[
X^2-(b-au+u^2)=X^2-(u-a/2)^2+(a^2-4b)/4\quad\mathrm{and}\quad X^2+(b-au+u^2)
\]
are irreducible over $M$; indeed, $b-au+u^2$ is not a square in $M(u)$ because $a^2-4b\neq 0$, and $-(b-au+u^2)$ is not a square in $M(u)$ because char $k\neq 2$. For such infinitely many $u\in M$, we have $b-au+u^2\neq\pm c^2$ for every $c\in M$, and hence $b'/b=(b-au+u^2)^2$ is not a fourth power in $M$.
\end{proof}
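For instance, for $M=\mathbb{Q}$ and $\ba=(0,2)$, taking $w=1$ and $u=1,2,3,\ldots$ in Lemma \ref{lemD4rf} (iv) gives $(a',b')=(8,18),\,(16,72),\,(24,242),\ldots$, and $b'/b=(2+u^2)^2$ is never a fourth power in $\mathbb{Q}$ since $u^2+2$ is not a square for any integer $u\geq 1$.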
\section{The cases of $\C_4$ and of $\V_4$}\label{seC4V4}
We take $k$-generic polynomials
\begin{align*}
f_{s,u}^{\C_4}(X)&:=X^4+sX^2+\frac{s^2}{u^2+4}\in k(s,u)[X],\\
f_{s,v}^{\V_4}(X)&:=X^4+sX^2+v^2\in k(s,v)[X]
\end{align*}
for $\C_4$ and for $\V_4$ respectively (cf. Lemma \ref{lemgenC4V4}). The discriminant of $f_{s,u}^{\C_4}(X)$ (resp. $f_{s,v}^{\V_4}(X)$) with respect to $X$ is given by $16\, s^6u^4/(u^2+4)^3$ (resp. $16\,v^2(s+2v)^2(s-2v)^2$). We always assume that for $(a,c)\in M^2$ (resp. for $(a,d)\in M^2$), $f_{a,c}^{\C_4}(X)$ (resp. $f_{a,d}^{\V_4}(X)$) is well-defined and separable over $M$, i.e. $ac(c^2+4)\neq 0$ (resp. $d(a+2d)(a-2d)\neq 0$).
As in (\ref{eqR1}) of Section \ref{seD4}, we take $\D_4\times \D_4'$-primitive $\D_4''$-invariant $P:=x_1y_1+x_2y_2+x_3y_3+x_4y_4$ and $\D_4\times \D_4'$-relative $\D_4''$-invariant resolvent polynomial by $P$: \begin{align*} \R_{\bs,\bs'}^{1}(2X)/2^8\! &=\mathcal{RP}_{P,\D_4\times \D_4',f_{\bs,\bs'}^{\D_4}}(2X)/2^8\nonumber\\ &=X^8-2s{s'}X^6+(s^2{s'}^2+2t{s'}^2+2s^2{t'}-16t{t'})X^4\nonumber\\ &\quad\ -2s{s'}(t{s'}^2+s^2{t'}-8t{t'})X^2+(t{s'}^2-s^2{t'})^2. \end{align*}
By specializing parameters $(\bs,\bs')=(s,t,s',t')\mapsto (s,s^2/(u^2+4),s',{s'}^2/({u'}^2+4))$ of $\R_{\bs,\bs'}^{1}(2X)/2^8$, we have the following decomposition: \begin{align*} \omega_{f}(\R_{\bs,\bs'}^{1}(2X)/2^8) =\Bigl(X^4-ss'X^2+\frac{s^2s'^2(u+u')^2}{(u^2+4)(u'^2+4)}\Bigr) \Bigl(X^4-ss'X^2+\frac{s^2s'^2(u-u')^2}{(u^2+4)(u'^2+4)}\Bigr) \end{align*} where $f=f_{s,u}^{\C_4} f_{s',u'}^{\C_4}$. By Theorem \ref{thD4}, we obtain an answer to $\mathbf{Isom}(f_{s,u}^{\C_4}/M)$.
\begin{theorem}[An answer to $\mathbf{Isom}(f_{s,u}^{\C_4}/M)$]\label{thC4} For $\ba=(a,c)$, $\ba'=(a',c')\in M^2$ with $aa'cc'(c^2+4)({c'}^2+4)\neq 0$, we assume that $c\neq\pm c'$ and $c\neq \pm 4/c'$. Then the splitting fields of $f_{a,c}^{\C_4}(X)$ and of $f_{a',c'}^{\C_4}(X)$ over $M$ coincide if and only if either $f_{A,C^+}^{\C_4}(X)$ or $f_{A,C^-}^{\C_4}(X)$ has a linear factor over $M$ where \begin{align*} f_{A,C^\pm}^{\C_4}(X)=X^4-aa'X^2+\frac{a^2a'^2(c\pm c')^2}{(c^2+4)(c'^2+4)} \quad\mathrm{with}\quad A=-aa',\ C^\pm=\frac{cc'\mp 4}{c\pm c'}. \end{align*} \end{theorem}
\begin{remark}
The discriminant of $f_{A,C^\pm}^{\C_4}(X)$ with respect to $X$ equals
\[
\frac{16\,a^6a'^6(c\pm c')^2(cc'\mp 4)^4}{(c^2+4)^3(c'^2+4)^3}.
\]
We may assume that $c\neq \pm c'$ and $c\neq\pm 4/c'$ without loss of generality, as explained in Remark \ref{remGir}.
\end{remark}
\begin{example} For $f_{a,c}^{\C_4}(X)=X^4+aX^2+a^2/(c^2+4)$, we first note that \[ \Spl_M f_{a,c}^{\C_4}(X)=\Spl_M f_{a,-c}^{\C_4}(X)\quad\mathrm{and}\quad \Spl_M f_{a,c}^{\C_4}(X)=\Spl_M f_{ae^2,c}^{\C_4}(X)\quad \mathrm{for}\quad a,c,e\in M. \] By Theorem \ref{thC4}, we have \begin{align} \Spl_M f_{a,c}^{\C_4}(X)=\Spl_M f_{(c^2+4)/a,c}^{\C_4}(X)\quad\mathrm{for}\quad f_{(c^2+4)/a,c}^{\C_4}(X)=X^4+\frac{c^2+4}{a}X^2+\frac{c^2+4}{a^2}\label{eqC41} \end{align} because we have $f_{A,C^+}^{\C_4}(X)=(X-2)(X+2)(X-c)(X+c)$ for $(a',c')=((c^2+4)/a,c)$. Although Theorem \ref{thC4} is not applicable to the case of $c'=4/c$, it follows from $\Spl_M(X^4+aX^2+b)=\Spl_M(X^4+2aX^2+a^2-4b)$ that \begin{align} \Spl_M f_{a,c}^{\C_4}(X)=\Spl_M f_{2a,4/c}^{\C_4}(X)\quad\mathrm{for}\quad f_{2a,4/c}^{\C_4}(X)=X^4+2aX^2+\frac{a^2c^2}{c^2+4}\label{eqC42}. \end{align} By (\ref{eqC41}) and (\ref{eqC42}), we also see the polynomials \[ f_{a,c}^{\C_4}(X)\quad\mathrm{and}\quad f_{2(c^2+4)/a,4/c}^{\C_4}(X)=X^4+\frac{2(c^2+4)}{a}X^2+\frac{c^2(c^2+4)}{a^2} \] have the same splitting field over $M$. \end{example}
\begin{example}
We take $M=\mathbb{Q}$ and the simplest quartic polynomial
\[
h_n(X)=X^4-nX^3-6X^2+nX+1\in \mathbb{Q}[X],\quad (n\in\mathbb{Z})
\]
with discriminant $4(n^2+16)^3$ whose Galois group over $\mathbb{Q}$ is isomorphic to $\C_4$ except for $n=0,\pm 3$ (cf. for example, \cite{Gra77}, \cite{Gra87}, \cite{Laz91}, \cite{LP95}, \cite{Kim04}, \cite{HH05}, \cite{Duq07}, \cite{Lou07}, and the references therein). By Lemma \ref{lemTT}, we see that $h_n(X)$ and
\[
H_n(X):=f_{-(n^2+16),n/2}^{\C_4}(X)=X^4-(n^2+16)X^2+4(n^2+16)
\]
have the same splitting field over $\mathbb{Q}$; indeed, the resolvent polynomial $\mathcal{RP}_{\theta,\cS_4,h_n}(X)=X^3+6X^2-(n^2+4)X-2(n^2+12)$ has the root $c=-2$, which gives $(a,b)=(-(n^2+16),4(n^2+16))$ in Lemma \ref{lemTT}. For $n\in\mathbb{Z}$, we may assume $1\leq n$ because $\Spl_\mathbb{Q} H_n(X) =\Spl_\mathbb{Q} H_{-n}(X)$ and $H_n(X)$ is reducible over $\mathbb{Q}$ only for $n=0$, $\pm 3$. For $1\leq m<n$, applying Theorem \ref{thC4} to $H_m(X)$, $H_n(X)$ with $(a,c,a',c')=(-(m^2+16),m/2,-(n^2+16),n/2)$, we see
\begin{align*}
f_{A,C^+}^{\C_4}(X)&=(X-60)(X+60)(X-80)(X+80)\quad\mathrm{for}\quad (m,n)=(2,22),\\
f_{A,C^-}^{\C_4}(X)&=(X-255)(X+255)(X-340)(X+340)\quad\mathrm{for}\quad (m,n)=(1,103),\\
f_{A,C^+}^{\C_4}(X)&=(X-2080)(X+2080)(X-4992)(X+4992)\quad\mathrm{for}\quad (m,n)=(4,956).
\end{align*}
Hence we get
\[
\Spl_\mathbb{Q} h_m(X)=\Spl_\mathbb{Q} h_n(X)\quad \mathrm{for}\quad (m,n)\in \{(1,103),(2,22),(4,956)\}.
\]
Only for the two cases $(m,n)=(1,16)$, $(2,8)$ is Theorem \ref{thC4} not applicable; however, it works after a suitable Tschirnhausen transformation of $H_n(X)$ as in Remark \ref{remGir}. Indeed we may use (\ref{eqC42}) in the previous example.
We checked by Theorem \ref{thC4} that for integers $m,n$ in the range $1\leq m<n\leq 10^5$, $f_{A,C^\pm}^{\C_4}(X)$ has a linear factor over $\mathbb{Q}$, i.e. $\Spl_\mathbb{Q} h_m(X)=\Spl_\mathbb{Q} h_n(X)$, only for $(m,n)=(1,103),(2,22),(4,956)$.
\end{example}
\begin{remark} In the case where the field $M$ includes a primitive $4$th root $i:=e^{2\pi \sqrt{-1}/4}$ of unity, the polynomial $g_t^{\C_4}(X):=X^4-t\in k(t)[X]$ is $k$-generic for $\C_4$ by Kummer theory. Indeed we see that the polynomials $f_{a,c}^{\C_4}(X)=X^4+aX^2+a^2/(c^2+4)$ and \[ g_{a^2(c-2i)/(c+2i)}^{\C_4}(X)=X^4-\frac{a^2(c-2i)}{c+2i} \] are Tschirnhausen equivalent over $M$ because \[ \mathrm{Resultant}_X \Bigl(f_{a,c}^{\C_4}(X), Y-\Bigl(\frac{(c+i)(c-2i)}{c}X+\frac{c^2+4}{ac}X^3\Bigr)\Bigr)=g_{a^2(c-2i)/(c+2i)}^{\C_4}(Y) \] (we may assume that $ac\neq 0$ since $f_{0,c}^{\C_4}(X)=X^4$ and $f_{a,0}^{\C_4}=(X^2+a/2)^2$). In this case, for $b,b'\in M$ with $b\cdot b'\neq 0$, the splitting fields of $g_b^{\C_4}(X)$ and of $g_{b'}^{\C_4}(X)$ over $M$ coincide if and only if the polynomial $(X^4-bb')(X^4-b^3b')$ has a linear factor over $M$. \end{remark}
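As an illustration of the last criterion, take $M=\mathbb{Q}(i)$, $b=2$ and $b'=8$: then $X^4-bb'=X^4-16$ has the linear factor $X-2$ over $M$, and indeed $\Spl_M(X^4-2)=\Spl_M(X^4-8)=M(2^{1/4})$ since $8^{1/4}=2^{3/4}$ and $2^{1/4}$ generate the same field over $M$.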
Finally let us check the field isomorphism problem ${\bf Isom}(f_{s,v}^{\V_4}/M)$. By specializing parameters $(\bs,\bs')=(s,t,s',t')\mapsto (s,v^2,s',v'^2)$ of $\R_{\bs,\bs'}^{1}(X)$, we have \begin{align*} \R_{s,v,s',v'}^{\V_4}(X)&:=\omega_{f}(\R_{\bs,\bs'}^{1}(2X)/2^8)\\ &=\bigl(X^4-(ss'+4vv')X^2+(sv'+s'v)^2\bigr)\\ &\ \cdot\bigl(X^4-(ss'-4vv')X^2+(sv'-s'v)^2\bigr) \end{align*} where $f=f_{s,v}^{\V_4} f_{s',v'}^{\V_4}$.
As in the case of $f_{s,u}^{\C_4}(X)$, if $\R_{a,d,a',d'}^{\V_4}(X)$ has a (simple) linear factor over $M$ for $a,d,a',d'\in M$ then $\Spl_M f_{a,d}^{\V_4}(X)=\Spl_M f_{a',d'}^{\V_4}(X)$. However, the converse does not hold, for group-theoretical reasons (see Table $4$ of Theorem \ref{thD4}).
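For instance, $f_{-10,1}^{\V_4}(X)=X^4-10X^2+1$ and $f_{-28,10}^{\V_4}(X)=X^4-28X^2+100$ have the roots $\pm\sqrt{2}\pm\sqrt{3}$ and $\pm\sqrt{2}\pm2\sqrt{3}$ respectively, so both have the splitting field $\mathbb{Q}(\sqrt{2},\sqrt{3})$; accordingly the first factor of $\R_{-10,1,-28,10}^{\V_4}(X)$ is $X^4-320X^2+16384=(X^2-64)(X^2-256)$, which has linear factors over $\mathbb{Q}$.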
Although we could not get an answer to ${\bf Isom}(f_{s,v}^{\V_4}/M)$ via $\R_{s,v,s',v'}^{\V_4}(X)$, the answer can be obtained easily by comparing quadratic subfields (see Remark \ref{rem6}).
\section{Reducible cases}\label{seRed}
In this section, we treat reducible cases. We take the $k$-generic polynomial \[ f_{s,t}^{\cS_4}(X)=X^4+sX^2+tX+t\in k(s,t)[X] \] for $\cS_4$ with discriminant \[ D_{s,t}:=t(16s^4 - 128s^2t - 4s^3t + 256t^2 + 144st^2 - 27t^3) \] and the $\cS_4\times \cS_4'$-relative $\cS_4''$-invariant resolvent polynomial $\R_{\bs,\bs'}(X)$ of the product $f_{\bs,\bs'}^{\cS_4}(X):=f_{s,t}^{\cS_4}(X)$ $\cdot$ $f_{s',t'}^{\cS_4}(X)$ by $P:=x_1y_1+x_2y_2+x_3y_3+x_4y_4$ as in (\ref{polyR}) of Section \ref{seS4A4}.
For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, we put \[ L_\ba:=\Spl_M f_\ba^{\cS_4}(X),\quad G_\ba:=\Gal(f_\ba^{\cS_4}/M), \quad G_{\ba,\ba'}:=\Gal(f_{\ba,\ba'}^{\cS_4}/M). \]
We assume that $G_\ba=\cS_3$ or $\C_3$ and omit the cases where $\#G_\ba\leq 2$ or $\#G_{\ba'}\leq 2$.
\begin{theorem}\label{thred} For $\ba=(a,b)$, $\ba'=(a',b')\in M^2$ with $D_\ba\cdot D_{\ba'}\neq 0$, assume that $G_{\ba}=\cS_3$ or $\C_3$, and $\#G_{\ba'}\geq 3$. If $G_{\ba}=\cS_3$ $($resp. $G_{\ba}=\C_3$$)$ then an answer to the intersection problem of $f_{s,t}^{\cS_4}(X)$ is given by ${\rm DT}(\R_{\ba,\ba'})$ as Table $5$ $($resp. Table $6$$)$ shows. \end{theorem}
\begin{center} {\rm Table} $5$\vspace*{3mm}\\ {\small
\begin{tabular}{|c|c|l|l|c|l|l|}\hline
$G_\ba$& $G_{\ba'}$ & & GAP ID & $G_{\ba,{\ba'}}$ & & ${\rm DT}(\R_{\ba,\ba'})$ \\ \hline
& & (IV-1) & $[144,183]$ & $\cS_3\times \cS_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} & $\cS_4$ & (IV-2) & $[72,43]$ & $(\C_3\times \A_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{3-7} & & (IV-3) & $[24,12]$ & $\cS_4$ & $L_\ba\subset L_{\ba'}$ & $12,8,4$\\ \cline{2-7}
& $\A_4$ & (IV-4) & $[72,44]$ & $\cS_3\times \A_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7}
& & (IV-5) & $[48,38]$ & $\cS_3\times \D_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} & \raisebox{-1.6ex}[0cm][0cm]{$\D_4$} & (IV-6) & $[24,6]$ & $\D_{12}$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{3-7} \raisebox{-1.6ex}[0cm][0cm]{$\cS_3$} & & (IV-7) & $[24,8]$ & $(\C_3\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $24$\\ \cline{3-7} & & (IV-8) & $[24,8]$ & $(\C_3\times \V_4)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{2-7}
& \raisebox{-1.6ex}[0cm][0cm]{$\C_4$} & (IV-9) & $[24,5]$ & $\cS_3\times \C_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} & & (IV-10) & $[12,1]$ & $\C_3\rtimes \C_4$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{2-7}
& & (IV-11) & $[24,14]$ & $\cS_3\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{3-7} & & (IV-12) & $[24,14]$ & $\cS_3\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \cline{3-7} & $\V_4$ & (IV-13) & $[12,4]$ & $\D_6$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,12$\\ \cline{3-7} & & (IV-14) & $[12,4]$ & $\D_6$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $12,6,6$\\ \cline{3-7} & & (IV-15) & $[12,4]$ & $\D_6$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $6,6,6,6$\\ \hline\hline
& & (IV-16) & $[36,10]$ & $\cS_3\times \cS_3$ & $L_\ba\cap L_{\ba'}=M$ & $18,6$\\ \cline{3-7} \raisebox{-1.6ex}[0cm][0cm]{$\cS_3$} & $\cS_3$ & (IV-17) & $[18,4]$ & $(\C_3\times \C_3)\rtimes \C_2$ & $[L_\ba\cap L_{\ba'}:M]=2$ & $9,9,3,3$\\ \cline{3-7} & & (IV-18) & $[6,1]$ & $\cS_3$ & $L_\ba=L_{\ba'}$ & $6^2,3^3,2,1$\\ \cline{2-7}
& $\C_3$ & (IV-19) & $[18,3]$ & $\cS_3\times \C_3$ & $L_\ba\cap L_{\ba'}=M$ & $18,6$\\ \cline{1-7}
\end{tabular} }\vspace*{5mm} \end{center}
\begin{center} {\rm Table} $6$\vspace*{3mm}\\ {\small
\begin{tabular}{|c|c|l|l|c|l|l|}\hline
$G_\ba$& $G_{\ba'}$ & & GAP ID & $G_{\ba,{\ba'}}$ & & ${\rm DT}(\R_{\ba,\ba'})$ \\ \hline
& $\cS_4$ & (V-1) & $[72,42]$ & $\C_3\times \cS_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7}
& \raisebox{-1.6ex}[0cm][0cm]{$\A_4$} & (V-2) & $[36,11]$ & $\C_3\times \A_4$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \cline{3-7} \raisebox{-1.6ex}[0cm][0cm]{$\C_3$} & & (V-3) & $[12,3]$ & $\A_4$ & $L_\ba\subset L_{\ba'}$ & $12,4,4,4$\\ \cline{2-7} & $\D_4$ & (V-4) & $[24,10]$ & $\C_3\times \D_4$ & $L_\ba\cap L_{\ba'}=M$ & $24$\\ \cline{2-7} & $\C_4$ & (V-5) & $[12,2]$ & $\C_{12}$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \cline{2-7} & $\V_4$ & (V-6) & $[12,5]$ & $\C_3\times \V_4$ & $L_\ba\cap L_{\ba'}=M$ & $12,12$\\ \hline\hline
& $\cS_3$ & (V-7) & $[18,3]$ & $\C_3\times \cS_3$ & $L_\ba\cap L_{\ba'}=M$ & $18,6$\\ \cline{2-7} $\C_3$& \raisebox{-1.6ex}[0cm][0cm]{$\C_3$} & (V-8) & $[9,2]$ & $\C_3\times \C_3$ & $L_\ba\cap L_{\ba'}=M$ & $9,9,3,3$\\ \cline{3-7} & & (V-9) & $[3,1]$ & $\C_3$ & $L_\ba=L_{\ba'}$ & $3^7,1^3$\\ \cline{1-7} \end{tabular} }\vspace*{5mm} \end{center}
For example, if we take $\ba=(1,-1)$, $\ba'=(-1,1)$ and $M=\mathbb{Q}$ then we have $f_\ba^{\cS_4}(X)=(X-1)(X^3+X^2+2X+1)$ and $f_{\ba'}^{\cS_4}(X)=(X+1)(X^3-X^2+1)$. We see that $\Spl_\mathbb{Q} f_\ba^{\cS_4}(X)=\Spl_\mathbb{Q} f_{\ba'}^{\cS_4}(X)$ because $\mathrm{DT}(\R_{\ba,\ba'})$ is given by $6^2, 3^3, 2, 1$.
{\small \vspace*{1mm} \begin{tabular}{ll} Akinari HOSHI & Katsuya MIYAKE\\ Department of Mathematics & Department of Mathematics\\ Rikkyo University & School of Fundamental Science and Engineering\\ 3--34--1 Nishi-Ikebukuro Toshima-ku & Waseda University\\ Tokyo, 171--8501, Japan & 3--4--1 Ohkubo Shinjuku-ku\\ E-mail: \texttt{[email protected]} & Tokyo, 169--8555, Japan\\ & E-mail: \texttt{[email protected]} \end{tabular} }
\end{document} | arXiv |
\begin{document}
\title{Extending structures for alternative bialgebras}
\setcounter{section}{0}
\begin{abstract} We introduce the concept of braided alternative bialgebras. The theory of unified products for alternative bialgebras is developed. As an application, the extending problem for alternative bialgebras is solved by using a non-abelian cohomology theory. \par
{\bf 2020 MSC:} 17D05, 16T10
\par
{\bf Keywords:} Braided alternative bialgebras, cocycle bicrossproducts, extending structures, non-abelian cohomology. \end{abstract}
\tableofcontents
\section{Introduction}
The concept of alternative bialgebras was introduced by Goncharov in \cite{Gon}; it is related to the classical Yang--Baxter equation on alternative algebras. This new type of algebraic structure was further studied by Ni and Bai in \cite{NB} and by Sun in \cite{Sun}.
The theory of extending structures for many types of algebras was well developed by A. L. Agore and G. Militaru in \cite{AM1,AM2,AM3,AM4,AM5,AM6}. Let $A$ be an algebra and $E$ a vector space containing $A$ as a subspace. The extending problem is to describe and classify all algebra structures on $E$ such that $A$ is a subalgebra of $E$. They showed that, associated to any extending structure of $A$ through a complement space $V$, there is a unified product on the direct sum space $E\cong A\oplus V$. Recently, extending structures for alternative algebras and pre-alternative algebras were studied in \cite{Z5}. Extending structures for 3-Lie algebras, Lie bialgebras, infinitesimal bialgebras and Lie conformal superalgebras were studied in \cite{Hong,Z2,Z3,Z4,ZCY}.
As a continuation of our papers \cite{Z5} and \cite{Z3,Z4}, the aim of this paper is to study extending structures for alternative bialgebras. For this purpose, we introduce the concept of braided alternative bialgebras and then give the construction of cocycle bicrossproducts for alternative bialgebras. We show that this new concept and construction play a key role in the extending problem for alternative bialgebras. As an application, we solve the extending problem for alternative bialgebras by using a non-abelian cohomology theory.
This paper is organized as follows. In Section 2, we recall some definitions and fix some notations. In Section 3, we introduce the concept of braided alternative bialgebras and prove the bosonisation theorem associating braided alternative bialgebras to ordinary alternative bialgebras. In Section 4, we define the notion of matched pairs of braided alternative bialgebras. In Section 5, we construct cocycle bicrossproduct alternative bialgebras through two generalized braided alternative bialgebras. In Section 6, we study the extending problem for alternative bialgebras and prove that its solutions can be classified by a non-abelian cohomology theory.
Throughout the rest of this paper, all vector spaces are over a fixed field of characteristic zero. An algebra or a coalgebra is denoted by $(A, \cdot)$ or $(A, \Delta)$. The identity map of a vector space $V$ is denoted by $\mathrm{id}_V: V\to V$ or simply $\mathrm{id}: V\to V$. The flip map $\tau: V\otimes V\to V\otimes V$ is defined by $\tau(u\otimes v)=v\otimes u$ for all $u, v\in V$.
\section{Preliminaries}
\begin{definition} An alternative algebra is a vector space $A$ with a multiplication $\cdot : A\otimes A\rightarrow A :(x, y)\mapsto x\cdot y$ such that the following identities hold: \begin{equation}\label{eq:LB1}
({a}, {b}, {c})=-({b}, {a}, {c}), \quad ({a}, {b}, {c})=-({a}, {c}, {b}),
\end{equation}
where $({a}, {b}, {c})=({a}\cdot {b})\cdot {c}-{a}\cdot({b}\cdot {c})$ is the associator of the elements ${a}, {b}, {c}\in A$.
In the following, we always omit ``$\cdot$'' in calculations for convenience.
Note that the above identities are equivalent to the following identities: \begin{eqnarray}\label{eq:LB2}
&& ({a} {b}){c}-{a}({b} {c})+({b} {a}){c}-{b}({a} {c})=0,\\ \label{eq:LB3} && ({a} {b}){c}-{a}({b} {c})+({a} {c}){b}-{a}({c} {b})=0.
\end{eqnarray} \end{definition} \begin{definition} An alternative coalgebra $A$ is a vector space equipped with a bilinear coproduct $\Delta:A\rightarrow A\otimes A$ such that the following conditions are satisfied, \begin{equation}\label{eq:LB4}
(\Delta\otimes\mathrm{id})\Delta(a)-(\mathrm{id}\otimes\Delta)\Delta(a)=-\tau_{12}\big((\Delta\otimes\mathrm{id})\Delta(a)-(\mathrm{id}\otimes\Delta)\Delta(a)\big),
\end{equation}
\begin{equation}\label{eq:LB5}
(\Delta\otimes\mathrm{id})\Delta(a)-(\mathrm{id}\otimes\Delta)\Delta(a)=-\tau_{23}\big((\Delta\otimes\mathrm{id})\Delta(a)-(\mathrm{id}\otimes\Delta)\Delta(a)\big),
\end{equation} where $\tau_{12}(a\otimes b\otimes c)=b\otimes a\otimes c$ and $\tau_{23}(a\otimes b\otimes c)=a\otimes c\otimes b$. We denote this alternative coalgebra by $(A,\Delta)$. \end{definition} \begin{definition}\cite{Gon} \label{dfnlb} An alternative bialgebra is a vector space $A$ equipped simultaneously with a multiplication $\cdot : A\otimes A\rightarrow A$ and a comultiplication $\Delta:A\rightarrow A\otimes A$ such that $(A,\cdot)$ is an alternative algebra, $(A,\Delta)$ is an alternative coalgebra and the following conditions are satisfied: \begin{equation}\label{eq:LB6}
\Delta(ab)=\sum (a{}_{1} b\otimes a{}_{2}+a{}_{2} b\otimes a{}_{1}-a{}_{2}\otimes b a{}_{1})
+\sum (b{}_{1}\otimes a b{}_{2}+b{}_{1}\otimes b{}_{2} a-a b{}_{1}\otimes b{}_{2}),
\end{equation}
\begin{equation}\label{eq:LB7}
\Delta(ba)+\tau\Delta(b a)=\sum(a{}_{1}\otimes b a{}_{2}+b a{}_{2}\otimes a{}_{1})
+\sum (b{}_{1} a\otimes b{}_{2}+b{}_{2}\otimes b{}_{1} a).
\end{equation} \end{definition}
We denote it by $(A, \cdot , \Delta )$. We can also write the above equations as ad-actions on tensors by \begin{equation}\label{eq:LB8} \Delta(ab)=\Delta (a)\bullet b+\tau\Delta (a)\bullet b- b\bullet\tau\Delta(a)+a\bullet \Delta(b)+[\Delta(b),a],
\end{equation}
\begin{equation}\label{eq:LB9} \Delta(ba)+\tau\Delta(ba)=b\bullet \Delta (a)+b\cdot\tau\Delta (a)+ \Delta(b)\bullet a+\tau \Delta(b)\cdot a,
\end{equation} where $a\cdot \Delta(b):=\sum a b{}_{1}\otimes b{}_{2}$ , $\Delta (a)\cdot b:= \sum a{}_{1}\otimes a{}_{2} b$ , $a\bullet \Delta(b):=\sum b{}_{1}\otimes a b{}_{2}$ , $\Delta(a)\bullet b:=\sum a{}_{1} b\otimes a{}_{2}$ and $[\Delta(b),a]:=\Delta(b)\cdot a-a\cdot\Delta(b)$.
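For the reader's convenience, let us record how the compact forms \eqref{eq:LB8} and \eqref{eq:LB9} unwind under the above notation; the following is only a direct expansion of the definitions and contains no new conditions:
\begin{eqnarray*}
\Delta (a)\bullet b+\tau\Delta (a)\bullet b- b\bullet\tau\Delta(a)&=&\sum (a{}_{1} b\otimes a{}_{2}+a{}_{2} b\otimes a{}_{1}-a{}_{2}\otimes b a{}_{1}),\\
a\bullet \Delta(b)+[\Delta(b),a]&=&\sum (b{}_{1}\otimes a b{}_{2}+b{}_{1}\otimes b{}_{2} a-a b{}_{1}\otimes b{}_{2}),
\end{eqnarray*}
so that \eqref{eq:LB8} is exactly \eqref{eq:LB6}; in the same way \eqref{eq:LB9} is exactly \eqref{eq:LB7}.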
\begin{definition} Let ${H}$ be an alternative algebra and $V$ be a vector space. Then $V$ is called an ${H}$-bimodule if there is a pair of linear maps $ \triangleright :{H}\otimes V \to V,(x,v) \to x \triangleright v$ and $\triangleleft :V\otimes {H} \to V,(v,x) \to v \triangleleft x$ such that the following conditions hold: \begin{eqnarray} \label{(10)} &&(xy) \triangleright v - x \triangleright (y \triangleright v)+(yx)\triangleright v-y\triangleright (x\triangleright v)=0,\\
\label{(11)} && v \triangleleft (xy) -(v\triangleleft x)\triangleleft y +x\triangleright(v\triangleleft y)-(x\triangleright v)\triangleleft y=0,\\
\label{(12)} &&(xy)\triangleright v-x\triangleright(y\triangleright v)+(x\triangleright v)\triangleleft y-x\triangleright(v\triangleleft y) = 0,\\ \label{(13)} &&(v\triangleleft x)\triangleleft y - v\triangleleft (xy)+(v\triangleleft y)\triangleleft x-v\triangleleft (yx)=0, \end{eqnarray} for all $x,y\in {H}$ and $v\in V.$ \end{definition} The category of bimodules over $H$ is denoted by ${}_{H}\mathcal{M}{}_{H}$.
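For example, $H$ itself is an $H$-bimodule with both actions given by the multiplication of $H$, namely $x\triangleright v=xv$ and $v\triangleleft x=vx$ for $x, v\in H$; in this case, as one checks directly, the conditions \eqref{(10)}--\eqref{(13)} reduce to instances of the alternative identities \eqref{eq:LB2} and \eqref{eq:LB3}.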
\begin{definition} Let ${H}$ be an alternative coalgebra, $V$ be a vector space. Then $V$ is called an ${H}$-bicomodule if there is a pair of linear maps $\phi:V\to {H}\otimes V$ and $\psi:V\to V\otimes {H}$ such that the following conditions hold: \begin{eqnarray} \label{(14)} &&\left(\Delta_{H} \otimes \mathrm{id} _{V}\right)\phi(v)-(\mathrm{id} _{H}\otimes \phi)\phi(v)=-\tau_{12}\big((\Delta_{H}\otimes\mathrm{id}_{V})\phi(v)-\left(\mathrm{id} _{H} \otimes \phi\right) \phi(v)\big),\\ \label{(15)} &&\left(\psi \otimes \mathrm{id} _{H}\right)\psi(v)-(\mathrm{id} _{V}\otimes \Delta_{H})\psi(v)=-\tau_{12}\big((\phi\otimes\mathrm{id}_{H})\psi(v)-\left(\mathrm{id} _{H} \otimes \psi\right) \phi(v)\big),\\ \label{(16)} &&\left(\Delta_{H} \otimes \mathrm{id} _{V}\right)\phi(v)-(\mathrm{id} _{H}\otimes \phi)\phi(v)=-\tau_{23}\big((\phi\otimes\mathrm{id}_{H})\psi(v)-\left(\mathrm{id} _{H} \otimes \psi\right) \phi(v)\big),\\ \label{(17)} &&\left(\psi \otimes \mathrm{id} _{H}\right)\psi(v)-(\mathrm{id} _{V}\otimes \Delta_{H})\psi(v)=-\tau_{23}\big((\psi\otimes\mathrm{id}_{H})\psi(v)-\left(\mathrm{id} _{V} \otimes \Delta_{H}\right) \psi(v)\big). \end{eqnarray} If we denote by $\phi(v)=v{}_{(-1)}\otimes v{}_{(0)}$ and $\psi(v)=v{}_{(0)}\otimes v{}_{(1)}$, then the above equations can be written as \begin{eqnarray}
&&\Delta_{H}\left(v_{(-1)}\right) \otimes v_{(0)}-v{}_{(-1)}\otimes\phi(v{}_{(0)})=-\tau_{12}\big(\Delta_{H}(v{}_{(-1)})\otimes v{}_{(0)}-v_{(-1)} \otimes \phi\left(v_{(0)}\right)\big),\\
&&\psi(v{}_{(0)})\otimes v{}_{(1)}-v_{(0)} \otimes \Delta_{H}\left(v_{(1)}\right)=-\tau_{12}\big(\phi\left(v_{(0)}\right) \otimes v_{(1)}-v{}_{(-1)}\otimes\psi(v{}_{(0)})\big),\\
&&\Delta_{H}\left(v_{(-1)}\right) \otimes v_{(0)}-v{}_{(-1)}\otimes\phi(v{}_{(0)})=-\tau_{23}\big(\phi(v{}_{(0)})\otimes v{}_{(1)}-v_{(-1)} \otimes \psi\left(v_{(0)}\right)\big),\\
&&\psi(v{}_{(0)})\otimes v{}_{(1)}-v_{(0)} \otimes \Delta_{H}\left(v_{(1)}\right)=-\tau_{23}\big(\psi\left(v_{(0)}\right) \otimes v_{(1)}-v{}_{(0)}\otimes\Delta_{H}(v{}_{(1)})\big). \end{eqnarray} \end{definition} The category of bicomodules over $H$ is denoted by ${}^{H}\mathcal{M}{}^{H}$.
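Dually, any alternative coalgebra $H$ is an $H$-bicomodule over itself with $\phi=\psi=\Delta_{H}$; in this case each of the conditions \eqref{(14)}--\eqref{(17)} reduces to the alternative coalgebra identity \eqref{eq:LB4} or \eqref{eq:LB5}.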
\begin{definition} Let ${H}$ and ${A}$ be alternative algebras. An action of ${H}$ on ${A}$ is a pair of linear maps $\triangleright:{H}\otimes {A} \to {A},(x, a) \to x \triangleright a$ and $\triangleleft: {A}\otimes {H} \to {A}, (a, x) \to a \triangleleft x$ such that $A$ is an $H$-bimodule and the following conditions hold: \begin{eqnarray}
&&(a\triangleleft x)b-a(x\triangleright b)+(x\triangleright a)b-x\triangleright( a b)=0,\\
&&(a b)\triangleleft x-a(b\triangleleft x)+(b a)\triangleleft x-b(a\triangleleft x)=0 ,\\
&&(x\triangleright a)b-x\triangleright( a b)+(x\triangleright b)a-x\triangleright( b a)=0,\\
&&(a\triangleleft x)b-a(x\triangleright b)+(a b)\triangleleft x-a(b\triangleleft x)=0, \end{eqnarray} for all $x\in {H}$ and $a, b\in {A}.$ In this case, we call $(A,\triangleright,\triangleleft)$ to be an $H$-bimodule alternative algebra. \end{definition}
\begin{definition} Let ${H}$ and ${A}$ be alternative coalgebras. An coaction of ${H}$ on ${A}$ is a pair of linear maps $\phi:{A}\to {H}\otimes {A}$ and $\psi:{A}\to {A}\otimes {H}$ such that $A$ is an $H$-bicomodule and the following conditions hold: \begin{eqnarray}
&&(\phi\otimes\mathrm{id}_A)\Delta_{A}(a)-(\mathrm{id}_H \otimes \Delta_{A})\phi(a)=-\tau_{12}\big((\psi\otimes\mathrm{id}_A)\Delta_{A}(a)-(\mathrm{id}_A\otimes \phi) \Delta_{A}(a)\big),\\
&&(\Delta_{A}\otimes\mathrm{id}_H)\psi(a)-(\mathrm{id}_A \otimes \psi)\Delta_{A}(a)=-\tau_{12}\big((\Delta_{A}\otimes\mathrm{id}_H)\psi(a)-(\mathrm{id}_A\otimes \psi) \Delta_{A}(a)\big),\\
&&(\phi\otimes\mathrm{id}_A)\Delta_{A}(a)-(\mathrm{id}_H \otimes \Delta_{A})\phi(a)=-\tau_{23}\big((\phi\otimes\mathrm{id}_A)\Delta_{A}(a)-(\mathrm{id}_H\otimes\Delta_{A} ) \phi(a)\big),\\
&&(\Delta_{A}\otimes\mathrm{id}_H)\psi(a)-(\mathrm{id}_A \otimes \psi)\Delta_{A}(a)=-\tau_{23}\big((\psi\otimes\mathrm{id}_A)\Delta_{A}(a)-(\mathrm{id}_A\otimes \phi) \Delta_{A}(a)\big). \end{eqnarray} If we denote by $\phi(a)=a{}_{(-1)}\otimes a{}_{(0)}$ and $\psi(a)=a{}_{(0)}\otimes a{}_{(1)}$, then the above equations can be written as \begin{eqnarray}
&&\phi(a{}_{1})\otimes a{}_{2}-a{}_{(-1)}\otimes\Delta_{A}(a{}_{(0)})=-\tau_{12}\big(\psi(a{}_{1})\otimes a{}_{2}-a{}_{1}\otimes\phi(a{}_{2})\big),\\
&&\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})=-\tau_{12}\big(\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})\big),\\
&&\phi(a{}_{1})\otimes a{}_{2}-a{}_{(-1)}\otimes\Delta_{A}(a{}_{(0)})=-\tau_{23}\big(\phi(a{}_{1})\otimes a{}_{2}-a{}_{(-1)}\otimes\Delta_{A}(a{}_{(0)})\big),\\
&&\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})=-\tau_{23}\big(\psi(a{}_{1})\otimes a{}_{2}-a{}_{1}\otimes\phi(a{}_{2})\big), \end{eqnarray} for all $a\in {A}.$ In this case, we call $(A,\phi,\psi)$ to be an $H$-bicomodule alternative coalgebra. \end{definition}
\begin{definition} Let $({A},\cdot)$ be a given alternative algebra (alternative coalgebra, alternative bialgebra) and let $E$ be a vector space. An extending system of ${A}$ through $V$ is an alternative algebra (alternative coalgebra, alternative bialgebra) structure on $E$ such that $V$ is a complement subspace of ${A}$ in $E$ and the canonical injection map $i: A\to E, a\mapsto (a, 0)$ or the canonical projection map $p: E\to A, (a,x)\mapsto a$ is an alternative algebra (alternative coalgebra, alternative bialgebra) homomorphism.
The extending problem is to describe and classify, up to isomorphism, the set of all alternative algebra (alternative coalgebra, alternative bialgebra) structures that can be defined on $E$.
\end{definition}
We remark that our definition of an extending system of ${A}$ through $V$ contains not only the extending structures in \cite{AM1,AM2,AM3} but also the global extension structures in \cite{AM5}. The reason is that, when we consider the extending problem for alternative bialgebras, both of them are necessarily used; this will become clear in the next two sections. In fact, the canonical injection map $i: A\to E$ is an alternative (co)algebra homomorphism if and only if $A$ is an alternative sub(co)algebra of $E$.
\begin{definition} Let ${A} $ be an alternative algebra (alternative coalgebra, alternative bialgebra), $E$ be an alternative algebra (alternative coalgebra, alternative bialgebra) such that ${A} $ is a subspace of $E$ and $V$ is a complement of ${A} $ in $E$. For a linear map $\varphi: E \to E$ we consider the diagram: \begin{equation}\label{eq:ext1} \xymatrix{
0 \ar[r]^{} &A \ar[d]_{\mathrm{id}_A} \ar[r]^{i} & E \ar[d]_{\varphi} \ar[r]^{\pi} &V \ar[d]_{\mathrm{id}_V} \ar[r]^{} & 0 \\
0 \ar[r]^{} & A \ar[r]^{i'} & {E} \ar[r]^{\pi'} & V \ar[r]^{} & 0.
} \end{equation}
where $\pi, \pi': E\to V$ are the canonical projection maps and $i, i': A\to E$ are the inclusion maps. We say that $\varphi: E \to E$ \emph{stabilizes} ${A}$ if the left square of the diagram \eqref{eq:ext1} is commutative.
Let $(E, \cdot)$ and $(E,\cdot')$ be two alternative algebra (alternative coalgebra, alternative bialgebra) structures on $E$. $(E, \cdot)$ and $(E, \cdot')$ are called \emph{equivalent}, and we denote this by $(E, \cdot) \equiv (E, \cdot')$, if there exists an alternative algebra (alternative coalgebra, alternative bialgebra) isomorphism $\varphi: (E, \cdot) \to (E, \cdot')$ which stabilizes ${A} $. Denote by $Extd(E,{A} )$ ($CExtd(E,{A} )$, $BExtd(E,{A} )$) the set of equivalence classes of alternative algebra (alternative coalgebra, alternative bialgebra) structures on $E$. \end{definition}
\section{Braided alternative bialgebras} In this section, we introduce the concepts of alternative Hopf bimodule and braided alternative bialgebra, which will be used in the following sections.
\subsection{Alternative Hopf bimodule and braided alternative bialgebra} \begin{definition} Let $H$ be an alternative bialgebra. An alternative Hopf bimodule over $H$ is a space $V$ endowed with maps \begin{align*} &\triangleright: H\otimes V \to V,\quad \triangleleft: V\otimes H \to V,\\ &\phi: V \to H \otimes V,\quad \psi: V \to V\otimes H, \end{align*} denoted by \begin{eqnarray*} && \triangleright (x \otimes v) = x \triangleright v, \quad \triangleleft(v\otimes x) = v \triangleleft x, \\ && \phi (v)=\sum v{}_{(-1)}\otimes v{}_{(0)}, \quad \psi (v) = \sum v{}_{(0)} \otimes v{}_{(1)}, \end{eqnarray*} such that $V$ is simultaneously a bimodule, a bicomodule over $H$ and satisfying
the following compatibility conditions \begin{enumerate} \item[(HM1)] $\phi(x\triangleright v)=v{}_{(-1)}\otimes(x\triangleright v{}_{(0)})+v{}_{(-1)}\otimes(v{}_{(0)}\triangleleft x)-x{}_{2}\otimes(v\triangleleft x{}_{1})-x v{}_{(-1)}\otimes v{}_{(0)}$, \end{enumerate}
\begin{enumerate} \item[(HM2)] $\psi(v\triangleleft x)=(v{}_{(0)}\triangleleft x)\otimes v{}_{(1)}+(v{}_{(0)}\triangleleft x)\otimes v{}_{(-1)}-v{}_{(0)}\otimes x v{}_{(-1)}-(v\triangleleft x{}_{1})\otimes x{}_{2}$, \end{enumerate} \begin{enumerate} \item[(HM3)] $\psi(x\triangleright v)=(x{}_{1}\triangleright v)\otimes x{}_{2}+(x{}_{2}\triangleright v)\otimes x{}_{1}+v{}_{(0)}\otimes x v{}_{(1)}+v{}_{(0)}\otimes v{}_{(1)} x-(x\triangleright v{}_{(0)})\otimes v{}_{(1)}$, \item[(HM4)] $\phi(v \triangleleft x)=v{}_{(-1)} x\otimes v{}_{(0)}+v{}_{(1)} x\otimes v{}_{(0)}-v{}_{(1)}\otimes(x\triangleright v{}_{(0)})+x{}_{1}\otimes(v\triangleleft x{}_{2})+x{}_{1}\otimes(x{}_{2}\triangleright v)$, \end{enumerate} \begin{enumerate} \item[(HM5)] $\phi(x\triangleright v)+\tau\psi(x\triangleright v)=v{}_{(-1)}\otimes(x\triangleright v{}_{(0)})+x v{}_{(1)}\otimes v{}_{(0)}+x{}_{2}\otimes (x{}_{1}\triangleright v)$,
\item[(HM6)] $\phi(v\triangleleft x)+\tau\psi(v\triangleleft x)=x{}_{1}\otimes(v\triangleleft x{}_{2})+v{}_{(-1)} x\otimes v{}_{(0)}+v{}_{(1)}\otimes(v{}_{(0)}\triangleleft x)$, \end{enumerate} for all $x\in H$ and $v\in V$. \end{definition}
We denote the category of alternative Hopf bimodules over $H$ by ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$.
\begin{definition} Let $H$ be an alternative bialgebra. If $A$ is an alternative algebra and an alternative coalgebra in ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$, we call $A$ a \emph{braided alternative bialgebra} if the following conditions are satisfied: \begin{enumerate} \item[(BB1)] $\Delta_{A}(a b)=a_{1} b \otimes a_{2} +a{}_{2} b\otimes a{}_{1}-a{}_{2}\otimes b a{}_{1}+b{}_{1}\otimes a b{}_{2} +b{}_{1}\otimes b{}_{2} a-a b{}_{1}\otimes b{}_{2}\\ +(a{}_{(-1)}\triangleright b)\otimes a{}_{(0)}+(a{}_{(1)}\triangleright b)\otimes a{}_{(0)}-a{}_{(0)}\otimes (b\triangleleft a{}_{(-1)})\\ +b{}_{(0)}\otimes (a\triangleleft b{}_{(1)})+b{}_{(0)}\otimes(b{}_{(1)}\triangleright a)-(a\triangleleft b{}_{(-1)})\otimes b{}_{(0)}$, \end{enumerate} \begin{enumerate} \item[(BB2)] $\Delta_{A}(b a)+\tau\Delta_{A}(b a)=a{}_{1}\otimes b a{}_{2}+b a{}_{2}\otimes a{}_{1}+b{}_{1} a\otimes b{}_{2}+b{}_{2}\otimes b{}_{1} a\\
+a{}_{(0)}\otimes(b\triangleleft a{}_{(1)})+(b\triangleleft a{}_{(1)})\otimes a{}_{(0)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleright a)+(b{}_{(-1)}\triangleright a)\otimes b{}_{(0)}$. \end{enumerate} \end{definition}
Here, saying that $A$ is an alternative algebra and an alternative coalgebra in ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$ means that $A$ is simultaneously an $H$-bimodule alternative algebra (alternative coalgebra) and an $H$-bicomodule alternative algebra (alternative coalgebra).
Now we construct an alternative bialgebra from a braided alternative bialgebra. Let $H$ be an alternative bialgebra and let $A$ be an alternative algebra and an alternative coalgebra in ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$. We define a multiplication and a comultiplication on the direct sum vector space $E:=A \oplus H$ by \begin{eqnarray} \label{sp01} &&(a+ x)(b+ y):=a b+x\triangleright b+a \triangleleft y+ x y, \\ \label{sp02} &&\Delta_{E}(a+ x):=\Delta_{A}(a)+\phi(a)+\psi(a)+\Delta_{H}(x). \end{eqnarray} This is called the biproduct of ${A}$ and ${H}$, which will be denoted by $A{>\!\!\!\triangleleft\kern-.33em\cdot\, } H$.
\begin{theorem} \label{thm-main0} Let $H$ be an alternative bialgebra. Then the biproduct $A{>\!\!\!\triangleleft\kern-.33em\cdot\, } H$ forms an alternative bialgebra if and only if $A$ is a braided alternative bialgebra in ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$. \end{theorem}
\begin{proof}
It is easy to prove that $A{>\!\!\!\triangleleft\kern-.33em\cdot\, } H$ is an alternative algebra and an alternative coalgebra with the multiplication \eqref{sp01} and comultiplication \eqref{sp02} . Now we show the compatibility conditions: $$ \begin{aligned} &\Delta_{E}((a+x)(b+y))\\ =&\Delta_{E} (a+x)\bullet (b+y)+\tau\Delta_{E} (a+x)\bullet (b+y)\\ &- (b+y)\bullet\tau\Delta_{E}(a+x)+(a+x)\bullet \Delta_{E}(b+y)\\ &+[\Delta_{E}(b+y),(a+x)],
\end{aligned} $$
$$ \begin{aligned} &\Delta_{E}((b+y)(a+x))+\tau\Delta_{E}((b+y)(a+x))\\ =&(b+y)\bullet \Delta_{E} (a+x)+(b+y)\cdot\tau\Delta_{E} (a+x)\\ &+ \Delta_{E}(b+y)\bullet (a+x)+\tau \Delta_{E}(b+y)\cdot (a+x).
\end{aligned} $$ By direct computations, for the first equation, the left hand side is equal to $$\begin{aligned} &\Delta_{E}((a+x)(b+y))\\ =&\Delta_E(a b+x \triangleright b+a \triangleleft y+ x y)\\ =&\Delta_A(a b)+\Delta_A(x \triangleright b)+\Delta_A(a \triangleleft y)+\phi(a b)+\phi(x \triangleright b)+\phi(a \triangleleft y)\\ &+\psi(a b)+\psi(x \triangleright b)+\psi(a \triangleleft y)+\Delta_{H}(x y), \end{aligned} $$ and the right hand side is equal to \begin{eqnarray*} &&\Delta_{E} (a+x)\bullet (b+y)+\tau\Delta_{E} (a+x)\bullet (b+y)- (b+y)\bullet\tau\Delta_{E}(a+x)\\ &&+(a+x)\bullet \Delta_{E}(b+y)+[\Delta_{E}(b+y),(a+x)]\\ &=&\left(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+x_{1} \otimes x_{2}\right) \bullet(b+ y)\\ &&\left(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}+a_{(1)} \otimes a_{(0)}+x_{2} \otimes x_{1}\right) \bullet(b+ y)\\ &&-(b+y)\bullet\left(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}+a_{(1)} \otimes a_{(0)}+x_{2} \otimes x_{1}\right)\\ &&+(a+ x) \bullet\left(b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}+b_{(0)} \otimes b_{(1)}+y_{1} \otimes y_{2}\right)\\ &&+[b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}+b_{(0)} \otimes b_{(1)}+y_{1} \otimes y_{2},a+x]\\ &=&a_{1}b \otimes a_{2}+(a{}_{1}\triangleleft y)\otimes a{}_{2}+(a{}_{(-1)} \triangleright b)\otimes a{}_{(0)} +a{}_{(-1)} y\otimes a{}_{(0)} \\ &&+ a{}_{(0)} b\otimes a{}_{(1)}+(a{}_{(0)}\triangleleft y)\otimes a{}_{(1)}+(x{}_{1}\triangleright b)\otimes x{}_{2}+x{}_{1} y\otimes x{}_{2}\\ &&+a{}_{2} b\otimes a{}_{1}+(a{}_{2}\triangleleft y)\otimes a{}_{1}+a{}_{(0)} b\otimes a{}_{(-1)}+(a{}_{(0)}\triangleleft y)\otimes a{}_{(-1)}\\ &&+(a{}_{(1)}\triangleright b)\otimes a{}_{(0)}+a{}_{(1)} y\otimes a{}_{(0)}+(x{}_{2}\triangleright b)\otimes x{}_{1}+x{}_{2} y\otimes x{}_{1}\\ &&-a{}_{2}\otimes b a{}_{1}-a{}_{2}\otimes (y\triangleright a{}_{1})-a{}_{(0)}\otimes(b\triangleleft a{}_{(-1)})-a{}_{(0)}\otimes ya{}_{(-1)}\\ &&-a{}_{(1)}\otimes ba{}_{(0)}-a{}_{(1)}\otimes(y\triangleright a{}_{(0)})-x{}_{2}\otimes(b\triangleleft x{}_{1})-x{}_{2}\otimes yx{}_{1}\\ &&+b{}_{1}\otimes a b{}_{2}+b{}_{1}\otimes (x\triangleright b{}_{2})+b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes (x\triangleright b{}_{(0)})\\ &&+b{}_{(0)}\otimes (a\triangleleft b{}_{(1)})+b{}_{(0)}\otimes x b{}_{(1)}+y{}_{1}\otimes (a\triangleleft y{}_{2})+y{}_{1}\otimes x y{}_{2}\\ &&+b{}_{1}\otimes b{}_{2} a+b{}_{1}\otimes (b{}_{2}\triangleleft x)-a b{}_{1}\otimes b{}_{2}-(x\triangleright b{}_{1})\otimes b{}_{2}\\ &&+b{}_{(-1)}\otimes b{}_{(0)} a+b{}_{(-1)}\otimes(b{}_{(0)}\triangleleft x)-(a\triangleleft b{}_{(-1)})\otimes b{}_{(0)}-x b{}_{(-1)}\otimes b{}_{(0)}\\ &&+b{}_{(0)}\otimes (b{}_{(1)}\triangleright a)+b{}_{(0)}\otimes b{}_{(1)} x-a b{}_{(0)}\otimes b{}_{(1)}-(x\triangleright b{}_{(0)})\otimes b{}_{(1)}\\ &&+y{}_{1}\otimes (y{}_{2}\triangleright a)+y{}_{1}\otimes y{}_{2} x-(a\triangleleft y{}_{1})\otimes y{}_{2}-x y{}_{1}\otimes y{}_{2}. \end{eqnarray*} Thus the two sides are equal to each other if and only if
(1) $\Delta_{A}(a b)=a_{1} b \otimes a_{2} +a{}_{2} b\otimes a{}_{1}-a{}_{2}\otimes b a{}_{1}+b{}_{1}\otimes a b{}_{2} +b{}_{1}\otimes b{}_{2} a-a b{}_{1}\otimes b{}_{2}$
$\qquad+(a{}_{(-1)}\triangleright b)\otimes a{}_{(0)}+(a{}_{(1)}\triangleright b)\otimes a{}_{(0)}-a{}_{(0)}\otimes (b\triangleleft a{}_{(-1)})$
$\qquad+b{}_{(0)}\otimes (a\triangleleft b{}_{(1)})+b{}_{(0)}\otimes(b{}_{(1)}\triangleright a)-(a\triangleleft b{}_{(-1)})\otimes b{}_{(0)}$,
(2) $\phi(a \triangleleft y)=a{}_{(-1)} y\otimes a{}_{(0)}+a{}_{(1)} y\otimes a{}_{(0)}-a{}_{(1)}\otimes(y\triangleright a{}_{(0)})+y{}_{1}\otimes(a\triangleleft y{}_{2})+y{}_{1}\otimes(y{}_{2}\triangleright a)$,
(3) $\phi(x\triangleright b)=-x{}_{2}\otimes(b\triangleleft x{}_{1})+b{}_{(-1)}\otimes(x\triangleright b{}_{(0)})+b{}_{(-1)}\otimes(b{}_{(0)}\triangleleft x)-x b{}_{(-1)}\otimes b{}_{(0)}$,
(4) $\psi(x\triangleright b)=(x{}_{1}\triangleright b)\otimes x{}_{2}+(x{}_{2}\triangleright b)\otimes x{}_{1}+b{}_{(0)}\otimes x b{}_{(1)}+b{}_{(0)}\otimes b{}_{(1)} x-(x\triangleright b{}_{(0)})\otimes b{}_{(1)}$,
(5) $\psi(a\triangleleft y)=(a{}_{(0)}\triangleleft y)\otimes a{}_{(1)}+(a{}_{(0)}\triangleleft y)\otimes a{}_{(-1)}-a{}_{(0)}\otimes y a{}_{(-1)}-(a\triangleleft y{}_{1})\otimes y{}_{2}$,
(6) $\Delta_{A}(x \triangleright b)=b{}_{1}\otimes(x\triangleright b{}_{2})+b{}_{1}\otimes (b{}_{2}\triangleleft x)-(x\triangleright b{}_{1})\otimes b{}_{2}$,
(7) $\Delta_{A}(a\triangleleft y)=(a{}_{1}\triangleleft y)\otimes a{}_{2}+(a{}_{2}\triangleleft y)\otimes a{}_{1}-a{}_{2}\otimes(y\triangleright a{}_{1})$,
(8) $\phi(ab)=b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes b{}_{(0)} a-a{}_{(1)}\otimes b a{}_{(0)}$,
(9) $\psi(a b)=a{}_{(0)} b\otimes a{}_{(1)}+a{}_{(0)} b\otimes a{}_{(-1)}-a b{}_{(0)}\otimes b{}_{(1)}$.
For the second equation, the left hand side is equal to \begin{eqnarray*} &&\Delta_{E}((b+y)(a+x))+\tau\Delta_{E}((b+y)(a+x))\\ &=&\Delta_E( b a+y \triangleright a+b \triangleleft x+ y x)+\tau\Delta_E( b a+y \triangleright a+b \triangleleft x+ y x)\\ &=&\Delta_A( ba)+\Delta_A(y \triangleright a)+\Delta_A(b \triangleleft x)+\phi(b a)+\phi(y \triangleright a)+\phi(b \triangleleft x)\\ &&+\psi(b a)+\psi(y \triangleright a)+\psi(b \triangleleft x)+\Delta_{H}(yx)+\tau\Delta_A( ba)+\tau\Delta_A(y \triangleright a)\\ &&+\tau\Delta_A(b \triangleleft x)+\tau\phi(b a)+\tau\phi(y \triangleright a)+\tau\phi(b \triangleleft x)+\tau\psi(b a)+\tau\psi(y \triangleright a)\\ &&+\tau\psi(b \triangleleft x)+\tau\Delta_{H}(yx), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&(b+y)\bullet \Delta_{E} (a+x)+(b+y)\cdot\tau\Delta_{E} (a+x)\\ &&+ \Delta_{E}(b+y)\bullet (a+x)+\tau \Delta_{E}(b+y)\cdot (a+x)\\ &=&(b+y)\bullet\left(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+x_{1} \otimes x_{2}\right)\\ &&+(b+y)\cdot\left(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}+a_{(1)} \otimes a_{(0)}+x_{2} \otimes x_{1}\right)\\ &&+\left(b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}+b_{(0)} \otimes b_{(1)}+y_{1} \otimes y_{2}\right)\bullet(a+x)\\ &&+(b{}_{2}\otimes b{}_{1}+b{}_{(0)}\otimes b{}_{(-1)}+b{}_{(1)}\otimes b{}_{(0)}+y{}_{2}\otimes y{}_{1})\cdot (a+x)\\ &=&a{}_{1}\otimes b a{}_{2}+a{}_{1}\otimes(y\triangleright a{}_{2})+a{}_{(-1)}\otimes b a{}_{(0)}+a{}_{(-1)}\otimes (y\triangleright a{}_{(0)})\\ &&+a{}_{(0)}\otimes(b\triangleleft a{}_{(1)})+a{}_{(0)}\otimes y a{}_{(1)}+x{}_{1}\otimes(b\triangleleft x{}_{2})+x{}_{1}\otimes y x{}_{2}\\ &&+b a{}_{2}\otimes a{}_{1}+(y\triangleright a{}_{2})\otimes a{}_{1}+b a{}_{(0)}\otimes a{}_{(-1)}+(y\triangleright a{}_{(0)})\otimes a{}_{(-1)}\\ &&+(b\triangleleft a{}_{(1)})\otimes a{}_{(0)}+y a{}_{(1)}\otimes a{}_{(0)}+(b\triangleleft x{}_{2})\otimes x{}_{1}+y x{}_{2}\otimes x{}_{1}\\ &&+b{}_{1} a\otimes b{}_{2}+(b{}_{1}\triangleleft x)\otimes b{}_{2}+(b{}_{(-1)}\triangleright a)\otimes b{}_{(0)}+b{}_{(-1)} x\otimes b{}_{(0)}\\ &&+b{}_{(0)} a\otimes b{}_{(1)}+(b{}_{(0)}\triangleleft x)\otimes b{}_{(1)}+(y{}_{1}\triangleright a)\otimes y{}_{2}+y{}_{1} x\otimes y{}_{2}\\ &&+b{}_{2}\otimes b{}_{1} a+b{}_{2}\otimes (b{}_{1}\triangleleft x)+b{}_{(0)}\otimes(b{}_{(-1)}\triangleright a)+b{}_{(0)}\otimes b{}_{(-1)} x\\ &&+b{}_{(1)}\otimes b{}_{(0)} a+b{}_{(1)}\otimes(b{}_{(0)}\triangleleft x)+y{}_{2}\otimes(y{}_{1}\triangleright a)+y{}_{2}\otimes y{}_{1} x. \end{eqnarray*} Then the two sides are equal to each other if and only if satisfying the following conditions
(10) $\Delta_{A}(b a)+\tau\Delta_{A}(b a)=a{}_{1}\otimes b a{}_{2}+b a{}_{2}\otimes a{}_{1}+b{}_{1} a\otimes b{}_{2}+b{}_{2}\otimes b{}_{1} a$
$\qquad+a{}_{(0)}\otimes(b\triangleleft a{}_{(1)})+(b\triangleleft a{}_{(1)})\otimes a{}_{(0)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleright a)+(b{}_{(-1)}\triangleright a)\otimes b{}_{(0)}$,
(11) $\Delta_{A}(y\triangleright a)+\tau\Delta_{A}(y\triangleright a)=a{}_{1}\otimes (y\triangleright a{}_{2})+(y\triangleright a{}_{2})\otimes a{}_{1}$,
(12) $\Delta_{A}(b\triangleleft x)+\tau\Delta_{A}(b\triangleleft x)=b{}_{2}\otimes(b{}_{1}\triangleleft x)+(b{}_{1}\triangleleft x)\otimes b{}_{2}$,
(13) $\phi(b a)+\tau\psi(b a)=a{}_{(-1)}\otimes b a{}_{(0)}+b{}_{(1)}\otimes b{}_{(0)} a$,
(14) $\psi(ba)+\tau\phi(b a)=b a{}_{(0)}\otimes a{}_{(-1)}+b{}_{(0)} a\otimes b{}_{(1)}$,
(15) $\phi(b\triangleleft x)+\tau\psi(b\triangleleft x)=x{}_{1}\otimes(b\triangleleft x{}_{2})+b{}_{(-1)} x\otimes b{}_{(0)}+b{}_{(1)}\otimes(b{}_{(0)}\triangleleft x)$,
(16) $\phi(y\triangleright a)+\tau\psi(y\triangleright a)=a{}_{(-1)}\otimes(y\triangleright a{}_{(0)})+y a{}_{(1)}\otimes a{}_{(0)}+y{}_{2}\otimes (y{}_{1}\triangleright a)$,
(17) $\psi(y\triangleright a)+\tau\phi(y\triangleright a)=a{}_{(0)}\otimes y a{}_{(1)}+(y\triangleright a{}_{(0)})\otimes a{}_{(-1)}+(y{}_{1}\triangleright a)\otimes y{}_{2}$,
(18) $\psi(b\triangleleft x)+\tau\phi(b\triangleleft x)=(b\triangleleft x{}_{2})\otimes x{}_{1}+(b{}_{(0)}\triangleleft x)\otimes b{}_{(1)}+b{}_{(0)}\otimes b{}_{(-1)} x$.
\noindent From (6)--(9) and (11)--(14) we have that $A$ is an alternative algebra and an alternative coalgebra in ${}^{H}_{H}\mathcal{M}{}^{H}_{H}$, from (2)--(5) and (15)--(18) we get that $A$ is an alternative Hopf bimodule over $H$, and (1) together with (10) are the conditions for $A$ to be a braided alternative bialgebra. This completes the proof. \end{proof}
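As a simple illustration of Theorem \ref{thm-main0} (this special case is not needed in the sequel), suppose that all the actions and coactions of $H$ on $A$ are zero, that is, $x\triangleright a=0$, $a\triangleleft x=0$ and $\phi=\psi=0$. Then the conditions (HM1)--(HM6) hold trivially, the conditions (BB1) and (BB2) reduce to \eqref{eq:LB6} and \eqref{eq:LB7} for $A$, so such a braided alternative bialgebra is just an ordinary alternative bialgebra, and the biproduct $A{>\!\!\!\triangleleft\kern-.33em\cdot\, } H$ is the direct sum $A\oplus H$ with componentwise multiplication and comultiplication.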
\subsection{From quasitriangular alternative bialgebra to braided alternative bialgebra}
Let $A$ be an alternative algebra and $M$ an $A$-bimodule. To each element $r \in M$ one associates a derivation $\Delta_{r}: A \rightarrow M$ by $\Delta_{r}(a)=a \star r-r \star a$. In particular, for $M=A\otimes A$ and $r=\sum_{i} u_{i} \otimes v_{i} \in A \otimes A$ we have \begin{equation}\label{eq:deltar} \Delta_{r}(a)=a \star r-r \star a= u_i a\otimes v_i-u_i\otimes a v_i. \end{equation} We write $$ r_{12}=\sum_{i} u_{i} \otimes v_{i} \otimes 1, \quad r_{13}=\sum_{i} u_{i} \otimes 1 \otimes v_{i},\quad r_{23}=\sum_{i} 1 \otimes u_{i} \otimes v_{i}. $$
\begin{lemma}\cite{Gon} Let $(A, \cdot )$ be an alternative algebra and $ {r}\in A\otimes A$. Let $\Delta:A\rightarrow A\otimes A$ be a linear map defined by Eq.~\eqref{eq:deltar}. Assume that $ {r}$ is skew-symmetric and $ {r}$ satisfies \begin{equation}\label{eqYBE}
{r}_{{23}} {r}_{{12}}- {r}_{{12}} {r}_{{13}}-{r}_{{13}} {r}_{{23}}=0. \end{equation} Then $(A, \cdot, \Delta_r)$ is an alternative bialgebra. \end{lemma}
The equation \eqref{eqYBE} is called the alternative Yang--Baxter equation (AYB). A quasitriangular alternative bialgebra is a pair $(A, r)$, where $A$ is an alternative algebra and $r \in A \otimes A$ is a solution to (AYB).
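In the computations below, the skew-symmetry of $r$ assumed in the above lemma is used repeatedly in the form $\sum_{i} u_{i}\otimes v_{i}=-\sum_{i} v_{i}\otimes u_{i}$; that is, one may exchange $u_{i}$ and $v_{i}$ at the cost of a sign.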
\begin{lemma} Let $(A, \cdot, r )$ be a quasitriangular alternative bialgebra with $\Delta:=\Delta_r$. Then \begin{equation}\label{r11} (\Delta\otimes \mathrm{id})(r)=-r_{13}r_{23}, \end{equation} \begin{equation}\label{r12} (\mathrm{id}\otimes\Delta)(r)=r_{12}r_{13}. \end{equation}
\end{lemma} \begin{proof} By direct computations, we have \begin{eqnarray*} (\Delta\otimes \mathrm{id})(r)&=&\sum_{i} \Delta(u_i)\otimes v_i=\sum_{ij} (u_j u_i\otimes v_j-u_j \otimes u_i v_j)\otimes v_i\\ &=&\sum_{ij} (u_j u_i\otimes v_j\otimes v_i-u_j \otimes u_i v_j\otimes v_i)\\ &=&r_{12}r_{13}-r_{23}r_{12}\\ &\overset{\eqref{eqYBE}}{=}&-r_{13}r_{23}, \end{eqnarray*} and \begin{eqnarray*} (\mathrm{id}\otimes\Delta)(r)&=&\sum_{i} u_i\otimes\Delta(v_i)=\sum_{ij} u_i\otimes(u_j v_i\otimes v_j-u_j \otimes v_i v_j)\\ &=&\sum_{ij} (u_i\otimes u_j v_i\otimes v_j-u_i\otimes u_j \otimes v_i v_j)\\ &=&r_{23}r_{12}-r_{13}r_{23}\\ &\overset{\eqref{eqYBE}}{=}&r_{12}r_{13}. \end{eqnarray*} Thus the equalities \eqref{r11} and \eqref{r12} hold. \end{proof}
\begin{theorem}
Let $(A, \cdot, r)$ be a quasitriangular alternative bialgebra and $M$ an $A$-bimodule. Then $M$ becomes an alternative Hopf bimodule over $A$ with the coactions $\phi: M \rightarrow A \otimes M$ and $\psi: M \rightarrow M \otimes A$ given by \begin{equation} \phi(m):=-\sum_{i} u_{i} \otimes m\triangleleft v_{i} ,\quad \psi(m):= \sum_{i} u_{i}\triangleright m \otimes v_{i}. \end{equation}
\end{theorem}
\begin{proof} First we prove that $M$ is a bicomodule. By Definition 2.5, we just have to verify that equations \eqref{(14)}--\eqref{(17)} are true.
For the equation \eqref{(14)}, the left hand side is equal to \begin{eqnarray*} &&(\Delta_{r} \otimes \mathrm{id} _{A})\phi(m)-(\mathrm{id} _{A}\otimes \phi)\phi(m)\\ &=&-\Delta_r(u_i)\otimes m\triangleleft v_i+u_i\otimes \phi(m\triangleleft v_i)\\ &=&u_i\otimes u_j\otimes m\triangleleft (v_iv_j)-u_i\otimes u_j\otimes (m\triangleleft v_i)\triangleleft v_j\\ &=&u_i\otimes u_j\otimes (m\triangleleft (v_iv_j)- (m\triangleleft v_i)\triangleleft v_j) \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&-\tau_{12}(\Delta_{r} \otimes \mathrm{id} _{A})\phi(m)-(\mathrm{id} _{A}\otimes \phi)\phi(m)\\ &=&-\tau_{12}(-\Delta_r(u_i)\otimes v_i\triangleleft m+u_i\otimes \phi(m\triangleleft v_i))\\ &=&-\tau_{12}(u_i\otimes u_j\otimes m\triangleleft (v_iv_j)-u_i\otimes u_j\otimes (m\triangleleft v_i)\triangleleft v_j)\\ &=&-u_j\otimes u_i\otimes m\triangleleft (v_iv_j)+u_j\otimes u_i\otimes (m\triangleleft v_i)\triangleleft v_j\\ &=&-u_i\otimes u_j\otimes m\triangleleft (v_jv_i)+u_i\otimes u_j\otimes (m\triangleleft v_j)\triangleleft v_i\\ &=&u_i\otimes u_j\otimes(-m\triangleleft (v_jv_i)+(m\triangleleft v_j)\triangleleft v_i)\\ &\overset{(13)}{=}&u_i\otimes u_j\otimes (m\triangleleft (v_iv_j)- (m\triangleleft v_i)\triangleleft v_j). \end{eqnarray*} Thus the left and the right are equal to each other.
For the equation \eqref{(15)}, the left hand side is equal to \begin{eqnarray*} &&(\psi \otimes \mathrm{id} _{A})\psi(m)-(\mathrm{id} _{A}\otimes \Delta_{r})\psi(m)\\ &=&\psi( u_i\triangleright m)\otimes v_i-( u_i\triangleright m)\otimes \Delta_r(v_i)\\ &=& u_j\triangleright (u_i\triangleright m)\otimes v_j\otimes v_i-(u_ju_i)\triangleright m\otimes v_j\otimes v_i\\ &=& (u_j\triangleright (u_i\triangleright m)-(u_ju_i)\triangleright m)\otimes v_j\otimes v_i \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&-\tau_{12}((\phi\otimes\mathrm{id}_{H})\psi(v)-(\mathrm{id} _{H} \otimes \psi) \phi(v))\\ &=&-\tau_{12}(\phi(u_i\triangleright m)\otimes v_i+u_i\otimes \psi(m\triangleleft v_i))\\ &=&-\tau_{12}(-u_j\otimes (u_i\triangleright m)\triangleleft v_j\otimes v_i+u_i\otimes u_j\triangleright (m\triangleleft v_i)\otimes v_j)\\ &=& (u_i\triangleright m)\triangleleft v_j\otimes u_j\otimes v_i- u_j\triangleright (m\triangleleft v_i)\otimes u_i\otimes v_j\\ &=&-(u_i\triangleright m)\triangleleft u_j\otimes v_j\otimes v_i+ u_j\triangleright (m\triangleleft u_i)\otimes v_i\otimes v_j\\ &=&-(u_i\triangleright m)\triangleleft u_j\otimes v_j\otimes v_i+ u_i\triangleright (m\triangleleft u_j)\otimes v_j\otimes v_i\\ &=&(-(u_i\triangleright m)\triangleleft u_j+ u_i\triangleright (m\triangleleft u_j))\otimes v_j\otimes v_i\\ &\overset{ \eqref{(11)}}{=}&(-m\triangleleft(u_iu_j)+(m\triangleleft u_i)\triangleleft u_j)\otimes v_j\otimes v_i\\ &\overset{ \eqref{(13)}}{=}&(m\triangleleft(u_ju_i)-(m\triangleleft u_j)\triangleleft u_i)\otimes v_j\otimes v_i\\ &\overset{ \eqref{(11)}}{=}&(-u_j\triangleright(m\triangleleft u_i)+(u_j\triangleright m)\triangleleft u_i)\otimes v_j\otimes v_i\\ &\overset{ \eqref{(12)}}{=}&(-(u_ju_i)\triangleright m+u_j\triangleright(u_i\triangleright m))\otimes v_j\otimes v_i. \end{eqnarray*} Thus the left and the right are equal to each other.
For the equation \eqref{(16)}, the left hand side is equal to \begin{eqnarray*} &&(\Delta_{r} \otimes \mathrm{id} _{A})\phi(m)-(\mathrm{id} _{A}\otimes \phi)\phi(m)\\ &=&-\Delta_r(u_i)\otimes m\triangleleft v_i+u_i\otimes \phi(m\triangleleft v_i)\\ &=&u_i\otimes u_j\otimes m\triangleleft (v_iv_j)-u_i\otimes u_j\otimes (m\triangleleft v_i)\triangleleft v_j\\ &=&u_i\otimes u_j\otimes (m\triangleleft (v_iv_j)- (m\triangleleft v_i)\triangleleft v_j) \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&-\tau_{23}((\phi\otimes\mathrm{id}_{A})\psi(m)-(\mathrm{id} _{A} \otimes \psi) \phi(m))\\ &=&-\tau_{23}(\phi(u_i\triangleright m)\otimes v_i+u_i\otimes\psi(m\triangleleft v_i))\\ &=&-\tau_{23}(-u_j\otimes (u_i\triangleright m)\triangleleft v_j\otimes v_i+u_i\otimes u_j\triangleright(m\triangleleft v_i)\otimes v_j)\\ &=& u_j\otimes v_i\otimes (u_i\triangleright m)\triangleleft v_j-u_i\otimes v_j\otimes u_j\triangleright(m\triangleleft v_i)\\ &=&u_i\otimes v_j\otimes (u_j\triangleright m)\triangleleft v_i-u_i\otimes v_j\otimes u_j\triangleright(m\triangleleft v_i)\\ &=&-u_i\otimes u_j\otimes (v_j\triangleright m)\triangleleft v_i+u_i\otimes u_j\otimes v_j\triangleright(m\triangleleft v_i)\\ &=&u_i\otimes u_j\otimes (-(v_j\triangleright m)\triangleleft v_i+ v_j\triangleright(m\triangleleft v_i))\\ &\overset{ \eqref{(12)}}{=}&u_i\otimes u_j\otimes ((v_jv_i)\triangleright m-v_j\triangleright(v_i\triangleright m))\\ &\overset{ \eqref{(10)}}{=}&u_i\otimes u_j\otimes (-(v_iv_j)\triangleright m+v_i\triangleright(v_j\triangleright m))\\ &\overset{ \eqref{(12)}}{=}&u_i\otimes u_j\otimes ((v_i\triangleright m)\triangleleft v_j-v_i\triangleright(m\triangleleft v_j))\\ &\overset{ \eqref{(11)}}{=}&u_i\otimes u_j\otimes (m\triangleleft v_iv_j-(m\triangleleft v_i)\triangleleft v_j). \end{eqnarray*} Thus we obtain that the left and the right are equal to each other.
For the equation \eqref{(17)}, the left hand side is equal to \begin{eqnarray*} &&(\psi \otimes \mathrm{id} _{A})\psi(m)-(\mathrm{id} _{A}\otimes \Delta_{r})\psi(m)\\ &=&\psi( u_i\triangleright m)\otimes v_i-( u_i\triangleright m)\otimes \Delta_r(v_i)\\ &=& u_j\triangleright (u_i\triangleright m)\otimes v_j\otimes v_i-(u_ju_i)\triangleright m\otimes v_j\otimes v_i\\ &=& (u_j\triangleright (u_i\triangleright m)-(u_ju_i)\triangleright m)\otimes v_j\otimes v_i \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&-\tau_{23}((\psi\otimes\mathrm{id}_{H})\psi(v)-(\mathrm{id} _{V} \otimes \Delta_{H}) \psi(v))\\ &=&-\tau_{23}(\psi( u_i\triangleright m)\otimes v_i-( u_i\triangleright m)\otimes \Delta_r(v_i))\\ &=&-\tau_{23}(u_j\triangleright (u_i\triangleright m)\otimes v_j\otimes v_i-(u_ju_i)\triangleright m\otimes v_j\otimes v_i)\\ &=&-u_j\triangleright (u_i\triangleright m)\otimes v_i\otimes v_j+(u_ju_i)\triangleright m\otimes v_i\otimes v_j\\ &=&-u_i\triangleright (u_j\triangleright m)\otimes v_j\otimes v_i+(u_iu_j)\triangleright m\otimes v_j\otimes v_i\\ &=&(-u_i\triangleright (u_j\triangleright m)+(u_iu_j)\triangleright m)\otimes v_j\otimes v_i\\ &\overset{ \eqref{(10)}}{=}&(u_j\triangleright (u_i\triangleright m)-(u_ju_i)\triangleright m)\otimes v_j\otimes v_i. \end{eqnarray*} Thus we obtain that the left and the right are equal to each other.
Next we check that $M$ is an alternative Hopf bimodule as follows. By Definition 3.1, we just have to verify that the conditions (HM1)--(HM6) hold.
For the equation (HM1), the left hand side is equal to \begin{eqnarray*} &&\phi(x\triangleright m)=-u_i\otimes (x\triangleright m)\triangleleft v_i \end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&-v_i\otimes m\triangleleft(u_ix)+xv_i\otimes m\triangleleft u_i-u_i\otimes x\triangleright(m\triangleleft v_i)-u_i\otimes(m\triangleleft v_i)\triangleleft x+xu_i\otimes m\triangleleft v_i\\ &=&-v_i\otimes m\triangleleft(u_ix)-u_i\otimes x\triangleright(m\triangleleft v_i)-u_i\otimes(m\triangleleft v_i)\triangleleft x-xu_i\otimes m\triangleleft v_i+xu_i\otimes m\triangleleft v_i\\ &=&-v_i\otimes m\triangleleft(u_ix)-u_i\otimes x\triangleright(m\triangleleft v_i)-u_i\otimes(m\triangleleft v_i)\triangleleft x\\ &=&u_i\otimes m\triangleleft(v_ix)-u_i\otimes x\triangleright(m\triangleleft v_i)-u_i\otimes(m\triangleleft v_i)\triangleleft x\\ &=&u_i\otimes (m\triangleleft(v_ix)- x\triangleright(m\triangleleft v_i)-(m\triangleleft v_i)\triangleleft x)\\
&\overset{ \eqref{(11)}}{=}&u_i\otimes (m\triangleleft(v_ix)-(m\triangleleft v_i)\triangleleft x+m\triangleleft(xv_i)-(m\triangleleft x)\triangleleft v_i-(x\triangleright m)\triangleleft v_i)\\ &\overset{ \eqref{(13)}}{=}&-u_i\otimes (x\triangleright m)\triangleleft v_i. \end{eqnarray*}
For the equation (HM2), the left hand side is equal to \begin{eqnarray*} &&\psi(m\triangleleft x)=u_i\triangleright(m\triangleleft x)\otimes v_i \end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&(u_i\triangleright m)\triangleleft x\otimes v_i-(m\triangleleft v_i)\triangleleft x\otimes u_i+m\triangleleft v_i\otimes xu_i-m\triangleleft(u_ix)\otimes v_i+m\triangleleft u_i\otimes xv_i\\ &=&(u_i\triangleright m)\triangleleft x\otimes v_i-(m\triangleleft v_i)\triangleleft x\otimes u_i-m\triangleleft u_i\otimes xv_i-m\triangleleft(u_ix)\otimes v_i+m\triangleleft u_i\otimes xv_i\\ &=&(u_i\triangleright m)\triangleleft x\otimes v_i-(m\triangleleft v_i)\triangleleft x\otimes u_i-m\triangleleft(u_ix)\otimes v_i\\ &=&(u_i\triangleright m)\triangleleft x\otimes v_i+(m\triangleleft u_i)\triangleleft x\otimes v_i-m\triangleleft(u_ix)\otimes v_i\\ &=&((u_i\triangleright m)\triangleleft x+(m\triangleleft u_i)\triangleleft x-m\triangleleft(u_ix))\otimes v_i\\ &\overset{ \eqref{(11)}}{=}&((u_i\triangleright m)\triangleleft x+(m\triangleleft u_i)\triangleleft x-(m\triangleleft u_i)\triangleleft x+u_i\triangleright(m\triangleleft u_i)-(u_i\triangleright m)\triangleleft x)\otimes v_i\\ &=&u_i\triangleright(m\triangleleft u_i)\otimes v_i. \end{eqnarray*}
For the equation (HM3), the left hand side is equal to \begin{eqnarray*} &&\psi(x\triangleright m)=u_i\triangleright(x\triangleright m)\otimes v_i
\end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&(u_ix)\triangleright m\otimes v_i-u_i\triangleright m\otimes xv_i+v_i\triangleright m\otimes u_ix-(xv_i)\triangleright m\otimes u_i+u_i\triangleright m\otimes xv_i\\ &&+u_i\triangleright m\otimes v_ix-x\triangleright(u_i\triangleright m)\otimes v_i\\ &=&(u_ix)\triangleright m\otimes v_i+v_i\triangleright m\otimes u_ix-(xv_i)\triangleright m\otimes u_i-v_i\triangleright m\otimes u_ix-x\triangleright(u_i\triangleright m)\otimes v_i\\ &=&(u_ix)\triangleright m\otimes v_i+(xu_i)\triangleright m\otimes v_i-x\triangleright(u_i\triangleright m)\otimes v_i\\ &=&((u_ix)\triangleright m+(xu_i)\triangleright m-x\triangleright(u_i\triangleright m))\otimes v_i\\ &\overset{ \eqref{(10)}}{=}&u_i\triangleright(x\triangleright m)\otimes v_i.
\end{eqnarray*}
For the equation (HM4), the left hand side is equal to \begin{eqnarray*} &&\phi(m\triangleleft x)=-u_i\otimes (m\triangleleft x)\triangleleft v_i \end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&-u_ix\otimes m\triangleleft v_i+v_ix\otimes u_i\triangleright m-v_i\otimes x\triangleright(u_i\triangleright m)+u_ix\otimes m\triangleleft v_i-u_i\otimes m\triangleleft (xv_i)\\ &&+u_ix\otimes v_i\triangleright m-u_i\otimes(xv_i)\triangleright m\\ &=&v_ix\otimes u_i\triangleright m-v_i\otimes x\triangleright(u_i\triangleright m)-u_i\otimes m\triangleleft (xv_i)-v_ix\otimes u_i\triangleright m-u_i\otimes(xv_i)\triangleright m\\ &=&-v_i\otimes x\triangleright(u_i\triangleright m)-u_i\otimes m\triangleleft (xv_i)-u_i\otimes(xv_i)\triangleright m\\ &=&u_i\otimes x\triangleright(v_i\triangleright m)-u_i\otimes m\triangleleft (xv_i)-u_i\otimes(xv_i)\triangleright m\\ &=&u_i\otimes (x\triangleright(v_i\triangleright m)- m\triangleleft (xv_i)-(xv_i)\triangleright m)\\ &\overset{ \eqref{(12)}}{=}&u_i\otimes (x\triangleright(v_i\triangleright m)-(m\triangleleft x)\triangleleft v_i+x\triangleright(m\triangleleft v_i)-(x\triangleright m)\triangleleft v_i-x\triangleright(v_i\triangleright m)\\ &&+(x\triangleright m)\triangleleft v_i-x\triangleright(m\triangleleft v_i))\\ &=&-u_i\otimes (m\triangleleft x)\triangleleft v_i \end{eqnarray*}
For the equation (HM5), the left hand side is equal to \begin{eqnarray*} &&\phi(x\triangleright m)+\tau\psi(x\triangleright m)\\ &=&-u_i\otimes (x\triangleright m)\triangleleft v_i+v_i\otimes u_i\triangleright(x\triangleright m)\\ &=&v_i\otimes (x\triangleright m)\triangleleft u_i+v_i\otimes u_i\triangleright(x\triangleright m)\\ &=&v_i\otimes ((x\triangleright m)\triangleleft u_i+ u_i\triangleright(x\triangleright m)) \end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&-u_i\otimes x\triangleright(m\triangleleft v_i)+xv_i\otimes u_i\triangleright m+v_i\otimes (u_ix)\triangleright m-xv_i\otimes u_i\triangleright m\\ &=&-u_i\otimes x\triangleright(m\triangleleft v_i)+v_i\otimes (u_ix)\triangleright m\\ &=&v_i\otimes x\triangleright(m\triangleleft u_i)+v_i\otimes (u_ix)\triangleright m\\ &=&v_i\otimes (x\triangleright(m\triangleleft u_i)+ (u_ix)\triangleright m)\\ &\overset{ \eqref{(12)}}{=}&v_i\otimes (x\triangleright(m\triangleleft u_i)+ u_i\triangleright(x\triangleright m)-(u_i\triangleright m)\triangleleft x+u_i\triangleright(m\triangleleft x))\\
&\overset{ \eqref{(11)}}{=}&v_i\otimes ((x\triangleright m)\triangleleft u_i+ u_i\triangleright(x\triangleright m)+(m\triangleleft x)\triangleleft u_i-m\triangleleft(xu_i)+(m\triangleleft u_i)\triangleleft x-m\triangleleft(u_ix))\\ &\overset{ \eqref{(13)}}{=}&v_i\otimes ((x\triangleright m)\triangleleft u_i+ u_i\triangleright(x\triangleright m)). \end{eqnarray*}
For the equation (HM6), the left hand side is equal to \begin{eqnarray*} &&\phi(m\triangleleft x)+\tau\psi(m\triangleleft x)\\ &=&-u_i\otimes (m\triangleleft x)\triangleleft v_i+v_i\otimes u_i\triangleright(m\triangleleft x)\\ &=&v_i\otimes (m\triangleleft x)\triangleleft u_i+v_i\otimes u_i\triangleright(m\triangleleft x)\\ &=&v_i\otimes ((m\triangleleft x)\triangleleft u_i+ u_i\triangleright(m\triangleleft x)) \end{eqnarray*} the right hand side is equal to \begin{eqnarray*} &&u_ix\otimes m\triangleleft v_i-u_i\otimes m\triangleleft(xv_i)-u_ix\otimes m\triangleleft v_i+v_i\otimes (u_i\triangleright m)\triangleleft x\\ &=&-u_i\otimes m\triangleleft(xv_i)+v_i\otimes (u_i\triangleright m)\triangleleft x\\
&=&v_i\otimes (m\triangleleft(xu_i)+ (u_i\triangleright m)\triangleleft x)\\
&\overset{ \eqref{(11)}}{=}&v_i\otimes ((m\triangleleft x)\triangleleft u_i+u_i\triangleright(m\triangleleft x)+m\triangleleft(xu_i)-(m\triangleleft x)\triangleleft u_i+ m\triangleleft(u_ix)-(m\triangleleft u_i)\triangleleft x)\\ &\overset{ \eqref{(13)}}{=}&v_i\otimes ((m\triangleleft x)\triangleleft u_i+u_i\triangleright(m\triangleleft x)). \end{eqnarray*} Thus we obtain that the equations (HM1)--(HM6) are hold. \end{proof}
\begin{theorem}\label{r-bialg2}
Let $(A, \cdot, r)$ be a quasitriangular alternative bialgebra. Then $A$ becomes a braided alternative bialgebra over itself, with $M=A$ and with $\phi: M \rightarrow A \otimes M$ and $\psi: M \rightarrow M \otimes A$ given by
\begin{equation} \phi(a):=-\sum_{i} u_{i} \otimes a v_{i} ,\quad \psi(a):= \sum_{i} u_{i}a \otimes v_{i}. \end{equation} \end{theorem}
\begin{proof} For the braided terms in (BB1), we have \begin{eqnarray*} &&(a{}_{(-1)}\triangleright b)\otimes a{}_{(0)}+(a{}_{(1)}\triangleright b)\otimes a{}_{(0)}-a{}_{(0)}\otimes (b\triangleleft a{}_{(-1)})+b{}_{(0)}\otimes (a\triangleleft b{}_{(1)})\\ &&+b{}_{(0)}\otimes(b{}_{(1)}\triangleright a)-(a\triangleleft b{}_{(-1)})\otimes b{}_{(0)}\\ &=&-u_ib\otimes av_i+v_ib\otimes u_ia+av_i\otimes bu_i+u_ib\otimes av_i+u_ib\otimes v_ia+au_i\otimes bv_i\\ &=&v_ib\otimes u_ia+av_i\otimes bu_i-v_ib\otimes u_ia-av_i\otimes bu_i\\ &=&0. \end{eqnarray*} For the braided terms in (BB2), we have \begin{eqnarray*} &&a{}_{(0)}\otimes(b\triangleleft a{}_{(1)})+(b\triangleleft a{}_{(1)})\otimes a{}_{(0)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleright a)+(b{}_{(-1)}\triangleright a)\otimes b{}_{(0)}\\ &=&u_ia\otimes bv_i+bv_i\otimes u_ia-bv_i\otimes u_ia-u_ia\otimes bv_i\\ &=&0. \end{eqnarray*} Thus $(A, \cdot, r)$ is a braided alternative bialgebra in the category of alternative Hopf bimodules over itself. \end{proof}
In the above Theorem \ref{r-bialg2} what we have obtained is a braided alternative bialgebra with zero braided terms. It is an open question for us whether there exists a braided alternative bialgebra with nonzero braided terms coming from a quasitriangular alternative bialgebra.
\section{Unified product of braided alternative bialgebras} \subsection{Matched pair of braided alternative bialgebras}
In this section, we construct an alternative bialgebra from the double cross biproduct of a matched pair of braided alternative bialgebras.
Let $A, H$ be both alternative algebras and alternative coalgebras. For any $a, b\in A$, $x, y\in H$, we denote maps \begin{align*} &\rightharpoonup: H \otimes A \to A,\quad \leftharpoonup: A\otimes H\to A,\\ &\triangleright: A\otimes H \to H,\quad \triangleleft: H\otimes A \to H,\\ &\phi: A \to H \otimes A,\quad \psi: A \to A\otimes H,\\ &\rho: H \to A\otimes H,\quad \gamma: H \to H \otimes A, \end{align*} by \begin{eqnarray*} && \rightharpoonup (x \otimes a) = x \rightharpoonup a, \quad \leftharpoonup(a\otimes x) = a \leftharpoonup x, \\ && \triangleright (a \otimes x) = a \triangleright x, \quad \triangleleft(x \otimes a) = x \triangleleft a, \\ && \phi (a)=\sum a{}_{(-1)}\otimes a{}_{(0)}, \quad \psi (a) = \sum a{}_{(0)} \otimes a{}_{(1)},\\ && \rho (x)=\sum x{}_{[-1]}\otimes x{}_{[0]}, \quad \gamma (x) = \sum x{}_{[0]} \otimes x{}_{[1]}. \end{eqnarray*}
\begin{definition}\cite{NB} A \emph{matched pair} of alternative algebras is a system $(A, \, {H},\, \triangleleft, \, \triangleright, \, \leftharpoonup, \, \rightharpoonup)$ consisting of two alternative algebras $A$ and ${H}$ and four bilinear maps $\triangleleft : {H}\otimes A\to {H}$, $\triangleright : {A} \otimes H \to H$, $\leftharpoonup:A \otimes {H} \to A$, $\rightharpoonup: H\otimes {A} \to {A}$ such that $({H},\triangleright,\triangleleft)$ is an $A$-bimodule, $(A,\rightharpoonup,\leftharpoonup)$ is an ${H}$-bimodule and satisfying the following compatibilities for all $a, b\in A$, $x, y \in {H}$: \begin{enumerate} \item[(AM1)] $x\rightharpoonup (ab)+a(x\rightharpoonup b)+a\leftharpoonup(x\triangleleft b)=(x\rightharpoonup a+a\leftharpoonup x) b+ (x\triangleleft a+a\triangleright x)\rightharpoonup b$,
\item[(AM2)]$x\rightharpoonup (ab+ b a)=(x\rightharpoonup a)b+(x\triangleleft a)\rightharpoonup b+(x\rightharpoonup b)a+(x\triangleleft b)\rightharpoonup a$,
\item[(AM3)] $ (ab) \leftharpoonup x+(a\leftharpoonup x)b+(a\triangleright x)\rightharpoonup b= a(b\leftharpoonup x+x\rightharpoonup b)+a\leftharpoonup(b\triangleright x+x\triangleleft b)$,
\item[(AM4)] $ (ab+b a )\leftharpoonup x=a(b \leftharpoonup x) + a \leftharpoonup ( b \triangleright x)+b(a\leftharpoonup x)+b\leftharpoonup(a\triangleright x)$,
\item[(AM5)]$a\triangleright (x y)+x(a\triangleright y)+x\triangleleft(a\leftharpoonup y)=(x\triangleleft a+a\triangleright x)y+(x\rightharpoonup a+a\leftharpoonup x)\triangleright y$,
\item[(AM6)] $ a\triangleright (x y+ y x)= (a\triangleright x)y+(a\leftharpoonup x)\triangleright y+(a\triangleright y)x+(a\leftharpoonup y)\triangleright x$,
\item[(AM7)] $ (x y)\triangleleft a+(x\triangleleft a)y+(x\rightharpoonup a)\triangleright y=x(y\triangleleft a+a\triangleright y)+x\triangleleft(y\rightharpoonup a+a\leftharpoonup y) $,
\item[(AM8)] $(x y+y x)\triangleleft a=x(y\triangleleft a)+x\triangleleft(y\rightharpoonup a)+y(x\triangleleft a)+y\triangleleft(x\rightharpoonup a)$. \end{enumerate} \end{definition}
\begin{lemma}\cite{NB} Let $(A, \, {H},\, \triangleleft, \, \triangleright, \, \leftharpoonup, \, \rightharpoonup)$be a matched pair of alternative algebras. Then $A \, \bowtie {H}:= A \oplus {H}$, as a vector space, with the multiplication defined for any $a, b\in A$ and $x, y\in {H}$ by \begin{equation} (a +x) (b+ y) := (ab+ a \leftharpoonup y + x\rightharpoonup b)+( a\triangleright y + x\triangleleft b + xy ) \end{equation} is an alternative algebra called the \emph{bicrossed product} associated to the matched pair of alternative algebras $A$ and ${H}$. \end{lemma}
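For instance, if all four actions are zero, then the compatibility conditions (AM1)--(AM8) hold trivially and the bicrossed product $A \bowtie {H}$ is just the direct sum alternative algebra $A\oplus H$ with componentwise multiplication $(a+x)(b+y)=ab+xy$.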
Now we introduce the notion of matched pairs of alternative coalgebras, which is the dual version of matched pairs of alternative algebras.
\begin{definition} A \emph{matched pair} of alternative coalgebras is a system $(A, \, {H}, \, \phi, \, \psi, \, \rho, \, \gamma)$ consisting of two alternative coalgebras $A$ and ${H}$ and four bilinear maps $\phi: {A}\to H\otimes A$, $\psi: {A}\to A \otimes H$, $\rho: H\to A\otimes {H}$, $\gamma: H \to {H} \otimes {A}$ such that $({H},\rho, \gamma)$ is an $A$-bicomodule, $(A,\phi, \, \psi)$ is an ${H}$-bicomodule and satisfying the following compatibility conditions for any $a\in A$, $x\in {H}$: \begin{enumerate} \item[(CM1)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)\\ =-\tau_{12}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(CM2)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{12}\big(\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\big)$,
\item[(CM3)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes\gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{12}\big(\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes\gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\big)$,
\item[(CM4)] $ \rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\\ =-\tau_{12}\big(\gamma(x{}_{1})\otimes x{}_{2}+\phi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(CM5)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)\\ =-\tau_{23}\big(\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)\big)$,
\item[(CM6)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{23}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(CM7)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes\gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{23}\big(\gamma(x{}_{1})\otimes x{}_{2}+\phi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(CM8)] $\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\\ =-\tau_{23}\big(\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\big)$. \end{enumerate} \end{definition}
\begin{lemma}\label{lem1} Let $(A, \, {H}, \, \phi, \, \psi, \, \rho, \, \gamma)$ be a matched pair of alternative coalgebras. We define $E=A{\,\blacktriangleright\!\!\blacktriangleleft\, } H$ as the vector space $A\oplus H$ with comultiplication $$\Delta_{E}(a)=(\Delta_{A}+\phi+\psi)(a),\quad\Delta_{E}(x)=(\Delta_{H}+\rho+\gamma)(x),$$ that is $$\Delta_{E}(a)=\sum a{}_{1} \otimes a{}_{2}+\sum a{}_{(-1)} \otimes a{}_{(0)}+\sum a{}_{(0)}\otimes a{}_{(1)}, $$ $$\Delta_{E}(x)=\sum x{}_{1} \otimes x{}_{2}+\sum x{}_{[-1]} \otimes x{}_{[0]}+\sum x{}_{[0]} \otimes x{}_{[1]}.$$ Then $A{\,\blacktriangleright\!\!\blacktriangleleft\, } H$ is an alternative coalgebra which is called the \emph{bicrossed coproduct} associated to the matched pair of alternative coalgebras $A$ and $H$. \end{lemma}
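Dually, if the four coactions $\phi, \psi, \rho, \gamma$ are all zero, then $\Delta_{E}(a)=\Delta_{A}(a)$ and $\Delta_{E}(x)=\Delta_{H}(x)$, so $A{\,\blacktriangleright\!\!\blacktriangleleft\, } H$ is simply the direct sum of the alternative coalgebras $A$ and $H$, and the conditions (CM1)--(CM8) hold trivially.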
The proof of Lemma \ref{lem1} is omitted since it follows by direct computation. In the rest of this section, we construct alternative bialgebras from the double cross biproduct of a pair of braided alternative bialgebras. First we generalize the concept of a Hopf module to the case where $A$ is not necessarily an alternative bialgebra; by abuse of notation, we still call it an alternative Hopf module.
\begin{definition} Let $A$ be simultaneously an alternative algebra and an alternative coalgebra.
If $H$ is an $A$-bimodule and an $A$-bicomodule satisfying
\begin{enumerate}
\item[(HM1')] $\rho(a \triangleright x)=-a{}_{2}\otimes(x\triangleleft a{}_{1})+x{}_{[-1]}\otimes(a\triangleright x{}_{[0]})+x{}_{[-1]}\otimes(x{}_{[0]}\triangleleft a)-a x_{[-1]} \otimes x_{[0]}$,
\item[(HM2')] $\gamma(x\triangleleft a)=(x{}_{[0]}\triangleleft a)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft a)\otimes x{}_{[-1]}-x{}_{[0]}\otimes a x{}_{[-1]}-\left(x\triangleleft a_{1}\right) \otimes a_{2}$,
\item[(HM3')] $\rho(x\triangleleft a)=x{}_{[-1]} a\otimes x{}_{[0]}+x{}_{[1]} a\otimes x{}_{[0]}-x{}_{[1]}\otimes(a\triangleright x{}_{[0]})+a{}_{1}\otimes(x\triangleleft a{}_{2})+a{}_{1}\otimes(a{}_{2}\triangleright x)$,
\item[(HM4')] $\gamma(a \triangleright x)=(a{}_{1}\triangleright x)\otimes a{}_{2}+(a{}_{2}\triangleright x)\otimes a{}_{1}+x{}_{[0]}\otimes a x{}_{[1]}+x{}_{[0]}\otimes x{}_{[1]} a-\left(a\triangleright x_{[0]}\right)\otimes x_{[1]}$,
\item[(HM5')] $\rho(a\triangleright x)+\tau\gamma(a\triangleright x)=x{}_{[-1]}\otimes(a\triangleright x{}_{[0]})+a x{}_{[1]}\otimes x{}_{[0]}+a{}_{2}\otimes (a{}_{1}\triangleright x)$,
\item[(HM6')] $\rho(x\triangleleft a)+\tau\gamma(x\triangleleft a)=a{}_{1}\otimes(x\triangleleft a{}_{2})+x{}_{[-1]} a\otimes x{}_{[0]}+x{}_{[1]}\otimes(x{}_{[0]}\triangleleft a)$,
\end{enumerate}
\noindent then $H$ is called an \emph{alternative Hopf module} over $A$. \end{definition}
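For instance, if the two actions and the two coactions of $A$ on $H$ are all zero maps, then both sides of (HM1')--(HM6') vanish, so $H$ equipped with the trivial $A$-bimodule and $A$-bicomodule structures is an alternative Hopf module over $A$.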
We denote the category of alternative Hopf modules over $A$ by ${}^{A}_{A}\mathcal{M}^{A}_{A}$.
\begin{definition} Let $A$ be both an alternative algebra and an alternative coalgebra, and let $H$ be an alternative Hopf module over $A$. If $H$ is an alternative algebra and an alternative coalgebra in ${}^{A}_{A}\mathcal{M}^{A}_{A}$, then we call $H$ a \emph{braided alternative bialgebra} over $A$ if the following conditions are satisfied: \begin{enumerate} \item[(BB3)] $\Delta_{H}(x y)=x{}_{1} y\otimes x{}_{2}+x{}_{2} y\otimes x{}_{1}-x{}_{2}\otimes y x{}_{1}+y{}_{1}\otimes x y{}_{2}+y{}_{1}\otimes y{}_{2} x-x y{}_{1}\otimes y{}_{2}\\ +(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}-x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})\\ +y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]},$ \item[(BB4)] $\Delta_{H}(y x)+\tau\Delta_{H}(y x)=x{}_{1}\otimes y x{}_{2}+y x{}_{2}\otimes x{}_{1}+y{}_{1} x\otimes y{}_{2}+y{}_{2}\otimes y{}_{1} x\\ +x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x)+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}.$ \end{enumerate} \end{definition}
\begin{definition}\label{def:dmp} Let $A, H$ be both alternative algebras and alternative coalgebras. If the following conditions hold: \begin{enumerate} \item[(DM1)] $\phi(a b)=-a{}_{(1)}\otimes b a{}_{(0)}+b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes b{}_{(0)} a\\ +(a{}_{(-1)}\triangleleft b)\otimes a{}_{(0)}+(a{}_{(1)}\triangleleft b)\otimes a{}_{(0)}-(a\triangleright b{}_{(-1)})\otimes b{}_{(0)}$, \item[(DM2)] $\psi(a b)=a{}_{(0)} b\otimes a{}_{(1)}+a{}_{(0)} b\otimes a{}_{(-1)}-a b{}_{(0)}\otimes b{}_{(1)}\\ -a{}_{(0)}\otimes (b\triangleright a{}_{(-1)})+b{}_{(0)}\otimes(a\triangleright b{}_{(1)})+b{}_{(0)}\otimes(b{}_{(1)}\triangleleft a)$, \item[(DM3)] $\rho(x y)=(x{}_{[1]}\leftharpoonup y)\otimes x{}_{[0]}+(x{}_{[-1]}\leftharpoonup y)\otimes x{}_{[0]}-x_{[1]} \otimes y x_{[0]} \\ -\left(x \rightharpoonup y_{[-1]}\right) \otimes y_{[0]}+y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes y{}_{[0]} x$, \item[(DM4)] $\gamma(x y)=x{}_{[0]} y\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[-1]}-x_{[0]}\otimes (y\rightharpoonup x{}_{[-1]})\\ -xy_{[0]}\otimes y_{[1]}+y{}_{[0]}\otimes (x\rightharpoonup y{}_{[1]})+y{}_{[0]}\otimes(y{}_{[1]}\leftharpoonup x)$, \item[(DM5)] $\Delta_{A}(x \rightharpoonup b)=(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[1]}+(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[-1]}-x{}_{[1]}\otimes(b\leftharpoonup x{}_{[0]})\\ -\left(x \rightharpoonup b_{1}\right) \otimes b_{2}+b{}_{1}\otimes(x\rightharpoonup b{}_{2})+b{}_{1}\otimes(b{}_{2}\leftharpoonup x)$, \item[(DM6)] $\Delta_{A}(a\leftharpoonup y)=(a{}_{1}\leftharpoonup y)\otimes a{}_{2}+\left(a_{2} \leftharpoonup y\right)\otimes a{}_{1}-a{}_{2}\otimes(y\rightharpoonup a{}_{1})\\ -\left(a\leftharpoonup y_{[0]}\right) \otimes y_{[1]}+y{}_{[-1]}\otimes(a\leftharpoonup y{}_{[0]})+y{}_{[-1]}\otimes(y{}_{[0]}\rightharpoonup a)$, \item[(DM7)] $\Delta_{H}(a \triangleright y)=(a{}_{(0)}\triangleright y)\otimes a{}_{(1)}+(a{}_{(0)}\triangleright y)\otimes a{}_{(-1)}-a{}_{(1)}\otimes(y\triangleleft a{}_{(0)})\\ +y{}_{1}\otimes(a\triangleright y{}_{2})+y{}_{1}\otimes(y{}_{2}\triangleleft a)-\left(a \triangleright y_{1}\right) \otimes y_{2}$, \item[(DM8)] $\Delta_{H}(x \triangleleft b)=(x{}_{1}\triangleleft b)\otimes x{}_{2}+(x{}_{2}\triangleleft b)\otimes x{}_{1}-x{}_{2}\otimes(b\triangleright x{}_{1})\\ +b{}_{(-1)}\otimes(x\triangleleft b{}_{(0)})+b{}_{(-1)}\otimes(b{}_{(0)}\triangleright x)-\left(x\triangleleft b_{(0)}\right) \otimes b_{(1)}$, \item[(DM9)] $\phi(x \rightharpoonup b)+\gamma(x\triangleleft b)=(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}$\\ $-x{}_{2}\otimes(b\leftharpoonup x{}_{1})-x{}_{[0]}\otimes b x{}_{[-1]}+b{}_{(-1)}\otimes(x\rightharpoonup b{}_{(0)})$\\ $-\left(x\triangleleft b_{1}\right) \otimes b_{2}+b{}_{(-1)}\otimes(b{}_{(0)}\leftharpoonup x)-x b_{(-1)} \otimes b_{(0)}$, \item[(DM10)] $\psi(a\leftharpoonup y)+\rho(a \triangleright y)=(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(1)}+(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(-1)}$\\ $-a{}_{2}\otimes(y\triangleleft a{}_{1})-a{}_{(0)}\otimes y a{}_{(-1)}+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})$\\ $-\left(a\leftharpoonup y_{1}\right) \otimes y_{2}+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)-a y_{[-1]} \otimes y_{[0]}$, \item[(DM11)] $\psi(x \rightharpoonup b)+\rho(x\triangleleft b)=(x{}_{1}\rightharpoonup b)\otimes x{}_{2}+x{}_{[-1]} b\otimes x{}_{[0]}+(x{}_{2}\rightharpoonup b)\otimes x{}_{1}\\ +x{}_{[1]} b\otimes x{}_{[0]}-x{}_{[1]}\otimes(b\triangleright x{}_{[0]})+b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{(0)}\otimes x 
b{}_{(1)}\\ +b{}_{1}\otimes(b{}_{2}\triangleright x)+b{}_{(0)}\otimes b{}_{(1)} x-(x \rightharpoonup b_{(0)}) \otimes b_{(1)}$, \item[(DM12)] $\phi(a\leftharpoonup y)+\gamma(a \triangleright y)=(a{}_{1}\triangleright y)\otimes a{}_{2}+a{}_{(-1)} y\otimes a{}_{(0)}+(a{}_{2}\triangleright y)\otimes a{}_{1}\\ +a{}_{(1)} y\otimes a{}_{(0)}-a{}_{(1)}\otimes (y\rightharpoonup a{}_{(0)})+y{}_{1}\otimes(a\leftharpoonup y{}_{2})+y{}_{[0]}\otimes a y{}_{[1]}\\ +y{}_{1}\otimes(y{}_{2}\rightharpoonup a)+y{}_{[0]}\otimes y{}_{[1]} a-\left(a\triangleright y_{[0]}\right)\otimes y_{[1]}$, \item[(DM13)] $\phi(b a)+\tau\psi(b a)=a{}_{(-1)}\otimes b a{}_{(0)}+(b\triangleright a{}_{(1)})\otimes a{}_{(0)}\\ +(b{}_{(-1)}\triangleleft a)\otimes b{}_{(0)}+b{}_{(1)}\otimes b{}_{(0)} a$, \item[(DM14)] $\psi(b a)+\tau\phi(b a)=a{}_{(0)}\otimes(b\triangleright a{}_{(1)})+b a{}_{(0)} \otimes a{}_{(-1)}\\ + b{}_{(0)} a\otimes b{}_{(1)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleleft a)$, \item[(DM15)] $\rho(y x)+\tau\gamma(y x)=x{}_{[-1]}\otimes y x{}_{[0]}+(y\rightharpoonup x{}_{[1]})\otimes x{}_{[0]}\\ +(y{}_{[-1]}\leftharpoonup x)\otimes y{}_{[0]}+y{}_{[1]}\otimes y{}_{[0]} x$, \item[(DM16)] $\gamma(y x)+\tau\rho(y x)=y x{}_{[0]}\otimes x{}_{[-1]}+y{}_{[0]} x\otimes y{}_{[1]}\\ +y{}_{[0]}\otimes(y{}_{[-1]}\leftharpoonup x)+x{}_{[0]}\otimes(y\rightharpoonup x{}_{[1]})$, \item[(DM17)] $\Delta_{A}(b \leftharpoonup x)+\tau\Delta_{A}(b \leftharpoonup x)=x{}_{[-1]}\otimes(b\leftharpoonup x{}_{[0]})+(b\leftharpoonup x{}_{[0]})\otimes x{}_{[-1]}\\ +(b{}_{1}\leftharpoonup x)\otimes b{}_{2}+b{}_{2}\otimes(b{}_{1}\leftharpoonup x)$, \item[(DM18)] $\Delta_{A}(y\rightharpoonup a)+\tau\Delta_{A}(y\rightharpoonup a)=a{}_{1}\otimes(y\rightharpoonup a{}_{2})+(y\rightharpoonup a{}_{2})\otimes a{}_{1}\\ +(y{}_{[0]}\rightharpoonup a)\otimes y{}_{[1]}+y{}_{[1]}\otimes(y{}_{[0]}\rightharpoonup a)$, \item[(DM19)] $\Delta_{H}(y\triangleleft a)+\tau\Delta_{H}(y\triangleleft a)=a{}_{(-1)}\otimes(y\triangleleft a{}_{(0)})+(y\triangleleft a{}_{(0)})\otimes a{}_{(-1)}\\ +(y{}_{1}\triangleleft a)\otimes y{}_{2}+y{}_{2}\otimes(y{}_{1}\triangleleft a)$, \item[(DM20)] $\Delta_{H}(b \triangleright x)+\tau\Delta_{H}(b \triangleright x)=x{}_{1}\otimes(b\triangleright x{}_{2})+(b\triangleright x{}_{2})\otimes x{}_{1}\\ +(b{}_{(0)}\triangleright x)\otimes b{}_{(1)}+b{}_{(1)}\otimes (b{}_{(0)}\triangleright x)$, \item[(DM21)] $\phi(y \rightharpoonup a)+\tau\psi(y \rightharpoonup a)+\gamma(y\triangleleft a)+\tau\rho(y\triangleleft a)\\ =a{}_{(-1)}\otimes(y\rightharpoonup a{}_{(0)})+(y\triangleleft a{}_{2})\otimes a{}_{1}+y a{}_{(1)}\otimes a{}_{(0)}\\ +(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}+y{}_{2}\otimes(y{}_{1}\rightharpoonup a)+y{}_{[0]}\otimes y{}_{[-1]} a$, \item[(DM22)] $\phi(b \leftharpoonup x)+\tau\psi(b \leftharpoonup x)+\gamma(b\triangleright x)+\tau\rho(b\triangleright x)\\ =x{}_{1}\otimes(b\leftharpoonup x{}_{2})+x{}_{[0]}\otimes b x{}_{[1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}\\ +(b{}_{1}\triangleright x)\otimes b{}_{2}+b{}_{(-1)} x\otimes b{}_{(0)}+b{}_{(1)}\otimes(b{}_{(0)}\leftharpoonup x)$, \item[(DM23)] $\psi(y \rightharpoonup a)+\tau\phi(y \rightharpoonup a)+\rho(y\triangleleft a)+\tau\gamma(y\triangleleft a)\\ =a{}_{1}\otimes(y\triangleleft a{}_{2})+a{}_{(0)}\otimes y a{}_{(1)}+(y\rightharpoonup a{}_{(0)})\otimes a{}_{(-1)}\\ +(y{}_{1}\rightharpoonup a)\otimes y{}_{2}+y{}_{[-1]} a\otimes y{}_{[0]}+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)$, \item[(DM24)] $\psi(b \leftharpoonup x)+\tau\phi(b \leftharpoonup 
x)+\rho(b\triangleright x)+\tau\gamma(b\triangleright x)\\ =x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})+(b\leftharpoonup x{}_{2})\otimes x{}_{1}+b x{}_{[1]}\otimes x{}_{[0]}\\ +(b{}_{(0)}\leftharpoonup x)\otimes b{}_{(1)}+b{}_{2}\otimes(b{}_{1}\triangleright x)+b{}_{(0)}\otimes b{}_{(-1)} x$, \end{enumerate} \noindent then $(A, H)$ is called a \emph{double matched pair}. \end{definition}
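For example, if the four actions and the four coactions connecting $A$ and $H$ are all zero, then every term appearing in (DM1)--(DM24) vanishes, so $(A, H)$ is automatically a double matched pair.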
\begin{theorem}\label{main1} Let $(A, H)$ be both a matched pair of alternative algebras and a matched pair of alternative coalgebras, let $A$ be a braided alternative bialgebra in ${}^{H}_{H}\mathcal{M}^{H}_{H}$, and let $H$ be a braided alternative bialgebra in ${}^{A}_{A}\mathcal{M}^{A}_{A}$. Define the double cross biproduct of $A$ and $H$, denoted by $A{\ \cdot\kern-.60em\triangleright\kern-.33em\triangleleft\kern-.33em\cdot\, } H$, by $A{\ \cdot\kern-.60em\triangleright\kern-.33em\triangleleft\kern-.33em\cdot\, } H=A\bowtie H$ as an alternative algebra and $A{\ \cdot\kern-.60em\triangleright\kern-.33em\triangleleft\kern-.33em\cdot\, } H=A{\,\blacktriangleright\!\!\blacktriangleleft\, } H$ as an alternative coalgebra. Then $A{\ \cdot\kern-.60em\triangleright\kern-.33em\triangleleft\kern-.33em\cdot\, } H$ becomes an alternative bialgebra if and only if $(A, H)$ forms a double matched pair. \end{theorem}
The proof of Theorem \ref{main1} is omitted since it is a special case of Theorem \ref{main2} in the next subsection.
\subsection{Cocycle bicrossproduct alternative bialgebras}\label{subsetion-4.2}
In this subsection, we construct cocycle bicrossproduct alternative bialgebras, which generalize the double cross biproduct.
Let $A, H$ be both alternative algebras and alternative coalgebras. For $a, b\in A$, $x, y\in H$, we consider maps \begin{align*} &\sigma: H\otimes H \to A,\quad \theta: A\otimes A \to H,\\ &P: A \to H\otimes H,\quad Q: H \to A\otimes A, \end{align*} whose values we write as \begin{eqnarray*} && \sigma (x,y) \in A, \quad \theta(a, b) \in H,\\ && P(a)=\sum a{}_{<1>}\otimes a{}_{<2>}, \quad Q(x) = \sum x{}_{\{1\}} \otimes x{}_{\{2\}}. \end{eqnarray*}
A bilinear map $\sigma: H\otimes H\to A$ is called a cocycle on $H$ if \begin{enumerate} \item[(CC1)] $\sigma(x y, z)+\sigma(x, y)\leftharpoonup z-x \rightharpoonup \sigma(y, z)-{\sigma}(x, y z)\\ =-\sigma(y, x)\leftharpoonup z-\sigma(y x,z)+y\rightharpoonup\sigma(x,z)+\sigma(y,xz),$ \item[(CC2)] $\sigma(x y, z)+\sigma(x, y)\leftharpoonup z-x \rightharpoonup \sigma(y, z)-{\sigma}(x, y z)\\ =-\sigma(x,z)\leftharpoonup y-\sigma(x z, y)+x\rightharpoonup\sigma(z,y)+\sigma(x,zy).$ \end{enumerate}
A bilinear map $\theta: A\otimes A\to H$ is called a cocycle on $A$ if \begin{enumerate} \item[(CC3)] $\theta(a b, c)+\theta(a, b) \triangleleft c-a\triangleright \theta(b, c)-\theta(a, b c)\\ =-\theta(b a, c)-\theta(b ,a) \triangleleft c+b\triangleright \theta(a, c)+\theta(b, a c),$ \item[(CC4)] $\theta(a b, c)+\theta(a, b) \triangleleft c-a\triangleright \theta(b, c)-\theta(a, b c)\\ =-\theta(a c, b)-\theta(a ,c) \triangleleft b+a\triangleright \theta(c ,b)+\theta(a,cb).$ \end{enumerate}
A linear map $P: A\to H\otimes H$ is called a cycle on $A$ if \begin{enumerate} \item[(CC5)] $\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\\ =-\tau_{12}\big(\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\big)$, \item[(CC6)] $\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\\ =-\tau_{23}\big(\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\big)$. \end{enumerate}
A linear map $Q: H\to A\otimes A$ is called a cycle on $H$ if \begin{enumerate} \item[(CC7)] $ \Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\\ =-\tau_{12}\big(\Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\big)$, \item[(CC8)] $ \Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\\ =-\tau_{23}\big(\Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\big)$. \end{enumerate}
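Note that the zero maps always satisfy these conditions: $\sigma=0$ satisfies (CC1)--(CC2), $\theta=0$ satisfies (CC3)--(CC4), $P=0$ satisfies (CC5)--(CC6), and $Q=0$ satisfies (CC7)--(CC8), since in each case every term vanishes.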
In the following definitions, we introduce the concepts of cocycle alternative algebras and cycle alternative coalgebras; these are in general not ordinary alternative algebras and alternative coalgebras, but generalizations of them.
\begin{definition} (i) Let $\sigma$ be a cocycle on a vector space $H$ equipped with a multiplication $H \otimes H \to H$, satisfying the following cocycle alternative identities: \begin{enumerate} \item[(CC9)] $(x y) z+\sigma(x, y) \triangleright z-x(y z)-x \triangleleft \sigma(y, z)\\ =-(y x) z-\sigma(y, x) \triangleright z+y(x z)+y \triangleleft \sigma(x, z)$, \item[(CC10)] $(x y) z+\sigma(x, y) \triangleright z-x(y z)-x \triangleleft \sigma(y, z)\\ =-(x z) y-\sigma(x, z) \triangleright y+x(z y)+x \triangleleft \sigma(z, y)$. \end{enumerate} Then $H$ is called a cocycle $\sigma$-alternative algebra, which is denoted by $(H,\sigma)$.
(ii) Let $\theta$ be a cocycle on a vector space $A$ equipped with a multiplication $A \otimes A \to A$, satisfying the following cocycle alternative identities: \begin{enumerate} \item[(CC11)] $(a b) c+\theta(a, b) \rightharpoonup c-a(b c)-a\leftharpoonup \theta(b, c)\\ =-(b a) c-\theta(b, a) \rightharpoonup c+b(a c)+b\leftharpoonup \theta(a, c)$, \item[(CC12)] $(a b) c+\theta(a, b) \rightharpoonup c-a(b c)-a\leftharpoonup \theta(b, c)\\ =-(a c) b-\theta(a, c) \rightharpoonup b+a(c b)+a\leftharpoonup \theta(c, b)$. \end{enumerate} Then $A$ is called a cocycle $\theta$-alternative algebra, which is denoted by $(A,\theta)$.
(iii) Let $P$ be a cycle on a vector space $H$ equipped with a comultiplication $\Delta: H \to H \otimes H$, satisfying the following cycle coalternative identities: \begin{enumerate} \item[(CC13)] $\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\\ =-\tau_{12}\big(\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\big)$, \item[(CC14)] $\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\\ =-\tau_{23}\big(\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\big)$. \end{enumerate} \noindent Then $H$ is called a cycle $P$-alternative coalgebra, which is denoted by $(H, P)$.
(iv) Let $Q$ be a cycle on a vector space $A$ equipped with a comultiplication $\Delta: A \to A \otimes A$, satisfying the following cycle coalternative identities: \begin{enumerate} \item[(CC15)] $\Delta_A(a{}_{1})\otimes a{}_{2}+Q(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes \Delta_A(a{}_{2})-a{}_{(0)}\otimes Q(a{}_{(1)})\\ =-\tau_{12}\big(\Delta_A(a{}_{1})\otimes a{}_{2}+Q(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes \Delta_A(a{}_{2})-a{}_{(0)}\otimes Q(a{}_{(1)})\big)$, \item[(CC16)] $\Delta_A(a{}_{1})\otimes a{}_{2}+Q(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes \Delta_A(a{}_{2})-a{}_{(0)}\otimes Q(a{}_{(1)})\\ =-\tau_{23}\big(\Delta_A(a{}_{1})\otimes a{}_{2}+Q(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes \Delta_A(a{}_{2})-a{}_{(0)}\otimes Q(a{}_{(1)})\big)$. \end{enumerate} \noindent Then $A$ is called a cycle $Q$-alternative coalgebra, which is denoted by $(A, Q)$. \end{definition}
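In particular, when $\sigma=0$ (resp.\ $\theta=0$) the identities (CC9)--(CC10) (resp.\ (CC11)--(CC12)) are exactly the linearized forms of the left and right alternative laws, so an ordinary alternative algebra can be regarded as a cocycle $0$-alternative algebra. Dually, when $P=0$ (resp.\ $Q=0$) the identities (CC13)--(CC14) (resp.\ (CC15)--(CC16)) reduce to the corresponding identities of an alternative coalgebra.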
\begin{definition} A \emph{cocycle cross product system} is a pair consisting of a cocycle $\theta$-alternative algebra $(A,\theta)$ and a cocycle $\sigma$-alternative algebra $(H,\sigma)$, where $\sigma: H\otimes H\to A$ is a cocycle on $H$ and $\theta: A\otimes A\to H$ is a cocycle on $A$, such that the following conditions are satisfied: \begin{enumerate} \item[(CP1)] $(ab+b a)\leftharpoonup x+\sigma(\theta(a,b),x)+\sigma(\theta(b,a),x)\\ =a(b\leftharpoonup x)+a\leftharpoonup(b\triangleright x)+b(a\leftharpoonup x)+b\leftharpoonup(a\triangleright x)$,
\item[(CP2)] $x\rightharpoonup (a b)+a(x\rightharpoonup b)+a\leftharpoonup(x\triangleleft b)+\sigma(x,\theta(a,b))\\ =(x\rightharpoonup a+a\leftharpoonup x)b+(x\triangleleft a+a\triangleright x)\rightharpoonup b$,
\item[(CP3)] $ a\leftharpoonup (x y)+a\sigma(x,y)+x\rightharpoonup(a\leftharpoonup y)+\sigma(x,a\triangleright y)\\ =(a\leftharpoonup x+x\rightharpoonup a)\leftharpoonup y+\sigma(a\triangleright x,y)+\sigma(x\triangleleft a,y)$,
\item[(CP4)] $(x y+y x)\rightharpoonup a+(\sigma(x,y)+\sigma(y,x))a\\ =x\rightharpoonup(y\rightharpoonup a)+\sigma(x,y\triangleleft a)+y\rightharpoonup(x\rightharpoonup a)+\sigma(y,x\triangleleft a)$,
\item[(CP5)] $(ab +ba)\triangleright x+(\theta(a, b) +\theta(b,a))x\\ =a\triangleright (b\triangleright x)+\theta(a, b\leftharpoonup x)+b\triangleright (a\triangleright x)+\theta(b, a\leftharpoonup x)$,
\item[(CP6)] $x \triangleleft (a b)+a\triangleright(x\triangleleft b)+\theta(a,x\rightharpoonup b)+x\theta(a, b)\\ =(x \triangleleft a+a\triangleright x) \triangleleft b+\theta(x \rightharpoonup a, b)+\theta(a\leftharpoonup x,b)$,
\item[(CP7)] $ a \triangleright (x y)+x(a\triangleright y)+x\triangleleft(a\leftharpoonup y)+\theta(a, \sigma(x, y))\\ =(a \triangleright x+x\triangleleft a) y+(a\leftharpoonup x+x\rightharpoonup a) \triangleright y$,
\item[(CP8)] $(x y +y x )\triangleleft a+\theta(\sigma(x, y), a)+\theta(\sigma(y,x), a)\\ =x \triangleleft(y \rightharpoonup a)+x(y \triangleleft a)+y \triangleleft(x \rightharpoonup a)+y(x \triangleleft a)$,
\item[(CP9)] $x \rightharpoonup (a b+ b a)+\sigma(x, \theta(a, b))+\sigma(x, \theta(b,a))\\ =(x\rightharpoonup a) b+(x\triangleleft a) \rightharpoonup b+(x\rightharpoonup b) a+(x\triangleleft b) \rightharpoonup a$,
\item[(CP10)] $(ab)\leftharpoonup x+(a\leftharpoonup x)b+(a\triangleright x)\rightharpoonup b+\sigma(\theta(a, b), x)\\ =a(b\leftharpoonup x+x\rightharpoonup b)+a\leftharpoonup (b \triangleright x+x\triangleleft b)$,
\item[(CP11)] $(x y) \rightharpoonup a+(x\rightharpoonup a)\leftharpoonup y+\sigma(x\triangleleft a,y)+\sigma(x,y)a\\ =x\rightharpoonup(y\rightharpoonup a+a\leftharpoonup y)+\sigma(x, y\triangleleft a)+\sigma(x,a\triangleright y)$,
\item[(CP12)] $a\leftharpoonup (x y+ y x)+a( \sigma(x, y)+ \sigma(y, x))\\ =(a\leftharpoonup x)\leftharpoonup y+\sigma(a \triangleright x, y)+(a\leftharpoonup y)\leftharpoonup x+\sigma(a \triangleright y, x)$,
\item[(CP13)] $x \triangleleft (a b+ b a)+x(\theta(a,b)+\theta(b,a))\\ =(x\triangleleft a)\triangleleft b+\theta(x\rightharpoonup a, b) +(x\triangleleft b)\triangleleft a+\theta(x\rightharpoonup b, a) $,
\item[(CP14)] $(a b)\triangleright x+\theta(a,b)x+(a\triangleright x)\triangleleft b+\theta(a\leftharpoonup x,b)\\ =a\triangleright(b\triangleright x+x\triangleleft b)+\theta(a,b\leftharpoonup x)+\theta(a,x\rightharpoonup b)$,
\item[(CP15)] $(x y)\triangleleft a+(x\triangleleft a)y+(x\rightharpoonup a)\triangleright y+\theta(\sigma(x, y), a)\\ =x(y\triangleleft a+a\triangleright y)+x\triangleleft(y\rightharpoonup a+a\leftharpoonup y)$,
\item[(CP16)] $a\triangleright (x y+ y x)+\theta(a,\sigma(x,y))+\theta(a,\sigma(y, x))\\ =(a\triangleright x)y+(a\leftharpoonup x)\triangleright y+(a\triangleright y)x+(a\leftharpoonup y)\triangleright x$. \end{enumerate} \end{definition}
\begin{lemma} Let $(A, H)$ be a cocycle cross product system. Define $E=A_{\sigma}\#_{\theta} H$ as the vector space $A\oplus H$ with the multiplication \begin{align} (a+x)(b+ y)=\big(ab+x\rightharpoonup b+a\leftharpoonup y+\sigma(x, y)\big)+ \big(xy+x\triangleleft b+a\triangleright y+\theta(a, b)\big). \end{align} Then $E=A_{\sigma}\#_{\theta} H$ forms an alternative algebra, which is called the \emph{cocycle cross product alternative algebra}. \end{lemma}
\begin{proof} First, we need to check the first equation $$\begin{aligned} &\big((a+ x) (b+ y)\big) (c+ z)-(a+ x)\big( (b+ y) (c+ z)\big)\\
=& -\big( (b+y) (a+x)\big) (c+z)+(b+y)\big ((a+x) (c+z)\big).\\ \end{aligned} $$ By direct computations, the left hand side is equal to \begin{eqnarray*} &&\big((a+ x) (b+ y)\big) (c+ z)-(a+ x)\big( (b+ y) (c+ z)\big)\\ &=&\big(a b+x \rightharpoonup b+a \leftharpoonup y+\sigma(x,y)+ x y+x\triangleleft b+a\triangleright y+\theta(a,b)\big) (c+ z)\\ &&-(a+ x)(b c+y\rightharpoonup c+b \leftharpoonup z+\sigma(y,z)+ y z+y\triangleleft c+b\triangleright z+\theta(b,c))\\ &=&(a b)c+(x\rightharpoonup b)c+(a\leftharpoonup y)c+\sigma(x,y)c+(xy)\rightharpoonup c+(x\triangleleft b)\rightharpoonup c\\ &&+(a\triangleright y)\rightharpoonup c+\theta(a,b)\rightharpoonup c+(a b)\leftharpoonup z+(x\rightharpoonup b)\leftharpoonup z+(a\leftharpoonup y)\leftharpoonup z\\ &&+\sigma(x,y)\leftharpoonup z+\sigma( x y,z)+\sigma(x\triangleleft b,z)+\sigma(a\triangleright y,z)+\sigma(\theta(a,b),z)\\ &&+(xy)z+(x\triangleleft b)z+(a\triangleright y)z+\theta(a,b)z+(x y)\triangleleft c+(x\triangleleft b)\triangleleft c\\ &&+(a\triangleright y)\triangleleft c+\theta(a,b)\triangleleft c+(a b)\triangleright z+(x \rightharpoonup b)\triangleright z+(a \leftharpoonup y)\triangleright z\\ &&+\sigma(x,y)\triangleright z+\theta(a b,c)+\theta(x \rightharpoonup b,c)+\theta(a \leftharpoonup y,c)+\theta(\sigma(x,y),c)\\ &&-a(b c)-a(y\rightharpoonup c)-a(b\leftharpoonup z)-a\sigma(y,z)-x\rightharpoonup( b c)\\ &&-x\rightharpoonup (y\rightharpoonup c)-x\rightharpoonup(b\leftharpoonup z)-x\rightharpoonup\sigma(y,z)-a\leftharpoonup (yz)\\ &&-a\leftharpoonup(y\triangleleft c)-a\leftharpoonup(b\triangleright z)-a\leftharpoonup\theta(b,c)-\sigma(x, y z)-\sigma(x,y\triangleleft c)\\ &&-\sigma(x,b\triangleright z)-\sigma(x,\theta(b,c))-x(yz)-x(y\triangleleft c)-x(b\triangleright z)\\ &&-x\theta(b,c)-x\triangleleft (b c)-x\triangleleft(y\rightharpoonup c)-x\triangleleft(b \leftharpoonup z)-x\triangleleft\sigma(y,z)\\ &&-a\triangleright (y z)-a\triangleright(y\triangleleft c)-a\triangleright(b\triangleright z)-a\triangleright\theta(b,c)-\theta(a,b c)\\ &&-\theta(a,y\rightharpoonup c)-\theta(a,b \leftharpoonup z)-\theta(a,\sigma(y,z)), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&-\big( (b+y) (a+x)\big) (c+z)+(b+y)\big ((a+x) (c+z)\big)\\ &=&-\big(b a+y \rightharpoonup a+b \leftharpoonup x+\sigma(y,x)+ y x+y\triangleleft a+b\triangleright x+\theta(b,a)\big) (c+ z)\\ &&+(b+ y)(a c+x\rightharpoonup c+a \leftharpoonup z+\sigma(x,z)+ x z+x\triangleleft c+a\triangleright z+\theta(a,c))\\ &=&-(b a)c-(y\rightharpoonup a)c-(b\leftharpoonup x)c-\sigma(y,x)c-(yx)\rightharpoonup c-(y\triangleleft a)\rightharpoonup c\\ &&-(b\triangleright x)\rightharpoonup c-\theta(b,a)\rightharpoonup c-(b a)\leftharpoonup z-(y\rightharpoonup a)\leftharpoonup z-(b\leftharpoonup x)\leftharpoonup z\\ &&-\sigma(y,x)\leftharpoonup z-\sigma( y x,z)-\sigma(y\triangleleft a,z)-\sigma(b\triangleright x,z)-\sigma(\theta(b,a),z)\\ &&-(yx)z-(y\triangleleft a)z-(b\triangleright x)z-\theta(b,a)z-(y x)\triangleleft c-(y\triangleleft a)\triangleleft c\\ &&-(b\triangleright x)\triangleleft c-\theta(b,a)\triangleleft c-(b a)\triangleright z-(y \rightharpoonup a)\triangleright z-(b \leftharpoonup x)\triangleright z\\ &&-\sigma(y,x)\triangleright z-\theta(ba,c)-\theta(y \rightharpoonup a,c)-\theta(b \leftharpoonup x,c)-\theta(\sigma(y,x),c)\\ &&+b(a c)+b(x\rightharpoonup c)+b(a\leftharpoonup z)+b\sigma(x,z)+y\rightharpoonup (a c)\\ &&+y\rightharpoonup (x\rightharpoonup c)+y\rightharpoonup(a\leftharpoonup z)+y\rightharpoonup\sigma(x,z)+b\leftharpoonup (xz)\\ 
&&+b\leftharpoonup(x\triangleleft c)+b\leftharpoonup(a\triangleright z)+b\leftharpoonup\theta(a,c)+\sigma(y, x z)+\sigma(y,x\triangleleft c)\\ &&+\sigma(y,a\triangleright z)+\sigma(y,\theta(a,c))+y(xz)+y(x\triangleleft c)+y(a\triangleright z)\\ &&+y\theta(a,c)+y\triangleleft (a c)+y\triangleleft(x\rightharpoonup c)+y\triangleleft(a \leftharpoonup z)+y\triangleleft\sigma(x,z)\\ &&+b\triangleright (x z)+b\triangleright(x\triangleleft c)+b\triangleright(a\triangleright z)+b\triangleright\theta(a,c)+\theta(b,a c)\\ &&+\theta(b,x\rightharpoonup c)+\theta(b,a \leftharpoonup z)+\theta(b,\sigma(x,z)). \end{eqnarray*} Thus the two sides are equal to each other if and only if (CP1)--(CP8) hold.
Next, we check the second equation
$$\begin{aligned} &\big((a+ x) (b+ y)\big) (c+z)-(a+ x) \big((b+ y) (c+ z)\big)\\
=& -\big( (a+x)(c+z)\big) (b+y)+(a+x) \big( (c+z)(b+y)\big).\\ \end{aligned} $$ By direct computations, the left hand side is equal to \begin{eqnarray*} &&\big((a+ x) (b+ y)\big) (c+ z)-(a+ x)\big( (b+ y) (c+ z)\big)\\ &=&\big(a b+x \rightharpoonup b+a \leftharpoonup y+\sigma(x,y)+ x y+x\triangleleft b+a\triangleright y+\theta(a,b)\big) (c+ z)\\ &&-(a+ x)(b c+y\rightharpoonup c+b \leftharpoonup z+\sigma(y,z)+ y z+y\triangleleft c+b\triangleright z+\theta(b,c))\\ &=&(a b)c+(x\rightharpoonup b)c+(a\leftharpoonup y)c+\sigma(x,y)c+(xy)\rightharpoonup c+(x\triangleleft b)\rightharpoonup c\\ &&+(a\triangleright y)\rightharpoonup c+\theta(a,b)\rightharpoonup c+(a b)\leftharpoonup z+(x\rightharpoonup b)\leftharpoonup z+(a\leftharpoonup y)\leftharpoonup z\\ &&+\sigma(x,y)\leftharpoonup z+\sigma( x y,z)+\sigma(x\triangleleft b,z)+\sigma(a\triangleright y,z)+\sigma(\theta(a,b),z)\\ &&+(xy)z+(x\triangleleft b)z+(a\triangleright y)z+\theta(a,b)z+(x y)\triangleleft c+(x\triangleleft b)\triangleleft c\\ &&+(a\triangleright y)\triangleleft c+\theta(a,b)\triangleleft c+(a b)\triangleright z+(x \rightharpoonup b)\triangleright z+(a \leftharpoonup y)\triangleright z\\ &&+\sigma(x,y)\triangleright z+\theta(a b,c)+\theta(x \rightharpoonup b,c)+\theta(a \leftharpoonup y,c)+\theta(\sigma(x,y),c)\\ &&-a(b c)-a(y\rightharpoonup c)-a(b\leftharpoonup z)-a\sigma(y,z)-x\rightharpoonup (b c)\\ &&-x\rightharpoonup (y\rightharpoonup c)-x\rightharpoonup(b\leftharpoonup z)-x\rightharpoonup\sigma(y,z)-a\leftharpoonup (yz)\\ &&-a\leftharpoonup(y\triangleleft c)-a\leftharpoonup(b\triangleright z)-a\leftharpoonup\theta(b,c)-\sigma(x, y z)-\sigma(x,y\triangleleft c)\\ &&-\sigma(x,b\triangleright z)-\sigma(x,\theta(b,c))-x(yz)-x(y\triangleleft c)-x(b\triangleright z)\\ &&-x\theta(b,c)-x\triangleleft (b c)-x\triangleleft(y\rightharpoonup c)-x\triangleleft(b \leftharpoonup z)-x\triangleleft\sigma(y,z)\\ &&-a\triangleright (y z)-a\triangleright(y\triangleleft c)-a\triangleright(b\triangleright z)-a\triangleright\theta(b,c)-\theta(a,b c)\\ &&-\theta(a,y\rightharpoonup c)-\theta(a,b \leftharpoonup z)-\theta(a,\sigma(y,z)), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} && -\big( (a+x)(c+z)\big) (b+y)+(a+x) \big( (c+z)(b+y)\big)\\ &=&-\big(a c+x \rightharpoonup c+a \leftharpoonup z+\sigma(x,z)+ x z+x\triangleleft c+a\triangleright z+\theta(a,c)\big) (b+ y)\\ &&+(a+ x)(c b+z\rightharpoonup b+c \leftharpoonup y+\sigma(z,y)+ z y+z\triangleleft b+c\triangleright y+\theta(c,b))\\ &=&-(a c)b-(x\rightharpoonup c)b-(a\leftharpoonup z)b-\sigma(x,z)b-(xz)\rightharpoonup b-(x\triangleleft c)\rightharpoonup b\\ &&-(a\triangleright z)\rightharpoonup b-\theta(a,c)\rightharpoonup b-(a c)\leftharpoonup y-(x\rightharpoonup c)\leftharpoonup y-(a\leftharpoonup z)\leftharpoonup y\\ &&-\sigma(x,z)\leftharpoonup y-\sigma( x z,y)-\sigma(x\triangleleft c,y)-\sigma(a\triangleright z,y)-\sigma(\theta(a,c),y)\\ &&-(xz)y-(x\triangleleft c)y-(a\triangleright z)y-\theta(a,c)y-(x z)\triangleleft b-(x\triangleleft c)\triangleleft b\\ &&-(a\triangleright z)\triangleleft b-\theta(a,c)\triangleleft b-(a c)\triangleright y-(x \rightharpoonup c)\triangleright y-(a \leftharpoonup z)\triangleright y\\ &&-\sigma(x,z)\triangleright y-\theta(a c,b)-\theta(x \rightharpoonup c,b)-\theta(a \leftharpoonup z,b)-\theta(\sigma(x,z),b)\\ &&+a(c b)+a(z\rightharpoonup b)+a(c\leftharpoonup y)+a\sigma(z,y)+x\rightharpoonup (c b)\\ &&+x\rightharpoonup (z\rightharpoonup b)+x\rightharpoonup(c\leftharpoonup y)+x\rightharpoonup\sigma(z,y)+a\leftharpoonup (zy)\\ 
&&+a\leftharpoonup(z\triangleleft b)+a\leftharpoonup(c\triangleright y)+a\leftharpoonup\theta(c,b)+\sigma(x, z y)+\sigma(x,z\triangleleft b)\\ &&+\sigma(x,c\triangleright y)+\sigma(x,\theta(c,b))+x(zy)+x(z\triangleleft b)+x(c\triangleright y)\\ &&+x\theta(c,b)+x\triangleleft (c b)+x\triangleleft(z\rightharpoonup b)+x\triangleleft(c \leftharpoonup y)+x\triangleleft\sigma(z,y)\\ &&+a\triangleright (z y)+a\triangleright(z\triangleleft b)+a\triangleright(c\triangleright y)+a\triangleright\theta(c,b)+\theta(a,c b)\\ &&+\theta(a,z\rightharpoonup b)+\theta(a,c \leftharpoonup y)+\theta(a,\sigma(z,y)). \end{eqnarray*} Thus the two sides are equal to each other if and only if (CP9)--(CP16) hold. \end{proof}
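When the cocycles vanish, i.e.\ $\sigma=0$ and $\theta=0$, the multiplication above reduces to $(a+x)(b+y)=\big(ab+x\rightharpoonup b+a\leftharpoonup y\big)+\big(xy+x\triangleleft b+a\triangleright y\big)$, which is the bicrossed product multiplication; thus the cocycle cross product generalizes the bicrossed product $A\bowtie H$.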
\begin{definition} A \emph{cycle cross coproduct system} is a pair consisting of a cycle $P$-alternative coalgebra $A$ and a cycle $Q$-alternative coalgebra $H$, where $P: A\to H\otimes H$ is a cycle on $A$ and $Q: H\to A\otimes A$ is a cycle on $H$, such that the following conditions are satisfied: \begin{enumerate} \item[(CCP1)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})-a{}_{<1>}\otimes Q(a{}_{<2>})\\ =-\tau_{12}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(CCP2)] $P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\\ =-\tau_{12}\big(P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\big)$,
\item[(CCP3)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}+Q(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{12}\big(\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}+Q(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\big)$,
\item[(CCP4)] $\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\\ =-\tau_{12}\big(\phi(a{}_{(0)})\otimes a{}_{(1)}+\gamma(a{}_{<1>})\otimes a{}_{<2>}-a{}_{(-1)}\otimes\psi(a{}_{(0)})-a{}_{<1>}\otimes\rho(a{}_{<2>})\big)$,
\item[(CCP5)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}+\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\\ =-\tau_{12}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}+\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})-x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})\big)$,
\item[(CCP6)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}+P(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{12}\big(\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}+P(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\big)$,
\item[(CCP7)] $Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})-x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})\\ =-\tau_{12}\big(Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})-x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})\big)$,
\item[(CCP8)] $\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})-x{}_{\{1\}}\otimes P(x{}_{\{2\}})\\ =-\tau_{12}\big(\gamma(x{}_{1})\otimes x{}_{2}+\phi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(CCP9)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})-a{}_{<1>}\otimes Q(a{}_{<2>})\\ =-\tau_{23}\big(\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})-a{}_{<1>}\otimes Q(a{}_{<2>})\big)$,
\item[(CCP10)] $P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\\ =-\tau_{23}\big(\phi(a{}_{(0)})\otimes a{}_{(1)}+\gamma(a{}_{<1>})\otimes a{}_{<2>}-a{}_{(-1)}\otimes\psi(a{}_{(0)})-a{}_{<1>}\otimes\rho(a{}_{<2>})\big)$,
\item[(CCP11)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}+Q(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{23}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(CCP12)] $\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\\ =-\tau_{23}\big(\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\big)$,
\item[(CCP13)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}+\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\\ =-\tau_{23}\big(\gamma(x{}_{[0]})\otimes x{}_{[1]}+\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\big)$,
\item[(CCP14)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}+P(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{23}\big(\phi(x{}_{[-1]})\otimes x{}_{[0]}+\gamma(x{}_{1})\otimes x{}_{2}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(CCP15)] $Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})-x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})\\ =-\tau_{23}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}+\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})-x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})\big)$,
\item[(CCP16)] $\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})-x{}_{\{1\}}\otimes P(x{}_{\{2\}})\\ =-\tau_{23}\big(\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})-x{}_{\{1\}}\otimes P(x{}_{\{2\}})\big)$. \end{enumerate} \end{definition}
\begin{lemma}\label{lem2} Let $(A, H)$ be a cycle cross coproduct system. If we define $E=A^{P}\# {}^{Q} H$ as the vector space $A\oplus H$ with the comultiplication $$\Delta_{E}(a)=(\Delta_{A}+\phi+\psi+P)(a),\quad \Delta_{E}(x)=(\Delta_{H}+\rho+\gamma+Q)(x), $$ that is $$\Delta_{E}(a)= a{}_{1} \otimes a{}_{2}+ a{}_{(-1)} \otimes a{}_{(0)}+a{}_{(0)}\otimes a{}_{(1)}+a{}_{<1>}\otimes a{}_{<2>},$$ $$\Delta_{E}(x)= x{}_{1} \otimes x{}_{2}+ x{}_{[-1]} \otimes x{}_{[0]}+x{}_{[0]} \otimes x{}_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}},$$ then $A^{P}\# {}^{Q} H$ forms an alternative coalgebra, which we call the \emph{cycle cross coproduct alternative coalgebra}. \end{lemma}
\begin{proof} First, we have to check $$ \begin{aligned}
&(\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\\
=&-\tau_{12}\big((\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\big).
\end{aligned}
$$ By direct computations, the left hand side is equal to \begin{eqnarray*} &&(\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\\ &=&(\Delta_E\otimes\mathrm{id})(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}\\ &&+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})-(\mathrm{id}\otimes\Delta_E)(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}\\ &&+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})\\ &=&\Delta_{A}\left(a_{1}\right) \otimes a_{2}+\phi\left(a_{1}\right) \otimes a_{2}+\psi\left(a_{1}\right) \otimes a_{2} +P(a{}_{1})\otimes a{}_{2}\\ &&+\Delta_{H}\left(a_{(-1)}\right) \otimes a_{(0)}+\rho\left(a_{(-1)}\right) \otimes a_{(0)}+\gamma\left(a_{(-1)}\right) \otimes a_{(0)} +Q(a{}_{(-1)})\otimes a{}_{(0)}\\ &&+\Delta_{A}\left(a_{(0)}\right) \otimes a_{(1)}+\phi\left(a_{(0)}\right) \otimes a_{(1)}+\psi\left(a_{(0)}\right) \otimes a_{(1)} +P(a{}_{(0)})\otimes a{}_{(1)}\\ &&+\Delta_{H}(a{}_{<1>})\otimes a{}_{<2>}+\rho(a{}_{<1>})\otimes a{}_{<2>}+\gamma(a{}_{<1>})\otimes a{}_{<2>}+Q(a{}_{<1>})\otimes a{}_{<2>}\\ &&+\Delta_{H}\left(x_{1}\right) \otimes x_{2}+\rho\left(x_{1}\right) \otimes x_{2}+\gamma\left(x_{1}\right) \otimes x_{2} +Q(x{}_{1})\otimes x{}_{2}\\ &&+\Delta_{A}\left(x_{[-1])}\right) \otimes x_{[0]}+\phi\left(x_{[-1]}\right) \otimes x_{[0]}+\psi\left(x_{[-1]}\right) \otimes x_{[0]} +P(x{}_{[-1]})\otimes x{}_{[0]}\\ &&+\Delta_{H}\left(x_{[0]}\right) \otimes x_{[1]}+\rho\left(x_{[0]}\right) \otimes x_{[1]}+\gamma\left(x_{[0]}\right) \otimes x_{[1]} +Q(x{}_{[0]})\otimes x{}_{[1]}\\ &&+\Delta_{A}(x{}_{\{1\}})\otimes x{}_{\{2\}}+\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}+\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}+P(x{}_{\{1\}})\otimes x{}_{\{2\}}\\ &&-a_{1} \otimes \Delta_{A}\left(a_{2}\right)-a_{1} \otimes \phi\left(a_{2}\right)-a_{1} \otimes \psi\left(a_{2}\right) -a{}_{1}\otimes P(a{}_{2})\\ &&-a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)-a_{(-1)} \otimes \phi\left(a_{(0)}\right)-a_{(-1)} \otimes \psi\left(a_{(0)}\right) -a{}_{(-1)}\otimes P(a{}_{(0)})\\ &&-a_{(0)} \otimes \Delta_{H}\left(a_{(1)}\right)-a_{(0)} \otimes \rho\left(a_{(1)}\right)-a_{(0)} \otimes \gamma\left(a_{(1)}\right) -a{}_{(0)}\otimes Q(a{}_{(1)})\\ &&-a{}_{<1>}\otimes \Delta_{H}(a{}_{<2>})-a{}_{<1>}\otimes \rho(a{}_{<2>})-a{}_{<1>}\otimes\gamma(a{}_{<2>})-a{}_{<1>}\otimes Q(a{}_{<2>})\\ &&-x_{1} \otimes \Delta_{H}\left(x_{2}\right)-x_{1} \otimes \rho\left(x_{2}\right)-x_{1} \otimes \gamma\left(x_{2}\right) -x{}_{1}\otimes Q(x{}_{2})\\ &&-x_{[-1]} \otimes \Delta_{H}\left(x_{[0]}\right)-x_{[-1]} \otimes \rho\left(x_{[0]}\right)-x_{[-1]} \otimes \gamma\left(x_{[0]}\right) -x{}_{[-1]}\otimes Q(x{}_{[0]})\\ &&-x_{[0]} \otimes \Delta_{A}\left(x_{[1]}\right)-x_{[0]} \otimes \phi\left(x_{[1]}\right)-x_{[0]} \otimes \psi\left(x_{[1]}\right) -x{}_{[0]}\otimes P(x{}_{[1]})\\ &&-x{}_{\{1\}}\otimes\Delta_{A}(x{}_{\{2\}})-x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})-x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})-x{}_{\{1\}}\otimes P(x{}_{\{2\}}), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} && -\tau_{12}\big((\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\big)\\ &=&-\tau_{12}(\Delta_E\otimes\mathrm{id})(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)} +a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}\\ &&+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes 
x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}}) +\tau_{12}(\mathrm{id}\otimes\Delta_E)(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}\\ &&+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})\\ &=&-\tau_{12}\big(\Delta_{A}\left(a_{1}\right) \otimes a_{2}\big)-\tau_{12}\big(\phi\left(a_{1}\right) \otimes a_{2}\big)-\tau_{12}\big(\psi\left(a_{1}\right) \otimes a_{2}\big)-\tau_{12}\big(P(a{}_{1})\otimes a{}_{2}\big)\\ &&-\tau_{12}\big(\Delta_{H}\left(a_{(-1)}\right) \otimes a_{(0)}\big)-\tau_{12}\big(\rho\left(a_{(-1)}\right) \otimes a_{(0)}\big) -\tau_{12}\big(\gamma\left(a_{(-1)}\right) \otimes a_{(0)}\big)\\ &&-\tau_{12}\big(Q(a{}_{(-1)})\otimes a{}_{(0)}\big)-\tau_{12}\big(\Delta_{A}\left(a_{(0)}\right) \otimes a_{(1)}\big)-\tau_{12}\big(\phi\left(a_{(0)}\right) \otimes a_{(1)}\big)\\ &&-\tau_{12}\big(\psi\left(a_{(0)}\right) \otimes a_{(1)}\big)-\tau_{12}\big(P(a{}_{(0)})\otimes a{}_{(1)}\big)-\tau_{12}\big(\Delta_{H}(a{}_{<1>})\otimes a{}_{<2>}\big)\\ &&-\tau_{12}\big(\rho(a{}_{<1>})\otimes a{}_{<2>}\big)-\tau_{12}\big(\gamma(a{}_{<1>})\otimes a{}_{<2>}\big)-\tau_{12}\big(Q(a{}_{<1>})\otimes a{}_{<2>}\big)\\ &&-\tau_{12}\big(\Delta_{H}\left(x_{1}\right) \otimes x_{2}\big)-\tau_{12}\big(\rho\left(x_{1}\right) \otimes x_{2}\big) -\tau_{12}\big(\gamma\left(x_{1}\right) \otimes x_{2}\big)-\tau_{12}\big(Q(x{}_{1})\otimes x{}_{2}\big)\\ &&-\tau_{12}\big(\Delta_{A}\left(x_{[-1])}\right) \otimes x_{[0]}\big)-\tau_{12}\big(\phi\left(x_{[-1]}\right) \otimes x_{[0]}\big) -\tau_{12}\big(\psi\left(x_{[-1]}\right) \otimes x_{[0]}\big)\\ &&-\tau_{12}\big(P(x{}_{[-1]})\otimes x{}_{[0]}\big)-\tau_{12}\big(\Delta_{H}\left(x_{[0]}\right) \otimes x_{[1]}\big)-\tau_{12}\big(\rho\left(x_{[0]}\right) \otimes x_{[1]}\big)\\ &&-\tau_{12}\big(\gamma\left(x_{[0]}\right) \otimes x_{[1]}\big)-\tau_{12}\big(Q(x{}_{[0]})\otimes x{}_{[1]}\big)-\tau_{12}\big(\Delta_{A}(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)\\ &&-\tau_{12}\big(\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)-\tau_{12}\big(\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)-\tau_{12}\big(P(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)\\ &&+\tau_{12}\big(a_{1} \otimes \Delta_{A}\left(a_{2}\right)\big)+\tau_{12}\big(a_{1} \otimes \phi\left(a_{2}\right)\big) +\tau_{12}\big(a_{1} \otimes \psi\left(a_{2}\right)\big)+\tau_{12}\big(a{}_{1}\otimes P(a{}_{2})\big)\\ &&+\tau_{12}\big(a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)\big)+\tau_{12}\big(a_{(-1)} \otimes \phi\left(a_{(0)}\right)\big) +\tau_{12}\big(a_{(-1)} \otimes \psi\left(a_{(0)}\right)\big)\\ &&+\tau_{12}\big(a{}_{(-1)}\otimes P(a{}_{(0)})\big)+\tau_{12}\big(a_{(0)} \otimes \Delta_{H}\left(a_{(1)}\right)\big)+\tau_{12}\big(a_{(0)} \otimes \rho\left(a_{(1)}\right)\big)\\ &&+\tau_{12}\big(a_{(0)} \otimes \gamma\left(a_{(1)}\right)\big)+\tau_{12}\big(a{}_{(0)}\otimes Q(a{}_{(1)})\big)+\tau_{12}\big(a{}_{<1>}\otimes \Delta_{H}(a{}_{<2>})\big)\\ &&+\tau_{12}\big(a{}_{<1>}\otimes \rho(a{}_{<2>})\big)+\tau_{12}\big(a{}_{<1>}\otimes\gamma(a{}_{<2>})\big)+\tau_{12}\big(a{}_{<1>}\otimes Q(a{}_{<2>})\big)\\ &&+\tau_{12}\big(x_{1} \otimes \Delta_{H}\left(x_{2}\right)\big)+\tau_{12}\big(x_{1} \otimes \rho\left(x_{2}\right)\big) +\tau_{12}\big(x_{1} \otimes \gamma\left(x_{2}\right)\big)+\tau_{12}\big(x{}_{1}\otimes Q(x{}_{2})\big)\\ &&+\tau_{12}\big(x_{[-1]} \otimes \Delta_{H}\left(x_{[0]}\right)\big)+\tau_{12}\big(x_{[-1]} \otimes \rho\left(x_{[0]}\right)\big) +\tau_{12}\big(x_{[-1]} \otimes \gamma\left(x_{[0]}\right)\big)\\ 
&&+\tau_{12}\big(x{}_{[-1]}\otimes Q(x{}_{[0]})\big)+\tau_{12}\big(x_{[0]} \otimes \Delta_{A}\left(x_{[1]}\right)\big)+\tau_{12}\big(x_{[0]} \otimes \phi\left(x_{[1]}\right)\big)\\ &&+\tau_{12}\big(x_{[0]} \otimes \psi\left(x_{[1]}\right)\big)+\tau_{12}\big(x{}_{[0]}\otimes P(x{}_{[1]})\big)+\tau_{12}\big(x{}_{\{1\}}\otimes\Delta_{A}(x{}_{\{2\}})\big)\\ &&+\tau_{12}\big(x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})\big) +\tau_{12}\big(x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})\big)+\tau_{12}\big(x{}_{\{1\}}\otimes P(x{}_{\{2\}})\big). \end{eqnarray*} Thus the two sides are equal to each other if and only if (CCP1)--(CCP8) hold.
Next, we need to check
$$
\begin{aligned}
&(\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\\
=&-\tau_{23}\big((\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\big).
\end{aligned}
$$ By direct computations, the left hand side is equal to \begin{eqnarray*} &&(\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\\ &=&(\Delta_E\otimes\mathrm{id})(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}\\ &&+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})-(\mathrm{id}\otimes\Delta_E)(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}\\ &&+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})\\ &=&\Delta_{A}\left(a_{1}\right) \otimes a_{2}+\phi\left(a_{1}\right) \otimes a_{2}+\psi\left(a_{1}\right) \otimes a_{2} +P(a{}_{1})\otimes a{}_{2}\\ &&+\Delta_{H}\left(a_{(-1)}\right) \otimes a_{(0)}+\rho\left(a_{(-1)}\right) \otimes a_{(0)}+\gamma\left(a_{(-1)}\right) \otimes a_{(0)} +Q(a{}_{(-1)})\otimes a{}_{(0)}\\ &&+\Delta_{A}\left(a_{(0)}\right) \otimes a_{(1)}+\phi\left(a_{(0)}\right) \otimes a_{(1)}+\psi\left(a_{(0)}\right) \otimes a_{(1)} +P(a{}_{(0)})\otimes a{}_{(1)}\\ &&+\Delta_{H}(a{}_{<1>})\otimes a{}_{<2>}+\rho(a{}_{<1>})\otimes a{}_{<2>}+\gamma(a{}_{<1>})\otimes a{}_{<2>}+Q(a{}_{<1>})\otimes a{}_{<2>}\\ &&+\Delta_{H}\left(x_{1}\right) \otimes x_{2}+\rho\left(x_{1}\right) \otimes x_{2}+\gamma\left(x_{1}\right) \otimes x_{2} +Q(x{}_{1})\otimes x{}_{2}\\ &&+\Delta_{A}\left(x_{[-1])}\right) \otimes x_{[0]}+\phi\left(x_{[-1]}\right) \otimes x_{[0]}+\psi\left(x_{[-1]}\right) \otimes x_{[0]} +P(x{}_{[-1]})\otimes x{}_{[0]}\\ &&+\Delta_{H}\left(x_{[0]}\right) \otimes x_{[1]}+\rho\left(x_{[0]}\right) \otimes x_{[1]}+\gamma\left(x_{[0]}\right) \otimes x_{[1]} +Q(x{}_{[0]})\otimes x{}_{[1]}\\ &&+\Delta_{A}(x{}_{\{1\}})\otimes x{}_{\{2\}}+\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}+\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}+P(x{}_{\{1\}})\otimes x{}_{\{2\}}\\ &&-a_{1} \otimes \Delta_{A}\left(a_{2}\right)-a_{1} \otimes \phi\left(a_{2}\right)-a_{1} \otimes \psi\left(a_{2}\right) -a{}_{1}\otimes P(a{}_{2})\\ &&-a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)-a_{(-1)} \otimes \phi\left(a_{(0)}\right)-a_{(-1)} \otimes \psi\left(a_{(0)}\right) -a{}_{(-1)}\otimes P(a{}_{(0)})\\ &&-a_{(0)} \otimes \Delta_{H}\left(a_{(1)}\right)-a_{(0)} \otimes \rho\left(a_{(1)}\right)-a_{(0)} \otimes \gamma\left(a_{(1)}\right) -a{}_{(0)}\otimes Q(a{}_{(1)})\\ &&-a{}_{<1>}\otimes \Delta_{H}(a{}_{<2>})-a{}_{<1>}\otimes \rho(a{}_{<2>})-a{}_{<1>}\otimes\gamma(a{}_{<2>})-a{}_{<1>}\otimes Q(a{}_{<2>})\\ &&-x_{1} \otimes \Delta_{H}\left(x_{2}\right)-x_{1} \otimes \rho\left(x_{2}\right)-x_{1} \otimes \gamma\left(x_{2}\right) -x{}_{1}\otimes Q(x{}_{2})\\ &&-x_{[-1]} \otimes \Delta_{H}\left(x_{[0]}\right)-x_{[-1]} \otimes \rho\left(x_{[0]}\right)-x_{[-1]} \otimes \gamma\left(x_{[0]}\right) -x{}_{[-1]}\otimes Q(x{}_{[0]})\\ &&-x_{[0]} \otimes \Delta_{A}\left(x_{[1]}\right)-x_{[0]} \otimes \phi\left(x_{[1]}\right)-x_{[0]} \otimes \psi\left(x_{[1]}\right) -x{}_{[0]}\otimes P(x{}_{[1]})\\ &&-x{}_{\{1\}}\otimes\Delta_{A}(x{}_{\{2\}})-x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})-x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})-x{}_{\{1\}}\otimes P(x{}_{\{2\}}), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} && -\tau_{23}\big((\Delta_E\otimes\mathrm{id})\Delta_E(a+x)-(\mathrm{id}\otimes\Delta_E)\Delta_E(a+x)\big)\\ &=&-\tau_{23}(\Delta_E\otimes\mathrm{id})(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)} +a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}\\ &&+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes 
x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}}) +\tau_{23}(\mathrm{id}\otimes\Delta_E)(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}\\ &&+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}+x_{[-1]} \otimes x_{[0]}+x_{[0]} \otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})\\ &=&-\tau_{23}\big(\Delta_{A}\left(a_{1}\right) \otimes a_{2}\big)-\tau_{23}\big(\phi\left(a_{1}\right) \otimes a_{2}\big) -\tau_{23}\big(\psi\left(a_{1}\right) \otimes a_{2}\big)-\tau_{23}\big(P(a{}_{1})\otimes a{}_{2}\big)\\ &&-\tau_{23}\big(\Delta_{H}\left(a_{(-1)}\right) \otimes a_{(0)}\big)-\tau_{23}\big(\rho\left(a_{(-1)}\right) \otimes a_{(0)}\big) -\tau_{23}\big(\gamma\left(a_{(-1)}\right) \otimes a_{(0)}\big)\\ &&-\tau_{23}\big(Q(a{}_{(-1)})\otimes a{}_{(0)}\big)-\tau_{23}\big(\Delta_{A}\left(a_{(0)}\right) \otimes a_{(1)}\big)-\tau_{23}\big(\phi\left(a_{(0)}\right) \otimes a_{(1)}\big)\\ &&-\tau_{23}\big(\psi\left(a_{(0)}\right) \otimes a_{(1)}\big)-\tau_{23}\big(P(a{}_{(0)})\otimes a{}_{(1)}\big)-\tau_{23}\big(\Delta_{H}(a{}_{<1>})\otimes a{}_{<2>}\big)\\ &&-\tau_{23}\big(\rho(a{}_{<1>})\otimes a{}_{<2>}\big)-\tau_{23}\big(\gamma(a{}_{<1>})\otimes a{}_{<2>}\big)-\tau_{23}\big(Q(a{}_{<1>})\otimes a{}_{<2>}\big)\\ &&-\tau_{23}\big(\Delta_{H}\left(x_{1}\right) \otimes x_{2}\big)-\tau_{23}\big(\rho\left(x_{1}\right) \otimes x_{2}\big) -\tau_{23}\big(\gamma\left(x_{1}\right) \otimes x_{2}\big)-\tau_{23}\big(Q(x{}_{1})\otimes x{}_{2}\big)\\ &&-\tau_{23}\big(\Delta_{A}\left(x_{[-1])}\right) \otimes x_{[0]}\big)-\tau_{23}\big(\phi\left(x_{[-1]}\right) \otimes x_{[0]}\big) -\tau_{23}\big(\psi\left(x_{[-1]}\right) \otimes x_{[0]}\big)\\ &&-\tau_{23}\big(P(x{}_{[-1]})\otimes x{}_{[0]}\big)-\tau_{23}\big(\Delta_{H}\left(x_{[0]}\right) \otimes x_{[1]}\big)-\tau_{23}\big(\rho\left(x_{[0]}\right) \otimes x_{[1]}\big)\\ &&-\tau_{23}\big(\gamma\left(x_{[0]}\right) \otimes x_{[1]}\big)-\tau_{23}\big(Q(x{}_{[0]})\otimes x{}_{[1]}\big)-\tau_{23}\big(\Delta_{A}(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)\\ &&-\tau_{23}\big(\phi(x{}_{\{1\}})\otimes x{}_{\{2\}}\big) -\tau_{23}\big(\psi(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)-\tau_{23}\big(P(x{}_{\{1\}})\otimes x{}_{\{2\}}\big)\\ &&+\tau_{23}\big(a_{1} \otimes \Delta_{A}\left(a_{2}\right)\big)+\tau_{23}\big(a_{1} \otimes \phi\left(a_{2}\right)\big) +\tau_{23}\big(a_{1} \otimes \psi\left(a_{2}\right)\big)+\tau_{23}\big(a{}_{1}\otimes P(a{}_{2})\big)\\ &&+\tau_{23}\big(a_{(-1)} \otimes \Delta_{A}\left(a_{(0)}\right)\big)+\tau_{23}\big(a_{(-1)} \otimes \phi\left(a_{(0)}\right)\big) +\tau_{23}\big(a_{(-1)} \otimes \psi\left(a_{(0)}\right)\big)\\ &&+\tau_{23}\big(a{}_{(-1)}\otimes P(a{}_{(0)})\big)+\tau_{23}\big(a_{(0)} \otimes \Delta_{H}\left(a_{(1)}\right)\big)+\tau_{23}\big(a_{(0)} \otimes \rho\left(a_{(1)}\right)\big)\\ &&+\tau_{23}\big(a_{(0)} \otimes \gamma\left(a_{(1)}\right)\big)+\tau_{23}\big(a{}_{(0)}\otimes Q(a{}_{(1)})\big)+\tau_{23}\big(a{}_{<1>}\otimes \Delta_{H}(a{}_{<2>})\big)\\ &&+\tau_{23}\big(a{}_{<1>}\otimes \rho(a{}_{<2>})\big) +\tau_{23}\big(a{}_{<1>}\otimes\gamma(a{}_{<2>})\big)+\tau_{23}\big(a{}_{<1>}\otimes Q(a{}_{<2>})\big)\\ &&+\tau_{23}\big(x_{1} \otimes \Delta_{H}\left(x_{2}\right)\big)+\tau_{23}\big(x_{1} \otimes \rho\left(x_{2}\right)\big) +\tau_{23}\big(x_{1} \otimes \gamma\left(x_{2}\right)\big)+\tau_{23}\big(x{}_{1}\otimes Q(x{}_{2})\big)\\ &&+\tau_{23}\big(x_{[-1]} \otimes \Delta_{H}\left(x_{[0]}\right)\big)+\tau_{23}\big(x_{[-1]} \otimes \rho\left(x_{[0]}\right)\big) +\tau_{23}\big(x_{[-1]} \otimes \gamma\left(x_{[0]}\right)\big)\\ 
&&+\tau_{23}\big(x{}_{[-1]}\otimes Q(x{}_{[0]})\big)+\tau_{23}\big(x_{[0]} \otimes \Delta_{A}\left(x_{[1]}\right)\big)+\tau_{23}\big(x_{[0]} \otimes \phi\left(x_{[1]}\right)\big)\\ &&+\tau_{23}\big(x_{[0]} \otimes \psi\left(x_{[1]}\right)\big)+\tau_{23}\big(x{}_{[0]}\otimes P(x{}_{[1]})\big)+\tau_{23}\big(x{}_{\{1\}}\otimes\Delta_{A}(x{}_{\{2\}})\big)\\ &&+\tau_{23}\big(x{}_{\{1\}}\otimes\phi(x{}_{\{2\}})\big) +\tau_{23}\big(x{}_{\{1\}}\otimes\psi(x{}_{\{2\}})\big)+\tau_{23}\big(x{}_{\{1\}}\otimes P(x{}_{\{2\}})\big). \end{eqnarray*} Thus the two sides are equal to each other if and only if (CCP9)--(CCP16) hold. \end{proof}
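When the cycles vanish, i.e.\ $P=0$ and $Q=0$, the comultiplication above reduces to $\Delta_{E}(a)=(\Delta_{A}+\phi+\psi)(a)$ and $\Delta_{E}(x)=(\Delta_{H}+\rho+\gamma)(x)$, so the cycle cross coproduct generalizes the bicrossed coproduct $A{\,\blacktriangleright\!\!\blacktriangleleft\, } H$ of Lemma \ref{lem1}.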
\begin{definition}\label{cocycledmp} Let $A$ and $H$ be both alternative algebras and alternative coalgebras. If the following conditions hold: \begin{enumerate} \item[(CDM1)] $\phi(ab)+\gamma(\theta(a, b))\\
=-a{}_{(1)}\otimes b a{}_{(0)}+b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes b{}_{(0)} a+(a{}_{(-1)}\triangleleft b)\otimes a{}_{(0)}\\ +(a{}_{(1)}\triangleleft b)\otimes a{}_{(0)}-(a\triangleright b{}_{(-1)})\otimes b{}_{(0)}+b{}_{<1>}\otimes(a\leftharpoonup b{}_{<2>})+b{}_{<1>}\otimes(b{}_{<2>}\rightharpoonup a)\\ -a{}_{<2>}\otimes(b\leftharpoonup a{}_{<1>})+\theta(a{}_{1}, b)\otimes a{}_{2}+\theta(a{}_{2}, b)\otimes a{}_{1}-\theta(a, b{}_{1})\otimes b{}_{2}$, \item[(CDM2)] $\psi(a b)+\rho(\theta(a, b))\\ =a{}_{(0)} b\otimes a{}_{(1)}+a{}_{(0)} b\otimes a{}_{(-1)}-a b{}_{(0)}\otimes b{}_{(1)}-a{}_{(0)}\otimes (b\triangleright a{}_{(-1)})+b{}_{(0)}\otimes(a\triangleright b{}_{(1)})\\ +b{}_{(0)}\otimes(b{}_{(1)}\triangleleft a)+(a{}_{<1>}\rightharpoonup b)\otimes a{}_{<2>}+(a{}_{<2>}\rightharpoonup b)\otimes a{}_{<1>}\\ -(a\leftharpoonup b{}_{<1>})\otimes b{}_{<2>}+b{}_{1}\otimes\theta(a, b{}_{2})+b{}_{1}\otimes\theta(b{}_{2}, a)-a{}_{2}\otimes\theta(b, a{}_{1})$, \item[(CDM3)] $\rho(x y)+\psi(\sigma(x, y))\\ =(x{}_{[1]}\leftharpoonup y)\otimes x{}_{[0]}+(x{}_{[-1]}\leftharpoonup y)\otimes x{}_{[0]}-x_{[1]} \otimes y x_{[0]}-\left(x \rightharpoonup y_{[-1]}\right) \otimes y_{[0]} \\ +y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes y{}_{[0]} x+\sigma(x{}_{1},y)\otimes x{}_{2}+\sigma(x{}_{2},y)\otimes x{}_{1}-\sigma(x,y{}_{1})\otimes y{}_{2}\\ +y{}_{\{1\}}\otimes (y{}_{\{2\}}\triangleright x)+y{}_{\{1\}}\otimes (x\triangleleft y{}_{\{2\}})-x{}_{\{2\}}\otimes(y\triangleleft x{}_{\{1\}})$, \item[(CDM4)] $\gamma(x y)+\phi(\sigma(x, y))\\ =x{}_{[0]} y\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[-1]}-x_{[0]}\otimes (y\rightharpoonup x{}_{[-1]})-xy_{[0]}\otimes y_{[1]}+y{}_{[0]}\otimes (x\rightharpoonup y{}_{[1]})\\ +y{}_{[0]}\otimes(y{}_{[1]}\leftharpoonup x)+(x{}_{\{1\}}\triangleright y)\otimes x{}_{\{2\}}+(x{}_{\{2\}}\triangleright y)\otimes x{}_{\{1\}}-x{}_{2}\otimes\sigma(y,x{}_{1})\\ +y{}_{1}\otimes\sigma(x,y{}_{2})+y{}_{1}\otimes\sigma(y{}_{2},x)-(x\triangleleft y{}_{\{1\}})\otimes y{}_{\{2\}}$, \item[(CDM5)] $\Delta_{A}(x \rightharpoonup b)+Q(x\triangleleft b)$ \\ $=(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[1]}+(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[-1]}-x{}_{[1]}\otimes(b\leftharpoonup x{}_{[0]})-\left(x \rightharpoonup b_{1}\right) \otimes b_{2}\\ +b{}_{1}\otimes(x\rightharpoonup b{}_{2})+b{}_{1}\otimes(b{}_{2}\leftharpoonup x)+b{}_{(0)}\otimes\sigma(x,b{}_{(1)})+b{}_{(0)}\otimes\sigma(b{}_{(1)},x)\\ -\sigma(x,b{}_{(-1)})\otimes b{}_{(0)}+x{}_{\{1\}} b\otimes x{}_{\{2\}}+x{}_{\{2\}} b\otimes x{}_{\{1\}}-x{}_{\{2\}}\otimes b x{}_{\{1\}}$,
\item[(CDM6)] $\Delta_{A}(a\leftharpoonup y)+Q(a\triangleright y)$\\ $=(a{}_{1}\leftharpoonup y)\otimes a{}_{2}+\left(a_{2} \leftharpoonup y\right)\otimes a{}_{1}-a{}_{2}\otimes(y\rightharpoonup a{}_{1})-\left(a\leftharpoonup y_{[0]}\right) \otimes y_{[1]}\\ +y{}_{[-1]}\otimes(a\leftharpoonup y{}_{[0]})+y{}_{[-1]}\otimes(y{}_{[0]}\rightharpoonup a)+\sigma(a{}_{(-1)},y)\otimes a{}_{(0)}+\sigma(a{}_{(1)},y)\otimes a{}_{(0)}\\ -a{}_{(0)}\otimes\sigma(y,a{}_{(-1)})+y{}_{\{1\}}\otimes a y{}_{\{2\}}+y{}_{\{1\}}\otimes y{}_{\{2\}} a-a y{}_{\{1\}}\otimes y{}_{\{2\}}$,
\item[(CDM7)] $\Delta_{H}(a \triangleright y)+P(a\leftharpoonup y)$\\ $=(a{}_{(0)}\triangleright y)\otimes a{}_{(1)}+(a{}_{(0)}\triangleright y)\otimes a{}_{(-1)}-a{}_{(1)}\otimes(y\triangleleft a{}_{(0)})+y{}_{1}\otimes(a\triangleright y{}_{2})\\ +y{}_{1}\otimes(y{}_{2}\triangleleft a)-\left(a \triangleright y_{1}\right) \otimes y_{2}+y{}_{[0]}\otimes\theta(a,y{}_{[1]})+y{}_{[0]}\otimes\theta(y{}_{[1]},a)\\ -\theta(a,y{}_{[-1]})\otimes y{}_{[0]}+a{}_{<1>} y\otimes a{}_{<2>}+a{}_{<2>} y\otimes a{}_{<1>}-a{}_{<2>}\otimes y a{}_{<1>}$,
\item[(CDM8)] $\Delta_{H}(x \triangleleft b)+P(x\rightharpoonup b)$\\ $=(x{}_{1}\triangleleft b)\otimes x{}_{2}+(x{}_{2}\triangleleft b)\otimes x{}_{1}-x{}_{2}\otimes(b\triangleright x{}_{1})+b{}_{(-1)}\otimes(x\triangleleft b{}_{(0)})\\ +b{}_{(-1)}\otimes(b{}_{(0)}\triangleright x)-\left(x\triangleleft b_{(0)}\right) \otimes b_{(1)}+\theta(x{}_{[-1]},b)\otimes x{}_{[0]}+\theta(x{}_{[1]},b)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes\theta(b,x{}_{[-1]})+b{}_{<1>}\otimes x b{}_{<2>}+b{}_{<1>}\otimes b{}_{<2>} x-x b{}_{<1>}\otimes b{}_{<2>}$,
\item[(CDM9)]$\Delta_{H}(\theta(a,b))+P(a, b)$\\ $=\theta(a{}_{(0)},b)\otimes a{}_{(1)}+\theta(a{}_{(0)},b)\otimes a{}_{(-1)}-\theta(a,b{}_{(0)})\otimes b{}_{(1)}+b{}_{(-1)}\otimes\theta(a,b{}_{(0)})\\ +b{}_{(-1)}\otimes\theta(b{}_{(0)},a)-a{}_{(1)}\otimes\theta(b,a{}_{(0)})+(a{}_{<1>}\triangleleft b)\otimes a{}_{<2>}+(a{}_{<2>}\triangleleft b)\otimes a{}_{<1>}\\ -(a\triangleright b{}_{<1>})\otimes b{}_{<2>}+b{}_{<1>}\otimes(a\triangleright b{}_{<2>})+b{}_{<1>}\otimes(b{}_{<2>}\triangleleft a)-a{}_{<2>}\otimes(b\triangleright a{}_{<1>})$,
\item[(CDM10)]$\Delta_{A}(\sigma(x,y))+Q(x, y)$\\ $=\sigma(x{}_{[0]},y)\otimes x{}_{[1]}+\sigma(x{}_{[0]},y)\otimes x{}_{[-1]}-\sigma(x,y{}_{[0]})\otimes y{}_{[1]}+y{}_{[-1]}\otimes\sigma(x,y{}_{[0]})\\ +y{}_{[-1]}\otimes\sigma(y{}_{[0]},x)-x{}_{[1]}\otimes\sigma(y,x{}_{[0]})+(x{}_{\{1\}}\leftharpoonup y)\otimes x{}_{\{2\}}+(x{}_{\{2\}}\leftharpoonup y)\otimes x{}_{\{1\}}\\ -x{}_{\{2\}}\otimes(y\rightharpoonup x{}_{\{1\}})+y{}_{\{1\}}\otimes(x\rightharpoonup y{}_{\{2\}})+y{}_{\{1\}}\otimes(y{}_{\{2\}}\leftharpoonup x)-(x\rightharpoonup y{}_{\{1\}})\otimes y{}_{\{2\}}$,
\item[(CDM11)]
$\phi(x \rightharpoonup b)+\gamma(x\triangleleft b)\\
=(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}-x{}_{2}\otimes(b\leftharpoonup x{}_{1})-x{}_{[0]}\otimes b x{}_{[-1]}$\\ $+b{}_{(-1)}\otimes(x\rightharpoonup b{}_{(0)})-\left(x\triangleleft b_{1}\right) \otimes b_{2}+b{}_{(-1)}\otimes(b{}_{(0)}\leftharpoonup x)-x b_{(-1)} \otimes b_{(0)}$\\ $+\theta(x{}_{\{1\}},b)\otimes x{}_{\{2\}}+\theta(x{}_{\{2\}},b)\otimes x{}_{\{1\}}+b{}_{<1>}\otimes\sigma(x,b{}_{<2>})+b{}_{<1>}\otimes\sigma(b{}_{<2>},x)$,
\item[(CDM12)] $\psi(a\leftharpoonup y)+\rho(a \triangleright y)\\ =(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(1)}+(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(-1)}-a{}_{2}\otimes(y\triangleleft a{}_{1})-a{}_{(0)}\otimes y a{}_{(-1)}$\\ $+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})-\left(a\leftharpoonup y_{1}\right) \otimes y_{2}+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)-a y_{[-1]} \otimes y_{[0]}$\\ $+\sigma(a{}_{<1>},y)\otimes a{}_{<2>}+\sigma(a{}_{<2>},y)\otimes a{}_{<1>}+y{}_{\{1\}}\otimes\theta(a,y{}_{\{2\}})+y{}_{\{1\}}\otimes\theta(y{}_{\{2\}},a)$,
\item[(CDM13)] $\psi(x \rightharpoonup b)+\rho(x\triangleleft b)\\ =(x{}_{1}\rightharpoonup b)\otimes x{}_{2}+x{}_{[-1]} b\otimes x{}_{[0]}+(x{}_{2}\rightharpoonup b)\otimes x{}_{1}+x{}_{[1]} b\otimes x{}_{[0]}\\ -x{}_{[1]}\otimes(b\triangleright x{}_{[0]})+b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{(0)}\otimes x b{}_{(1)}+b{}_{1}\otimes(b{}_{2}\triangleright x)\\ +b{}_{(0)}\otimes b{}_{(1)} x-(x \rightharpoonup b_{(0)}) \otimes b_{(1)}-x{}_{\{2\}}\otimes\theta(b,x{}_{\{1\}})-\sigma(x,b{}_{<1>})\otimes b{}_{<2>}$,
\item[(CDM14)] $\phi(a\leftharpoonup y)+\gamma(a \triangleright y)\\ =(a{}_{1}\triangleright y)\otimes a{}_{2}+a{}_{(-1)} y\otimes a{}_{(0)}+(a{}_{2}\triangleright y)\otimes a{}_{1}+a{}_{(1)} y\otimes a{}_{(0)}\\ -a{}_{(1)}\otimes (y\rightharpoonup a{}_{(0)})+y{}_{1}\otimes(a\leftharpoonup y{}_{2})+y{}_{[0]}\otimes a y{}_{[1]}+y{}_{1}\otimes(y{}_{2}\rightharpoonup a)\\ +y{}_{[0]}\otimes y{}_{[1]} a-\left(a\triangleright y_{[0]}\right)\otimes y_{[1]}-\theta(a,y{}_{\{1\}})\otimes y{}_{\{2\}}-a{}_{<2>}\otimes\sigma(y,a{}_{<1>})$,
\item[(CDM15)] $\phi(b a)+\gamma(\theta(b,a))+\tau\psi(b a)+\tau\rho(\theta(b,a))\\ =a{}_{(-1)}\otimes b a{}_{(0)}+(b\triangleright a{}_{(1)})\otimes a{}_{(0)}+(b{}_{(-1)}\triangleleft a)\otimes b{}_{(0)}+b{}_{(1)}\otimes b{}_{(0)} a\\ +a{}_{<1>}\otimes(b\leftharpoonup a{}_{<2>})+b{}_{<2>}\otimes(b{}_{<1>}\rightharpoonup a)+\theta(b,a{}_{2})\otimes a{}_{1}+\theta(b{}_{1},a)\otimes b{}_{2}$,
\item[(CDM16)] $\psi(b a)+\rho(\theta(b,a))+\tau\phi(b a)+\tau\gamma(\theta(b,a))\\ =a{}_{(0)}\otimes(b\triangleright a{}_{(1)})+b a{}_{(0)} \otimes a{}_{(-1)}+ b{}_{(0)} a\otimes b{}_{(1)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleleft a)\\ +a{}_{1}\otimes\theta(b,a{}_{2})+b{}_{2}\otimes\theta(b{}_{1},a)+(b\leftharpoonup a{}_{<2>})\otimes a{}_{<1>}+(b{}_{<1>}\rightharpoonup a)\otimes b{}_{<2>}$,
\item[(CDM17)] $\rho(y x)+\psi(\sigma(y,x))+\tau\gamma(y x)+\tau\phi(\sigma(y,x))\\ =x{}_{[-1]}\otimes y x{}_{[0]}+(y\rightharpoonup x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\leftharpoonup x)\otimes y{}_{[0]}+y{}_{[1]}\otimes y{}_{[0]} x\\ +x{}_{\{1\}}\otimes(y\triangleleft x{}_{\{2\}})+y{}_{\{2\}}\otimes(y{}_{\{1\}}\triangleright x)+\sigma(y,x{}_{2})\otimes x{}_{1}+\sigma(y{}_{1},x)\otimes y{}_{2}$,
\item[(CDM18)] $\gamma(y x)+\phi(\sigma(y,x))+\tau\rho(y x)+\tau\psi(\sigma(y,x))\\ =y x{}_{[0]}\otimes x{}_{[-1]}+y{}_{[0]} x\otimes y{}_{[1]}+y{}_{[0]}\otimes(y{}_{[-1]}\leftharpoonup x)+x{}_{[0]}\otimes(y\rightharpoonup x{}_{[1]})\\ +x{}_{1}\otimes\sigma(y,x{}_{2})+y{}_{2}\otimes\sigma(y{}_{1},x)+(y\triangleleft x{}_{\{2\}})\otimes x{}_{\{1\}}+(y{}_{\{1\}}\triangleright x)\otimes y{}_{\{2\}}$,
\item[(CDM19)] $\Delta_{A}(b \leftharpoonup x)+Q(b\triangleright x)+\tau\Delta_{A}(b \leftharpoonup x)+\tau Q(b\triangleright x)\\ =x{}_{[-1]}\otimes(b\leftharpoonup x{}_{[0]})+(b\leftharpoonup x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\leftharpoonup x)\otimes b{}_{2}+b{}_{2}\otimes(b{}_{1}\leftharpoonup x)\\ +x{}_{\{1\}}\otimes b x{}_{\{2\}}+b x{}_{\{2\}}\otimes x{}_{\{1\}}+\sigma(b{}_{(-1)},x)\otimes b{}_{(0)}+b{}_{(0)}\otimes\sigma(b{}_{(-1)},x)$,
\item[(CDM20)] $\Delta_{A}(y\rightharpoonup a)+Q(y\triangleleft a)+\tau\Delta_{A}(y\rightharpoonup a)+\tau Q(y\triangleleft a)\\ =a{}_{1}\otimes(y\rightharpoonup a{}_{2})+(y\rightharpoonup a{}_{2})\otimes a{}_{1}+(y{}_{[0]}\rightharpoonup a)\otimes y{}_{[1]}+y{}_{[1]}\otimes(y{}_{[0]}\rightharpoonup a)\\ +a{}_{(0)}\otimes\sigma(y,a{}_{(1)})+\sigma(y,a{}_{(1)})\otimes a{}_{(0)}+y{}_{\{1\}} a\otimes y{}_{\{2\}}+y{}_{\{2\}}\otimes y{}_{\{1\}} a$,
\item[(CDM21)] $\Delta_{H}(y\triangleleft a)+P(y\rightharpoonup a)+\tau\Delta_{H}(y\triangleleft a)+\tau P(y\rightharpoonup a)\\ =a{}_{(-1)}\otimes(y\triangleleft a{}_{(0)})+(y\triangleleft a{}_{(0)})\otimes a{}_{(-1)}+(y{}_{1}\triangleleft a)\otimes y{}_{2}+y{}_{2}\otimes(y{}_{1}\triangleleft a)\\ +a{}_{<1>}\otimes y a{}_{<2>}+y a{}_{<2>}\otimes a{}_{<1>}+\theta(y{}_{[-1]},a)\otimes y{}_{[0]}+y{}_{[0]}\otimes\theta(y{}_{[-1]},a)$,
\item[(CDM22)] $\Delta_{H}(b \triangleright x)+P(b\leftharpoonup x)+\tau\Delta_{H}(b \triangleright x)+\tau P(b\leftharpoonup x)\\ =x{}_{1}\otimes(b\triangleright x{}_{2})+(b\triangleright x{}_{2})\otimes x{}_{1}+(b{}_{(0)}\triangleright x)\otimes b{}_{(1)}+b{}_{(1)}\otimes (b{}_{(0)}\triangleright x)\\ +x{}_{[0]}\otimes\theta(b,x{}_{[1]})+\theta(b,x{}_{[1]})\otimes x{}_{[0]}+b{}_{<1>} x\otimes b{}_{<2>}+b{}_{<2>}\otimes b{}_{<1>} x$,
\item[(CDM23)] $\phi(y \rightharpoonup a)+\tau\psi(y \rightharpoonup a)+\gamma(y\triangleleft a)+\tau\rho(y\triangleleft a)\\ =a{}_{(-1)}\otimes(y\rightharpoonup a{}_{(0)})+(y\triangleleft a{}_{2})\otimes a{}_{1}+y a{}_{(1)}\otimes a{}_{(0)}+(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}\\ +y{}_{2}\otimes(y{}_{1}\rightharpoonup a)+y{}_{[0]}\otimes y{}_{[-1]} a+a{}_{<1>}\otimes\sigma(y,a{}_{<2>})+\theta(y{}_{\{1\}},a)\otimes y{}_{\{2\}}$,
\item[(CDM24)] $\phi(b \leftharpoonup x)+\tau\psi(b \leftharpoonup x)+\gamma(b\triangleright x)+\tau\rho(b\triangleright x)\\ =x{}_{1}\otimes(b\leftharpoonup x{}_{2})+x{}_{[0]}\otimes b x{}_{[1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\triangleright x)\otimes b{}_{2}\\ +b{}_{(-1)} x\otimes b{}_{(0)}+b{}_{(1)}\otimes(b{}_{(0)}\leftharpoonup x)+\theta(b,x{}_{\{2\}})\otimes x{}_{\{1\}}+b{}_{<2>}\otimes\sigma(b{}_{<1>},x)$,
\item[(CDM25)] $\psi(y \rightharpoonup a)+\tau\phi(y \rightharpoonup a)+\rho(y\triangleleft a)+\tau\gamma(y\triangleleft a)\\ =a{}_{1}\otimes(y\triangleleft a{}_{2})+a{}_{(0)}\otimes y a{}_{(1)}+(y\rightharpoonup a{}_{(0)})\otimes a{}_{(-1)}+(y{}_{1}\rightharpoonup a)\otimes y{}_{2}\\ +y{}_{[-1]} a\otimes y{}_{[0]}+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)+\sigma(y,a{}_{<2>})\otimes a{}_{<1>}+y{}_{\{2\}}\otimes\theta(y{}_{\{1\}},a)$,
\item[(CDM26)] $\psi(b \leftharpoonup x)+\tau\phi(b \leftharpoonup x)+\rho(b\triangleright x)+\tau\gamma(b\triangleright x)\\ =x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})+(b\leftharpoonup x{}_{2})\otimes x{}_{1}+b x{}_{[1]}\otimes x{}_{[0]}+(b{}_{(0)}\leftharpoonup x)\otimes b{}_{(1)}\\ +b{}_{2}\otimes(b{}_{1}\triangleright x)+b{}_{(0)}\otimes b{}_{(-1)} x+x{}_{\{1\}}\otimes\theta(b,x{}_{\{2\}})+\sigma(b{}_{<1>},x)\otimes b{}_{<2>}$,
\item[(CDM27)] $\Delta_{A}(\sigma(y,x))+Q(y,x)+\tau\Delta_{A}(\sigma(y,x))+\tau Q(y,x)\\ =x{}_{[-1]}\otimes\sigma(y,x{}_{[0]})+\sigma(y,x{}_{[0]})\otimes x{}_{[-1]}+x{}_{\{1\}}\otimes(y\rightharpoonup x{}_{\{2\}})+(y\rightharpoonup x{}_{\{2\}})\otimes x{}_{\{1\}}\\ +\sigma(y{}_{[0]},x)\otimes y{}_{[1]}+y{}_{[1]}\otimes\sigma(y{}_{[0]},x)+(y{}_{\{1\}}\leftharpoonup x)\otimes y{}_{\{2\}}+y{}_{\{2\}}\otimes(y{}_{\{1\}}\leftharpoonup x)$,
\item[(CDM28)] $P(b,a)+\Delta_{H}(\theta(b,a))+\tau P(b,a)+\tau\Delta_{H}(\theta(b,a))\\ =a{}_{(-1)}\otimes\theta(b,a{}_{(0)})+\theta(b,a{}_{(0)})\otimes a{}_{(-1)}+a{}_{<1>}\otimes(b\triangleright a{}_{<2>})+(b\triangleright a{}_{<2>})\otimes a{}_{<1>}\\ +\theta(b{}_{(0)},a)\otimes b{}_{(1)}+b{}_{(1)}\otimes\theta(b{}_{(0)},a)+(b{}_{<1>}\triangleleft a)\otimes b{}_{<2>}+b{}_{<2>}\otimes(b{}_{<1>}\triangleleft a)$. \end{enumerate} \noindent then $(A, H)$ is called a \emph{cocycle double matched pair}. \end{definition}
\begin{definition}\label{cocycle-braided} (i) A \emph{cocycle braided alternative bialgebra} $A$ is simultaneously a cocycle alternative algebra $(A, \theta)$ and a cycle alternative coalgebra $(A, Q)$ satisfying the conditions \begin{enumerate} \item[(CBB1)] $\Delta_{A}(ab)+Q(\theta(a,b))\\ =a_{1} b\otimes a_{2}+a{}_{2} b\otimes a{}_{1}-a{}_{2}\otimes b a{}_{1}+b{}_{1}\otimes a b{}_{2}\\ +b{}_{1}\otimes b{}_{2} a-a b{}_{1}\otimes b{}_{2}+(a{}_{(-1)}\rightharpoonup b)\otimes a{}_{(0)}+(a{}_{(1)}\rightharpoonup b)\otimes a{}_{(0)}\\ -a{}_{(0)}\otimes(b\leftharpoonup a{}_{(-1)})+b{}_{(0)}\otimes(a\leftharpoonup b{}_{(1)})+b{}_{(0)}\otimes(b{}_{(1)}\rightharpoonup a)-(a\leftharpoonup b{}_{(-1)})\otimes b{}_{(0)},$
\end{enumerate} \begin{enumerate} \item[(CBB2)] $\Delta_{A}(ba)+Q(\theta(b,a))+\tau\Delta_{A}(ba)+\tau Q(\theta(b,a))\\ =a_{1} \otimes b a_{2}+b a{}_{2} \otimes a{}_{1}+b{}_{1} a\otimes b{}_{2}+b{}_{2}\otimes b{}_{1} a\\ +a{}_{(0)}\otimes(b\leftharpoonup a{}_{(1)})+(b\leftharpoonup a{}_{(1)})\otimes a{}_{(0)}+(b{}_{(-1)}\rightharpoonup a)\otimes b{}_{(0)}+b{}_{(0)}\otimes(b{}_{(-1)}\rightharpoonup a).$
\end{enumerate} (ii) A \emph{cocycle braided alternative bialgebra} $H$ is simultaneously a cocycle alternative algebra $(H, \sigma)$ and a cycle alternative coalgebra $(H, P)$ satisfying the conditions \begin{enumerate} \item[(CBB3)] $\Delta_{H}(xy)+P(\sigma(x,y))\\ =x{}_{1} y\otimes x{}_{2}+x{}_{2} y\otimes x{}_{1}-x{}_{2}\otimes y x{}_{1}+y{}_{1}\otimes x y{}_{2}\\ +y{}_{1}\otimes y{}_{2} x-x y{}_{1}\otimes y{}_{2}+(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})+y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]},$
\end{enumerate} \begin{enumerate} \item[(CBB4)] $\Delta_{H}(yx)+P(\sigma(y,x))+\tau\Delta_{H}(yx)+\tau P(\sigma(y,x))\\ =x{}_{1} \otimes y x{}_{2}+y x{}_{2} \otimes x{}_{1}+y{}_{1} x\otimes y{}_{2}+y{}_{2}\otimes y{}_{1} x\\ +x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x).$
\end{enumerate} \end{definition}
The next theorem says that we can obtain an ordinary alternative bialgebra from two cocycle braided alternative bialgebras.
\begin{theorem}\label{main2} Let $A$, $H$ be cocycle braided alternative bialgebras, and let $(A, H)$ be both a cocycle cross product system and a cycle cross coproduct system. Then the cocycle cross product alternative algebra and the cycle cross coproduct alternative coalgebra fit together into an ordinary alternative bialgebra if and only if $(A, H)$ forms a cocycle double matched pair. We call it the cocycle bicrossproduct alternative bialgebra and denote it by $A^{P}_{\sigma}\# {}^{Q}_{\theta}H$.
\end{theorem}
\begin{proof} First, we need to check the first compatibility condition $$ \begin{aligned} &\Delta_{E}((a+x)(b+y))\\ =&\Delta_{E} (a+x)\bullet (b+y)+\tau\Delta_{E} (a+x)\bullet (b+y)\\ &- (b+y)\bullet\tau\Delta_{E}(a+x)+(a+x)\bullet \Delta_{E}(b+y)\\ &+[\Delta_{E}(b+y),(a+x)].
\end{aligned}
$$ The left hand side is equal to \begin{eqnarray*} &&\Delta_{E}((a+x)(b+y))\\ &=&\Delta_{E}(a b+x \rightharpoonup b+a\leftharpoonup y+\sigma(x, y)+ x y+x \triangleleft b+a\triangleright y+\theta(a, b))\\ &=&\Delta_A(a b)+\Delta_A(x \rightharpoonup b)+\Delta_{A}(a\leftharpoonup y)+\Delta_{A}(\sigma(x,y))\\ &&+\phi(a b)+\phi(x \rightharpoonup b)+\phi(a\leftharpoonup y)+\phi(\sigma(x, y))\\ &&+\psi(a b)+\psi(x \rightharpoonup b)+\psi(a\leftharpoonup y)+\psi(\sigma(x, y))\\ &&+P(ab)+P(x \rightharpoonup b)+P(a\leftharpoonup y)+P(\sigma(x, y))\\ &&+\Delta_{H}(x y)+\Delta_{H}(x \triangleleft b)+\Delta_{H}(a \triangleright y)+\Delta_{H}(\theta(a,b))\\ &&+\rho(x y)+\rho(x \triangleleft b)+\rho(a \triangleright y)+\rho(\theta(a, b))\\ &&+\gamma(x y)+\gamma(x \triangleleft b)+\gamma(a \triangleright y)+\gamma(\theta(a, b))\\ &&+Q(xy)+Q(x \triangleleft b)+Q(a\triangleright y)+Q(\theta(a, b)), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&\Delta_{E} (a+x)\bullet (b+y)+\tau\Delta_{E} (a+x)\bullet (b+y)- (b+y)\bullet\tau\Delta_{E}(a+x)\\ &&+(a+x)\bullet \Delta_{E}(b+y)+[\Delta_{E}(b+y),(a+x)]\\ &=&(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}+x_{[-1]} \otimes x_{[0]}\\ &&+x_{[0]}\otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}}) \bullet(b+ y)+(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}+a_{(1)} \otimes a_{(0)}\\ &&+a{}_{<2>}\otimes a{}_{<1>}+x_{2} \otimes x_{1}+x{}_{[0]}\otimes x{}_{[-1]}+x{}_{[1]}\otimes x{}_{[0]}+x{}_{\{2\}}\otimes x{}_{\{1\}}) \bullet(b+ y)\\ &&-(b+y)\bullet(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}+a_{(1)} \otimes a_{(0)}+a{}_{<2>}\otimes a{}_{<1>}+x_{2} \otimes x_{1}\\ &&+x{}_{[0]}\otimes x{}_{[-1]}+x{}_{[1]}\otimes x{}_{[0]}+x{}_{\{2\}}\otimes x{}_{\{1\}})+(a+ x) \bullet(b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}\\ &&+b_{(0)} \otimes b_{(1)}+b{}_{<1>}\otimes b{}_{<2>}+y_{1} \otimes y_{2}+y{}_{[-1]}\otimes y{}_{[0]}+y{}_{[0]}\otimes y{}_{[1]}+y{}_{\{1\}}\otimes y{}_{\{2\}})\\ &&+[b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}+b_{(0)} \otimes b_{(1)}+b{}_{<1>}\otimes b{}_{<2>}+y_{1} \otimes y_{2}+y{}_{[-1]}\otimes y{}_{[0]}\\ &&+y{}_{[0]}\otimes y{}_{[1]}+y{}_{\{1\}}\otimes y{}_{\{2\}},a+x]\\ &=&a_{1}b \otimes a_{2}+(a{}_{1}\leftharpoonup y)\otimes a{}_{2}+(a{}_{1}\triangleright y)\otimes a{}_{2}+\theta(a{}_{1},b)\otimes a{}_{2}\\ &&+(a{}_{(-1)} \rightharpoonup b)\otimes a{}_{(0)}+\sigma(a{}_{(-1)},y)\otimes a{}_{(0)}+a{}_{(-1)} y\otimes a{}_{(0)}+(a{}_{(-1)}\triangleleft b)\otimes a{}_{(0)}\\ &&+a{}_{(0)} b\otimes a{}_{(1)}+(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(1)}+(a{}_{(0)}\triangleright y)\otimes a{}_{(1)}+\theta(a{}_{(0)},b)\otimes a{}_{(1)}\\ &&+(a{}_{<1>}\rightharpoonup b)\otimes a{}_{<2>}+\sigma(a{}_{<1>},y)\otimes a{}_{<2>}+a{}_{<1>} y\otimes a{}_{<2>}+(a{}_{<1>}\triangleleft b)\otimes a{}_{<2>}\\ &&+(x{}_{1}\rightharpoonup b)\otimes x{}_{2}+\sigma(x{}_{1},y)\otimes x{}_{2}+x{}_{1} y\otimes x{}_{2}+(x{}_{1}\triangleleft b)\otimes x{}_{2}\\ &&+x{}_{[-1]} b\otimes x{}_{[0]}+(x{}_{[-1]}\leftharpoonup y)\otimes x{}_{[0]}+(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+\theta(x{}_{[-1]},b)\otimes x{}_{[0]}\\ &&+(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[1]}+\sigma(x{}_{[0]},y)\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}\\ &&+x{}_{\{1\}} b\otimes x{}_{\{2\}}+(x{}_{\{1\}}\leftharpoonup y)\otimes x{}_{\{2\}}+(x{}_{\{1\}}\triangleright y)\otimes x{}_{\{2\}}+\theta(x{}_{\{1\}},b)\otimes x{}_{\{2\}}\\ &&+a{}_{2} b\otimes 
a{}_{1}+(a{}_{2}\leftharpoonup y)\otimes a{}_{1}+(a{}_{2}\triangleright y)\otimes a{}_{1}+\theta(a{}_{2},b)\otimes a{}_{1}\\ &&+a{}_{(0)} b\otimes a{}_{(-1)}+(a{}_{(0)}\leftharpoonup y)\otimes a{}_{(-1)}+(a{}_{(0)}\triangleright y)\otimes a{}_{(-1)}+\theta(a{}_{(0)},b)\otimes a{}_{(-1)}\\ &&+(a{}_{(1)}\rightharpoonup b)\otimes a{}_{(0)}+\sigma(a{}_{(1)},y)\otimes a{}_{(0)}+a{}_{(1)} y\otimes a{}_{(0)}+(a{}_{1}\triangleleft b)\otimes a{}_{(0)}\\ &&+(a{}_{<2>}\rightharpoonup b)\otimes a{}_{<1>}+\sigma(a{}_{<2>},y)\otimes a{}_{<1>}+a{}_{<2>} y\otimes a{}_{<1>}+(a{}_{<2>}\triangleleft b)\otimes a{}_{<1>}\\ &&+(x{}_{2}\rightharpoonup b)\otimes x{}_{1}+\sigma(x{}_{2},y)\otimes x{}_{1}+x{}_{2} y\otimes x{}_{1}+(x{}_{2}\triangleleft b)\otimes x{}_{1}\\ &&+(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[-1]}+\sigma(x{}_{[0]},y)\otimes x{}_{[-1]}+x{}_{[0]} y\otimes x{}_{[-1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}\\ &&+x{}_{[1]} b\otimes x{}_{[0]}+(x{}_{[1]}\leftharpoonup y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}+\theta(x{}_{[1]},b)\otimes x{}_{[0]}\\ &&+x{}_{\{2\}} b\otimes x{}_{\{1\}}+(x{}_{\{2\}}\leftharpoonup y)\otimes x{}_{\{1\}}+(x{}_{\{2\}}\triangleright y)\otimes x{}_{\{1\}}+\theta(x{}_{\{2\}},b)\otimes x{}_{\{1\}}\\ &&-a{}_{2}\otimes b a{}_{1}-a{}_{2}\otimes (y\rightharpoonup a{}_{1})-a{}_{2}\otimes(y\triangleleft a{}_{1})-a{}_{2}\otimes\theta(b,a{}_{1})\\ &&-a{}_{(0)}\otimes (b\leftharpoonup a{}_{(-1)})-a{}_{(0)}\otimes\sigma(y,a{}_{(-1)})-a{}_{(0)}\otimes y a{}_{(-1)}-a{}_{(0)}\otimes(b\triangleright a{}_{(-1)})\\ &&-a{}_{(1)}\otimes b a{}_{(0)}-a{}_{(1)}\otimes(y\rightharpoonup a{}_{(0)})-a{}_{(1)}\otimes (y\triangleleft a{}_{(0)})-a{}_{(1)}\otimes\theta(b,a{}_{(0)})\\ &&-a{}_{<2>}\otimes(b\leftharpoonup a{}_{<1>})-a{}_{<2>}\otimes\sigma(y,a{}_{<1>})-a{}_{<2>}\otimes y a{}_{<1>}-a{}_{<2>}\otimes(b\triangleright a{}_{<1>})\\ &&-x{}_{2}\otimes(b\leftharpoonup x{}_{1})-x{}_{2}\otimes\sigma(y,x{}_{1})-x{}_{2}\otimes y x{}_{1}-x{}_{2}\otimes (b\triangleright x{}_{1})\\ &&-x{}_{[0]}\otimes b x{}_{[-1]}-x{}_{[0]}\otimes(y\rightharpoonup x{}_{[-1]})-x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})-x{}_{[0]}\otimes\theta(b,x{}_{[-1]})\\ &&-x{}_{[1]}\otimes(b\leftharpoonup x{}_{[0]})-x{}_{[1]}\otimes\sigma(y,x{}_{[0]})-x{}_{[1]}\otimes y x{}_{[0]}-x{}_{[1]}\otimes(b\triangleright x{}_{[0]})\\ &&-x{}_{\{2\}}\otimes b x{}_{\{1\}}-x{}_{\{2\}}\otimes(y\rightharpoonup x{}_{\{1\}})-x{}_{\{2\}}\otimes(y\triangleleft x{}_{\{1\}})-x{}_{\{2\}}\otimes\theta(b,x{}_{\{1\}})\\ &&+b{}_{1}\otimes a b{}_{2}+b{}_{1}\otimes (x\rightharpoonup b{}_{2})+b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{1}\otimes\theta(a,b{}_{2})\\ &&+b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes (x\rightharpoonup b{}_{(0)})+b{}_{(-1)}\otimes(x\triangleleft b{}_{(0)})+a{}_{(-1)}\otimes\theta(a,b{}_{(0)})\\ &&+b{}_{(0)}\otimes (a\leftharpoonup b{}_{(1)})+b{}_{(0)}\otimes\sigma(x,b{}_{(1)})+b{}_{(0)}\otimes x b{}_{(1)}+b{}_{(0)}\otimes(a\triangleright b{}_{(1)})\\ &&+b{}_{<1>}\otimes(a\leftharpoonup b{}_{<2>})+b{}_{<1>}\otimes\sigma(x, b{}_{<2>})+b{}_{<1>}\otimes x b{}_{<2>}+b{}_{<1>}\otimes(a\triangleright b{}_{<2>})\\ &&+y{}_{1}\otimes (a\leftharpoonup y{}_{2})+y{}_{1}\otimes\sigma(x, y{}_{2})+y{}_{1}\otimes x y{}_{2}+y{}_{1}\otimes(a\triangleright y{}_{2})\\ &&+y{}_{[-1]}\otimes(a\leftharpoonup y{}_{[0]})+y{}_{[-1]}\otimes\sigma(x,y{}_{[0]})+y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})\\ &&+y{}_{[0]}\otimes a y{}_{[1]}+y{}_{[0]}\otimes(x\rightharpoonup 
y{}_{[1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})+y{}_{[0]}\otimes\theta(a,y{}_{[1]})\\ &&+y{}_{\{1\}}\otimes a y{}_{\{2\}}+y{}_{\{1\}}\otimes(x\rightharpoonup y{}_{\{2\}})+y{}_{\{1\}}\otimes(x\triangleleft y{}_{\{2\}})+y{}_{\{1\}}\otimes\theta(a, y{}_{\{2\}})\\ &&+b{}_{1}\otimes b{}_{2} a+b{}_{1}\otimes (b{}_{2}\leftharpoonup x)+b{}_{1}\otimes(b{}_{2}\triangleright x)+b{}_{1}\otimes\theta(b{}_{2},a)\\ &&-a b{}_{1}\otimes b{}_{2}-(x\rightharpoonup b{}_{1})\otimes b{}_{2}-(x\triangleleft b{}_{1})\otimes b{}_{2}-\theta(a,b{}_{1})\otimes b{}_{2}\\ &&+b{}_{(-1)}\otimes b{}_{(0)} a+b{}_{(-1)}\otimes(b{}_{(0)}\leftharpoonup x)+b{}_{(-1)}\otimes(b{}_{(0)}\triangleright x)+b{}_{(-1)}\otimes\theta(b{}_{(0)},a)\\ &&-(a\leftharpoonup b{}_{(-1)})\otimes b{}_{(0)}-\sigma(x,b{}_{(-1)})\otimes b{}_{(0)}-x b{}_{(-1)}\otimes b{}_{(0)}-(a\triangleright b{}_{(-1)})\otimes b{}_{(0)}\\ &&+b{}_{(0)}\otimes (b{}_{(1)}\rightharpoonup a)+b{}_{(0)}\otimes\sigma(b{}_{(1)},x)+b{}_{(0)}\otimes b{}_{(1)} x+b{}_{(0)}\otimes(b{}_{(1)}\triangleleft a)\\ &&-a b{}_{(0)}\otimes b{}_{(1)}-(x\rightharpoonup b{}_{(0)})\otimes b{}_{(1)}-(x\triangleleft b{}_{(0)})\otimes b{}_{(1)}-\theta(a,b{}_{(0)})\otimes b{}_{(1)}\\ &&+b{}_{<1>}\otimes (b{}_{<2>}\rightharpoonup a)+b{}_{<1>}\otimes \sigma(b{}_{<2>} ,x)+b{}_{<1>}\otimes b{}_{<2>} x+b{}_{<1>}\otimes (b{}_{<2>}\triangleleft a)\\ &&-(a\leftharpoonup b{}_{<1>})\otimes b{}_{<2>}-\sigma(x, b{}_{<1>})\otimes b{}_{<2>}-x b{}_{<1>}\otimes b{}_{<2>}-(a\triangleright b{}_{<1>})\otimes b{}_{<2>}\\ &&+y{}_{1}\otimes (y{}_{2}\rightharpoonup a)+y{}_{1}\otimes\sigma(y{}_{2},x)+y{}_{1}\otimes y{}_{2} x+y{}_{1}\otimes(y{}_{2}\triangleleft a)\\ &&-(a\leftharpoonup y{}_{1})\otimes y{}_{2}-\sigma(x,y{}_{1})\otimes y{}_{2}-x y{}_{1}\otimes y{}_{2}-(a\triangleright y{}_{1})\otimes y{}_{2}\\ &&+y{}_{[-1]}\otimes(y{}_{[0]}\rightharpoonup a)+y{}_{[-1]}\otimes\sigma(y{}_{[0]},x)+y{}_{[-1]}\otimes y{}_{[0]} x+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)\\ &&-a y{}_{[-1]}\otimes y{}_{[0]}-(x\rightharpoonup y{}_{[-1]})\otimes y{}_{[0]}-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]}-\theta(a,y{}_{[-1]})\otimes y{}_{[0]}\\ &&+y{}_{[0]}\otimes y{}_{[1]} a+y{}_{[0]}\otimes(y{}_{[1]}\leftharpoonup x)+y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)+y{}_{[0]}\otimes\theta(y{}_{[1]},a)\\ &&-(a\leftharpoonup y{}_{[0]})\otimes y{}_{[1]}-\sigma(x,y{}_{[0]})\otimes y{}_{[1]}-x y{}_{[0]}\otimes y{}_{[1]}-(a\triangleright y{}_{[0]})\otimes y{}_{[1]}\\ &&+y{}_{\{1\}}\otimes y{}_{\{2\}} a+y{}_{\{1\}}\otimes(y{}_{\{2\}}\leftharpoonup x)+y{}_{\{1\}}\otimes(y{}_{\{2\}}\triangleright x)+y{}_{\{1\}}\otimes\theta(y{}_{\{2\}},a)\\ &&-a y{}_{\{1\}}\otimes y{}_{\{2\}}-(x\rightharpoonup y{}_{\{1\}})\otimes y{}_{\{2\}}-(x\triangleleft y{}_{\{1\}})\otimes y{}_{\{2\}}-\theta(a, y{}_{\{1\}})\otimes y{}_{\{2\}}. \end{eqnarray*} If we compare both the two sides item by item, one will find all the cocycle double matched pair conditions (CDM1)--(CDM14) in Definition \ref{cocycledmp}.
Next, we check the second compatibility condition $$ \begin{aligned} &\Delta_{E}((b+y)(a+x))+\tau\Delta_{E}((b+y)(a+x))\\ =&(b+y)\bullet \Delta_{E} (a+x)+(b+y)\cdot\tau\Delta_{E} (a+x)\\ &+ \Delta_{E}(b+y)\bullet (a+x)+\tau \Delta_{E}(b+y)\cdot (a+x).
\end{aligned}
$$ The left hand side is equal to \begin{eqnarray*} &&\Delta_{E}((b+y)(a+x))+\tau\Delta_{E}((b+y)(a+x))\\ &=&\Delta_E( b a+y \rightharpoonup a+b \leftharpoonup x+\sigma(y, x)+ y x+y\triangleleft a+b\triangleright x+\theta(b, a))\\ &&+\tau\Delta_E( b a+y \triangleright a+b \triangleleft x+\sigma(y, x)+ y x+y\triangleleft a+b\triangleright x+\theta(b, a))\\ &=&\Delta_A( ba)+\Delta_A(y \rightharpoonup a)+\Delta_A(b \leftharpoonup x)+\Delta_A(\sigma(y, x))+\phi(b a)+\phi(y \rightharpoonup a)\\ &&+\phi(b \leftharpoonup x)+\phi(\sigma(y, x))+\psi(b a)+\psi(y \rightharpoonup a)+\psi(b \leftharpoonup x)+\psi(\sigma(y, x))\\ &&+P(b a)+P(y\rightharpoonup a)+P(b\leftharpoonup x)+P(\sigma(y, x))+\Delta_{H}(yx)+\Delta_{H}(y\triangleleft a)\\ &&+\Delta_{H}(b\triangleright x)+\Delta_{H}(\theta(b, a))+\rho(y x)+\rho(y\triangleleft a)+\rho(b\triangleright x)+\rho(\theta(b, a))\\ &&+\gamma(y x)+\gamma(y\triangleleft a)+\gamma(b\triangleright x)+\gamma(\theta(b, a))+Q(y x)+Q(y\triangleleft a)\\ &&+Q(b\triangleright x)+Q(\theta(b, a))+\tau\Delta_A( ba)+\tau\Delta_A(y \rightharpoonup a)+\tau\Delta_A(b \leftharpoonup x)\\ &&+\tau\Delta_A(\sigma(y, x))+\tau\phi(b a)+\tau\phi(y \rightharpoonup a)+\tau\phi(b \leftharpoonup x)+\tau\phi(\sigma(y, x))\\ &&+\tau\psi(b a)+\tau\psi(y \rightharpoonup a)+\tau\psi(b \leftharpoonup x)+\tau\psi(\sigma(y, x))+\tau P(b a)\\ &&+\tau P(y\rightharpoonup a)+\tau P(b\leftharpoonup x)+\tau P(\sigma(y, x))+\tau\Delta_{H}(yx)+\tau\Delta_{H}(y\triangleleft a)\\ &&+\tau\Delta_{H}(b\triangleright x)+\tau\Delta_{H}(\theta(b, a))+\tau\rho(y x)+\tau\rho(y\triangleleft a)+\tau\rho(b\triangleright x)\\ &&+\tau\rho(\theta(b, a))+\tau\gamma(y x)+\tau\gamma(y\triangleleft a)+\tau\gamma(b\triangleright x)+\tau\gamma(\theta(b, a))\\ &&+\tau Q(y x)+\tau Q(y\triangleleft a)+\tau Q(b\triangleright x)+\tau Q(\theta(b, a)), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&(b+y)\bullet \Delta_{E} (a+x)+(b+y)\cdot\tau\Delta_{E} (a+x)\\ &&+ \Delta_{E}(b+y)\bullet (a+x)+\tau \Delta_{E}(b+y)\cdot (a+x)\\ &=&(b+y)\bullet(a_{1} \otimes a_{2}+a_{(-1)} \otimes a_{(0)}+a_{(0)} \otimes a_{(1)}+a{}_{<1>}\otimes a{}_{<2>}+x_{1} \otimes x_{2}\\ &&+x_{[-1]} \otimes x_{[0]}+x_{[0]}\otimes x_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}})+(b+y)\cdot(a_{2} \otimes a_{1}+a_{(0)} \otimes a_{(-1)}\\ &&+a_{(1)} \otimes a_{(0)}+a{}_{<2>}\otimes a{}_{<1>}+x_{2} \otimes x_{1}+x{}_{[0]}\otimes x{}_{[-1]}+x{}_{[1]}\otimes x{}_{[0]}+x{}_{\{2\}}\otimes x{}_{\{1\}})\\ &&+(b_{1} \otimes b_{2}+b_{(-1)} \otimes b_{(0)}+b_{(0)} \otimes b_{(1)}+b{}_{<1>}\otimes b{}_{<2>}+y_{1} \otimes y_{2}+y{}_{[-1]}\otimes y{}_{[0]}\\ &&+y{}_{[0]}\otimes y{}_{[1]}+y{}_{\{1\}}\otimes y{}_{\{2\}})\bullet(a+x)+(b{}_{2}\otimes b{}_{1}+b{}_{(0)}\otimes b{}_{(-1)}+b{}_{(1)}\otimes b{}_{(0)}\\ &&+b{}_{<2>}\otimes b{}_{<1>}+y{}_{2}\otimes y{}_{1}+y{}_{[0]}\otimes y{}_{[-1]}+y{}_{[1]}\otimes y{}_{[0]}+y{}_{\{2\}}\otimes y{}_{\{1\}})\cdot (a+x)\\ &=&a{}_{1}\otimes b a{}_{2}+a{}_{1}\otimes(y\rightharpoonup a{}_{2})+a{}_{1}\otimes(y\triangleleft a{}_{2})+a{}_{1}\otimes\theta(b, a{}_{2})\\ &&+a{}_{(-1)}\otimes b a{}_{(0)}+a{}_{(-1)}\otimes (y\rightharpoonup a{}_{(0)})+a{}_{(-1)}\otimes(y\triangleleft a{}_{(0)})+a{}_{(-1)}\otimes\theta(b, a{}_{(0)})\\ &&+a{}_{(0)}\otimes(b\leftharpoonup a{}_{(1)})+a{}_{(0)}\otimes\sigma(y, a{}_{(1)})+a{}_{(0)}\otimes y a{}_{(1)}+a{}_{(0)}\otimes (b\triangleright a{}_{(1)})\\ &&+a{}_{<1>}\otimes(b\leftharpoonup a{}_{<2>})+a{}_{<1>}\otimes\sigma(y, a{}_{<2>})+a{}_{<1>}\otimes y a{}_{<2>}+a{}_{<1>}\otimes(b\triangleright 
a{}_{<2>})\\ &&+x{}_{1}\otimes(b\leftharpoonup x{}_{2})+x{}_{1}\otimes\sigma(y, x{}_{2})+x{}_{1}\otimes y x{}_{2}+x{}_{1}\otimes(b\triangleright x{}_{2})\\ &&+x{}_{[-1]}\otimes(b\leftharpoonup x{}_{[0]})+x{}_{[-1]}\otimes\sigma(y, x{}_{[0]})+x{}_{[-1]}\otimes y x{}_{[0]}+x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})\\ &&+x{}_{[0]}\otimes b x{}_{[1]}+x{}_{[0]}\otimes(y\rightharpoonup x{}_{[1]})+x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+x{}_{[0]}\otimes\theta(b, x{}_{[1]})\\ &&+x{}_{\{1\}}\otimes b x{}_{\{2\}}+x{}_{\{1\}}\otimes(y\rightharpoonup x{}_{\{2\}})+x{}_{\{1\}}\otimes(y\triangleleft x{}_{\{2\}})+x{}_{\{1\}}\otimes\theta(b, x{}_{\{2\}})\\ &&+b a{}_{2}\otimes a{}_{1}+(y\rightharpoonup a{}_{2})\otimes a{}_{1}+(y\triangleleft a{}_{2})\otimes a{}_{1}+\theta(b, a{}_{2})\otimes a{}_{1}\\ &&+b a{}_{(0)}\otimes a{}_{(-1)}+(y\rightharpoonup a{}_{(0)})\otimes a{}_{(-1)}+(y\triangleleft a{}_{(0)})\otimes a{}_{(-1)}+\theta(b, a{}_{(0)})\otimes a{}_{(-1)}\\ &&+(b\leftharpoonup a{}_{(1)})\otimes a{}_{(0)}+\sigma(y, a{}_{(1)})\otimes a{}_{(0)}+y a{}_{(1)}\otimes a{}_{(0)}+(b\triangleright a{}_{(1)})\otimes a{}_{(0)}\\ &&+(b\leftharpoonup a{}_{<2>})\otimes a{}_{<1>}+\sigma(y, a{}_{<2>})\otimes a{}_{<1>}+y a{}_{<2>}\otimes a{}_{<1>}+(b\triangleright a{}_{<2>})\otimes a{}_{<1>}\\ &&+(b\leftharpoonup x{}_{2})\otimes x{}_{1}+\sigma(y, x{}_{2})\otimes x{}_{1}+y x{}_{2}\otimes x{}_{1}+(b\triangleright x{}_{2})\otimes x{}_{1}\\ &&+(b\leftharpoonup x{}_{[0]})\otimes x{}_{[-1]}+\sigma(y, x{}_{[0]})\otimes x{}_{[-1]}+y x{}_{[0]}\otimes x{}_{[-1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}\\ &&+b x{}_{[1]}\otimes x{}_{[0]}+(y\rightharpoonup x{}_{[1]})\otimes x{}_{[0]}+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+\theta(b, x{}_{[1]})\otimes x{}_{[0]}\\ &&+b x{}_{\{2\}}\otimes x{}_{\{1\}}+(y\rightharpoonup x{}_{\{2\}})\otimes x{}_{\{1\}}+(y\triangleleft x{}_{\{2\}})\otimes x{}_{\{1\}}+\theta(b, x{}_{\{2\}})\otimes x{}_{\{1\}}\\ &&+b{}_{1} a\otimes b{}_{2}+(b{}_{1}\leftharpoonup x)\otimes b{}_{2}+(b{}_{1}\triangleright x)\otimes b{}_{2}+\theta(b{}_{1}, a)\otimes b{}_{2}\\ &&+(b{}_{(-1)}\rightharpoonup a)\otimes b{}_{(0)}+\sigma(b{}_{(-1)}, x)\otimes b{}_{(0)}+b{}_{(-1)} x\otimes b{}_{(0)}+(b{}_{(-1)}\triangleleft a)\otimes b{}_{(0)}\\ &&+b{}_{(0)} a\otimes b{}_{(1)}+(b{}_{(0)}\leftharpoonup x)\otimes b{}_{(1)}+(b{}_{(0)}\triangleright x)\otimes b{}_{(1)}+\theta(b{}_{(0)}, a)\otimes b{}_{(1)}\\ &&+(b{}_{<1>}\rightharpoonup a)\otimes b{}_{<2>}+\sigma(b{}_{<1>}, x)\otimes b{}_{<2>}+b{}_{<1>} x\otimes b{}_{<2>}+(b{}_{<1>}\triangleleft a)\otimes b{}_{<2>}\\ &&+(y{}_{1}\rightharpoonup a)\otimes y{}_{2}+\sigma(y{}_{1}, x)\otimes y{}_{2}+y{}_{1} x\otimes y{}_{2}+(y{}_{1}\triangleleft a)\otimes y{}_{2}\\ &&+y{}_{[-1]} a\otimes y{}_{[0]}+(y{}_{[-1]}\leftharpoonup x)\otimes y{}_{[0]}+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}+\theta(y{}_{[-1]}, a)\otimes y{}_{[0]}\\ &&+(y{}_{[0]}\rightharpoonup a)\otimes y{}_{[1]}+\sigma(y{}_{[0]}, x)\otimes y{}_{[1]}+y{}_{[0]} x\otimes y{}_{[1]}+(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}\\ &&+y{}_{\{1\}} a\otimes y{}_{\{2\}}+(y{}_{\{1\}}\leftharpoonup x)\otimes y{}_{\{2\}}+(y{}_{\{1\}}\triangleright x)\otimes y{}_{\{2\}}+\theta(y{}_{\{1\}}, a)\otimes y{}_{\{2\}}\\ &&+b{}_{2}\otimes b{}_{1} a+b{}_{2}\otimes(b{}_{1}\leftharpoonup x)+b{}_{2}\otimes (b{}_{1}\triangleright x)+b{}_{2}\otimes \theta(b{}_{1}, a)\\ &&+b{}_{(0)}\otimes(b{}_{(-1)}\rightharpoonup a)+b{}_{(0)}\otimes\sigma(b{}_{(-1)}, x)+b{}_{(0)}\otimes b{}_{(-1)} x+b{}_{(0)}\otimes(b{}_{(-1)}\triangleleft a)\\ 
&&+b{}_{(1)}\otimes b{}_{(0)} a+b{}_{(1)}\otimes(b{}_{(0)}\leftharpoonup x)+b{}_{(1)}\otimes(b{}_{(0)}\triangleright x)+b{}_{(1)}\otimes\theta(b{}_{(0)}, a)\\ &&+b{}_{<2>}\otimes(b{}_{<1>}\rightharpoonup a)+b{}_{<2>}\otimes\sigma(b{}_{<1>}, x)+b{}_{<2>}\otimes b{}_{<1>} x+b{}_{<2>}\otimes(b{}_{<1>}\triangleleft a)\\ &&+y{}_{2}\otimes(y{}_{1}\rightharpoonup a)+y{}_{2}\otimes\sigma(y{}_{1}, x)+y{}_{2}\otimes y{}_{1} x+y{}_{2}\otimes(y{}_{1}\triangleleft a)\\ &&+y{}_{[0]}\otimes y{}_{[-1]} a+y{}_{[0]}\otimes(y{}_{[-1]}\leftharpoonup x)+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x)+y{}_{[0]}\otimes\theta(y{}_{[-1]}, a)\\ &&+y{}_{[1]}\otimes(y{}_{[0]}\rightharpoonup a)+y{}_{[1]}\otimes\sigma(y{}_{[0]}, x)+y{}_{[1]}\otimes y{}_{[0]} x+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)\\ &&+y{}_{\{2\}}\otimes y{}_{\{1\}} a+y{}_{\{2\}}\otimes(y{}_{\{1\}}\leftharpoonup x)+y{}_{\{2\}}\otimes(y{}_{\{1\}}\triangleright x)+y{}_{\{2\}}\otimes\theta(y{}_{\{1\}}, a). \end{eqnarray*} Comparing the two sides item by item, one obtains all the cocycle double matched pair conditions (CDM15)--(CDM28) in Definition \ref{cocycledmp}.
This completes the proof. \end{proof}
\section{Extending structures for alternative bialgebras} In this section, we study the extending problem for alternative bialgebras. We identify some special cases in which the braided alternative bialgebra reduces to an ordinary alternative bialgebra. It is proved that the extending problem can be solved by means of a non-abelian cohomology theory based on the cocycle bicrossproduct for braided alternative bialgebras constructed in the last section.
\subsection{Extending structures for alternative algebras} First we study the extending problem for alternative algebras and alternative coalgebras.
There are two cases for $A$ to be an alternative algebra in the cocycle cross product system defined in the last section; see conditions (CC11)--(CC12). The first case is when $\rightharpoonup$ and $\leftharpoonup$ are trivial and $\theta\neq 0$. Then from condition (CP9) we get $\sigma(x,\theta(a, b))+\sigma(x,\theta(b, a))=0$; since $\theta\neq 0$, we assume $\sigma=0$ for simplicity. Thus we obtain the following type (a1) unified product for alternative algebras.
\begin{lemma} Let ${A}$ be an alternative algebra and $V$ be a vector space. An extending datum of ${A}$ by $V$ of type (a1) is $\Omega^{(1)}({A},V)=(\triangleright,\triangleleft, \theta)$ consisting of bilinear maps \begin{eqnarray*} \triangleright: A\otimes V \to V,\quad \triangleleft: V\otimes A \to V,\quad\theta: A\otimes A\to V. \end{eqnarray*}
Denote by $A_{}\#_{\theta}V$ the vector space $E={A}\oplus V$ together with the multiplication given by \begin{eqnarray} (a+ x)(b+ y)=ab+\big( xy+x\triangleleft b+a\triangleright y+\theta(a, b)\big). \end{eqnarray} Then $A_{}\# {}_{\theta}V$ is an alternative algebra if and only if the following compatibility conditions hold for all $a$, $b\in {A}$, $x$, $y$, $z\in V$: \begin{enumerate} \item[(A1)] $(ab +ba)\triangleright x+(\theta(a, b) +\theta(b, a)) x=a\triangleright (b\triangleright x)+b\triangleright (a\triangleright x) $, \item[(A2)] $(ab) \triangleright x+\theta(a, b) x+(a\triangleright x)\triangleleft b=a\triangleright (b\triangleright x+x\triangleleft b) $, \item[(A3)] $x \triangleleft (a b)+x\theta(a, b)+a \triangleright (x\triangleleft b)=(a\triangleright x+x \triangleleft a) \triangleleft b $, \item[(A4)] $x \triangleleft (a b+b a)+x(\theta(b, a)+\theta(a, b))=(x \triangleleft a) \triangleleft b+(x \triangleleft b) \triangleleft a $, \item[(A5)] $ a \triangleright (x y)+x(a\triangleright y) =(a \triangleright x+x\triangleleft a) y $, \item[(A6)] $ a \triangleright (x y+y x) =(a \triangleright x)y+(a \triangleright y)x$, \item[(A7)] $(x y+yx)\triangleleft a = x(y \triangleleft a)+y(x \triangleleft a)$, \item[(A8)] $(x y) \triangleleft a +(x\triangleleft a)y= x(y \triangleleft a+a\triangleright y)$, \item[(A9)]$\theta(a b, c)+\theta(ba, c)+(\theta(a, b)+\theta(b,a) ) \triangleleft c=\theta(a, b c)+\theta(b, a c)+b\triangleright \theta(a, c)+a\triangleright \theta(b, c),$ \item[(A10)]$\theta(a b, c)+\theta(ac, b)+\theta(a, b) \triangleleft c+\theta(a,c) \triangleleft b=\theta(a, b c)+\theta(a, cb)+a\triangleright (\theta(c, b)+ \theta(b, c)),$ \item[(A11)] $(x y)z-x(yz)=-(yx)z+y(xz)$, \item[(A12)] $(x y)z-x(yz)=-(xz)y+x(zy)$. \end{enumerate} \end{lemma} Note that (A1)--(A8) are deduced from (CP5)--(CP8) , (CP13)--(CP16) and by (A11)--(A12) we obtain that $V$ is an alternative algebra. Furthermore, $V$ is in fact an alternative subalgebra of $A_{}\#_{\theta}V$ but $A$ is not although $A$ is itself an alternative algebra.
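For instance, in the degenerate case where $\triangleright$, $\triangleleft$ and $\theta$ all vanish, every term of (A1)--(A10) is zero and (A11)--(A12) say precisely that $V$ is an alternative algebra; the multiplication then reduces to
$$(a+ x)(b+ y)=ab+xy,$$
so $A_{}\#_{\theta}V$ is simply the direct sum of the alternative algebras $A$ and $V$.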
Denote the set of all algebraic extending data of ${A}$ by $V$ of type (a1) by $\mathcal{A}^{(1)}({A},V)$.
In the following, we always assume that $A$ is a subspace of a vector space $E$ and that there exists a projection map $p: E \to{A}$ such that $p(a) = a$ for all $a \in {A}$. Then the kernel space $V := \ker(p)$ is also a subspace of $E$ and a complement of ${A}$ in $E$.
\begin{lemma}\label{lem:33-1} Let ${A}$ be an alternative algebra and $E$ be a vector space containing ${A}$ as a subspace. Suppose that there is an alternative algebra structure on $E$ such that $V$ is an alternative subalgebra of $E$ and the canonical projection map $p: E\to A$ is an alternative algebra homomorphism.
Then there exists an alternative algebraic extending datum $\Omega^{(1)}({A},V)$ of ${A}$ by $V$ such that $E\cong A_{}\#_{\theta}V$. \end{lemma}
\begin{proof} Since $V$ is an alternative subalgebra of $E$, we have $x\cdot_E y\in V$ for all $x, y\in V$. We define the extending datum of ${A}$ through $V$ by the following formulas: \begin{eqnarray*} \triangleright: A\otimes {V} \to V, \qquad {a} \triangleright {x} &:=&{a}\cdot_E {x},\\ \triangleleft: V\otimes {A} \to V, \qquad {x} \triangleleft {a} &:=&{x}\cdot_E {a},\\ \theta: A\otimes A \to V, \qquad \theta(a,b) &:=&a\cdot_E b-p \bigl(a\cdot_E b\bigr),\\ {\cdot_V}: V \otimes V \to V, \qquad {x}\cdot_V {y}&:=& {x}\cdot_E{y} . \end{eqnarray*} for any $a , b\in {A}$ and $x, y\in V$. It is easy to see that the above maps are well defined and $\Omega^{(1)}({A}, V)$ is an extending system of ${A}$ through $V$ and \begin{eqnarray*} \varphi:A_{}\#_{\theta}V\to E, \qquad \varphi(a+ x) := a+x \end{eqnarray*} is an isomorphism of alternative algebras. \end{proof}
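To spell out why these maps take values in $V$ (a minimal check, using only that $p$ is an algebra homomorphism with $p|_{A}=\mathrm{id}_{A}$ and $V=\ker(p)$), note that
$$p(a\triangleright x)=p(a\cdot_E x)=p(a)p(x)=0,\qquad p(x\triangleleft a)=p(x)p(a)=0,$$
$$p\bigl(\theta(a,b)\bigr)=p(a\cdot_E b)-p\bigl(p(a\cdot_E b)\bigr)=p(a\cdot_E b)-p(a\cdot_E b)=0,$$
so $a\triangleright x$, $x\triangleleft a$ and $\theta(a,b)$ all lie in $\ker(p)=V$. In fact $p(a\cdot_E b)=ab$, so $\theta(a,b)=a\cdot_E b-ab$ measures the failure of the inclusion $A\subseteq E$ to be multiplicative.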
\begin{lemma} Let $\Omega^{(1)}(A, V) = \bigl(\triangleright, \triangleleft, \theta, \cdot \bigr)$ and $\Omega'^{(1)}(A, V) = \bigl(\triangleright ', \, \triangleleft', \theta', \cdot ' \bigr)$ be two algebraic extending data of ${A}$ by $V$ of type (a1) and $A_{}\#_{\theta} V$, $A_{}\#_{\theta'} V$ be the corresponding unified products. Then there exists a bijection between the set of all homomorphisms of alternative algebras $\varphi:A_{}\#_{\theta} V\to A_{}\#_{\theta'} V$ whose restriction on ${A}$ is the identity map and the set of pairs $(r,s)$, where $r:V\rightarrow {A}$ and $s:V\rightarrow V$ are two linear maps satisfying \begin{eqnarray} &&{r}(x\triangleleft a)={r}(x)\cdot' a,\\ &&{r}(a\triangleright x)= a\cdot'{r}(x),\\ &&a\cdot' b=ab+r\theta(a,b),\\ &&{r}(xy)={r}(x)\cdot' {r}(y),\\ &&{s}(x)\triangleleft' a+\theta'(r(x), a)={s}(x\triangleleft a),\\ &&a\triangleright'{s}(y)+\theta'(a,r(y))={s}(a\triangleright y),\\ &&\theta'(a,b)=s\theta(a,b),\\ &&{s}(xy)={s}(x)\cdot' {s}(y)+{s}(x)\triangleleft'{r}(y)+{r}(x)\triangleright'{s}(y)+\theta'(r(x), r(y)), \end{eqnarray} for all $a, b\in{A}$ and $x$, $y\in V$.
Under the above bijection the homomorphism of alternative algebras $\varphi=\varphi_{r,s}: A_{}\#_{\theta}V\to A_{}\#_{\theta'} V$ corresponding to $(r,s)$ is given by $\varphi(a+x)=(a+r(x))+ s(x)$ for all $a\in {A}$ and $x\in V$. Moreover, $\varphi=\varphi_{r,s}$ is an isomorphism if and only if $s: V\rightarrow V$ is a linear isomorphism. \end{lemma}
\begin{proof} Let $\varphi: A_{}\#_{\theta}V\to A_{}\#_{\theta'} V$ be an alternative algebra homomorphism whose restriction on ${A}$ is the identity map. Then $\varphi$ is determined by two linear maps $r: V\rightarrow {A}$ and $s: V\rightarrow V$ such that $\varphi(a+x)=(a+r(x))+s(x)$ for all $a\in {A}$ and $x\in V$. In fact, we have to show $$\varphi((a+ x)(b+ y))=\varphi(a+ x)\cdot'\varphi(b+ y).$$ The left hand side is equal to \begin{eqnarray*} &&\varphi((a+ x)(b+ y))\\ &=&\varphi\left({ab}+ x\triangleleft b+a\triangleright y+{xy}+\theta(a,b)\right)\\ &=&{ab}+ r(x\triangleleft b)+r(a\triangleright y)+r({xy})+r\theta(a,b)\\ &&+ s(x\triangleleft b)+s(a\triangleright y)+s({xy})+s\theta(a,b), \end{eqnarray*} and the right hand side is equal to \begin{eqnarray*} &&\varphi(a+ x)\cdot' \varphi(b+ y)\\ &=&(a+r(x)+s(x))\cdot' (b+r(y)+s(y))\\ &=&(a+r(x))\cdot' (b+r(y))+ s(x)\triangleleft'(b+r(y))+(a+r(x))\triangleright's(y)\\ && +s(x)\cdot' s(y)+\theta'(a+r(x),b+r(y)). \end{eqnarray*} Thus $\varphi$ is a homomorphism of alternative algebras if and only if the above conditions hold. \end{proof}
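The last statement of the lemma can also be seen directly from the block form of $\varphi_{r,s}$ with respect to the decomposition $E=A\oplus V$ (a simple linear-algebra observation):
$$\varphi_{r,s}=\begin{pmatrix} \mathrm{id}_{A} & r\\ 0 & s \end{pmatrix},$$
which is bijective if and only if $s$ is bijective, since the diagonal block $\mathrm{id}_{A}$ is invertible and the matrix is upper triangular.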
The second case is when $\theta=0$; then we obtain the following type (a2) unified product. \begin{theorem}\cite{Z5} Let $A$ be an alternative algebra and $V$ be a vector space. An \textit{extending datum of $A$ through $V$} of type (a2) is a system $\Omega(A, V) = \bigl(\triangleleft, \, \triangleright, \, \leftharpoonup, \, \rightharpoonup, \, \sigma \bigr)$ consisting of linear maps \begin{eqnarray*} &&\rightharpoonup: V \otimes A \to A, \quad \leftharpoonup: A \otimes V \to A, \quad\triangleleft : V \otimes A \to V, \quad \triangleright: A \otimes V \to V,
\quad\sigma: V\otimes V \to A. \end{eqnarray*} Denote by $A_{}\# {}_{\sigma}V$ the vector space $E={A}\oplus V$ together with the multiplication \begin{align} (a+ x)(b+ y)=\big(ab+x\rightharpoonup b+a\leftharpoonup y+\sigma(x, y)\big)+\big( xy+x\triangleleft b+a\triangleright y\big). \end{align} Then $A_{}\# {}_{\sigma}V$ is an alternative algebra if and only if the following compatibility conditions hold for any $a, b, c\in A$, $x, y, z\in V$: \begin{enumerate} \item[(B1)] $x\rightharpoonup (ab)+a(x\rightharpoonup b)+a\leftharpoonup(x\triangleleft b)=(x\rightharpoonup a+a\leftharpoonup x)b+(x\triangleleft a+a\triangleright x)\rightharpoonup b $, \item[(B2)] $x\rightharpoonup (ab+ ba)=(x\rightharpoonup a)b+(x\triangleleft a)\rightharpoonup b+(x\rightharpoonup b)a+(x\triangleleft b)\rightharpoonup a $, \item[(B3)] $(ab) \leftharpoonup x+(a\leftharpoonup x)b+(a\triangleright x)\rightharpoonup b=a(b\leftharpoonup x+x\rightharpoonup b)+a\leftharpoonup(b\triangleright x+x\triangleleft b) $, \item[(B4)] $(ab +ba )\leftharpoonup x=a(b\leftharpoonup x)+a\leftharpoonup(b\triangleright x)+b(a\leftharpoonup x)+b\leftharpoonup(a\triangleright x)$, \item[(B5)] $(xy)\rightharpoonup a+(x\rightharpoonup a)\leftharpoonup y+\sigma(x\triangleleft a,y)+\sigma(x,y)a\\ =x\rightharpoonup(y\rightharpoonup a+a\leftharpoonup y)+\sigma(x,y\triangleleft a)+\sigma(x, a\triangleright y)$, \item[(B6)] $(xy+yx)\rightharpoonup a+(\sigma(x,y)+\sigma(y,x))a\\ =x\rightharpoonup(y\rightharpoonup a)+\sigma(x,y\triangleleft a)+y\rightharpoonup(x\rightharpoonup a)+\sigma(y,x\triangleleft a)$, \item[(B7)] $a\leftharpoonup (x y)+x\rightharpoonup(a\leftharpoonup y)+\sigma(x,a\triangleright y)+a\sigma(x,y)\\ =(a\leftharpoonup x+x\rightharpoonup a)\leftharpoonup y+\sigma(a\triangleright x,y)+\sigma(x\triangleleft a,y)$, \item[(B8)] $a\leftharpoonup (x y+ y x)+a(\sigma(x,y)+\sigma(y,x))\\ =(a\leftharpoonup x)\leftharpoonup y+\sigma(a\triangleright x,y)+(a\leftharpoonup y)\leftharpoonup x+\sigma(a\triangleright y,x)$, \item[(B9)] $x\triangleleft (ab)+a\triangleright(x\triangleleft b)=(x\triangleleft a+a\triangleright x)\triangleleft b $, \item[(B10)] $x\triangleleft (ab+ba)=(x\triangleleft a)\triangleleft b+(x\triangleleft b)\triangleleft a $, \item[(B11)] $(ab)\triangleright x+(a\triangleright x)\triangleleft b=a\triangleright(b\triangleright x+x\triangleleft b) $, \item[(B12)] $(ab+ba)\triangleright x=a\triangleright(b\triangleright x)+b\triangleright(a\triangleright x)$, \item[(B13)] $(xy)\triangleleft a+(x\triangleleft a)y+(x\rightharpoonup a)\triangleright y\\ =x(y\triangleleft a+a\triangleright y)+x\triangleleft(y\rightharpoonup a+a\leftharpoonup y)$, \item[(B14)] $(xy+yx)\triangleleft a=x(y\triangleleft a)+x\triangleleft(y\rightharpoonup a)+y(x\triangleleft a)+y\triangleleft(x\rightharpoonup a)$, \item[(B15)] $a\triangleright (xy)+x(a\triangleright y)+x\triangleleft(a\leftharpoonup y)\\ =(a\leftharpoonup x+x\rightharpoonup a)\triangleright y+(a\triangleright x+x\triangleleft a)y$, \item[(B16)] $a\triangleright (xy+ yx)=(a\leftharpoonup x)\triangleright y+(a\triangleright x)y+(a\leftharpoonup y)\triangleright x+(a\triangleright y)x$, \item[(B17)] $\sigma(x y, z)+(\sigma(y,x)+\sigma(x, y))\leftharpoonup z+\sigma(yx,z)\\ =\sigma(y,xz)+{\sigma}(x, y z)+y\rightharpoonup\sigma(x,z)+x \rightharpoonup \sigma(y, z),$ \item[(B18)] $\sigma(x y, z)+\sigma(x z, y)+\sigma(x, y)\leftharpoonup z+\sigma(x, z)\leftharpoonup y\\ ={\sigma}(x, z y)+{\sigma}(x, y z)+x \rightharpoonup (\sigma(z, y)+ \sigma(y, z))$, \item[(B19)] $(x y) z+(y 
x)z+(\sigma(x,y)+\sigma(y,x))\triangleright z\\ =x(y z)+y(x z)+y\triangleleft\sigma(x,z)+x\triangleleft\sigma(y,z),$ \item[(B20)] $(x y) z+(x z)y+\sigma(x,y)\triangleright z+\sigma(x,z)\triangleright y\\ =x(y z)+x(z y)+x\triangleleft(\sigma(z,y)+\sigma(y,z))$. \end{enumerate} \end{theorem}
\begin{theorem}\cite{Z5} Let $A$ be an alternative algebra and $E$ be a vector space containing $A$ as a subspace. If there is an alternative algebra structure on $E$ such that $A$ is an alternative subalgebra of $E$, then there exists an alternative algebraic extending structure $\Omega(A, V) = \bigl(\triangleleft, \, \triangleright, \, \leftharpoonup, \, \rightharpoonup, \, \sigma \bigr)$ of $A$ through $V$ such that there is an isomorphism of alternative algebras $E\cong A_{\sigma}\#_{}V$. \end{theorem}
\begin{lemma} Let $\Omega^{(2)}(A, V) = \bigl(\triangleright, \triangleleft, \leftharpoonup, \rightharpoonup, \sigma, \cdot \bigr)$ and $\Omega'^{(2)}(A, V) = \bigl(\triangleright', \triangleleft ', \leftharpoonup ', \rightharpoonup ', \sigma ', \cdot ' \bigr)$ be two algebraic extending structures of $A$ through $V$ and $A{}_{\sigma}\#_{}V$, $A{}_{\sigma'}\#_{} V$ the associated unified products. Then there exists a bijection between the set of all homomorphisms of algebras $\varphi: A{}_{\sigma}\#_{}V\to A{}_{\sigma'}\#_{} V$ which stabilize $A$ and the set of pairs $(r, s)$, where $r: V \to A$, $s: V \to V$ are linear maps satisfying the following compatibility conditions for any ${a} \in A$, $x$, $y \in V$: \begin{enumerate} \item[(M1)] $r(x \cdot y) = r(x)\cdot'r(y) + \sigma ' (s(x), s(y)) - \sigma(x, y) + r(x) \leftharpoonup' s(y) + s(x) \rightharpoonup' r(y)$, \item[(M2)] $s(x \cdot y) = r(x) \triangleright ' s(y) + s(x)\triangleleft ' r(y) + s(x) \cdot ' s(y)$,
\item[(M3)] $r(x\triangleleft {a}) = r(x)\cdot' {a} - x \rightharpoonup {a} + s(x) \rightharpoonup' {a}$,
\item[(M4)] $s(x\triangleleft {a}) = s(x)\triangleleft' {a}$,

\item[(M5)] $r({a} \triangleright x) = {a}\cdot'r(x) - {a}\leftharpoonup x + {a} \leftharpoonup' s(x)$,
\item[(M6)] $s({a}\triangleright x) = {a} \triangleright' s(x)$. \end{enumerate} Under the above bijection the homomorphism of algebras $\varphi =\varphi _{(r, s)}: A_{\sigma}\# {}_{}V \to A_{\sigma'}\# {}_{}V$ corresponding to $(r, s)$ is given for any $a\in A$ and $x \in V$ by: $$\varphi(a+ x) = (a + r(x))+ s(x).$$ Moreover, $\varphi = \varphi _{(r, s)}$ is an isomorphism if and only if $s: V \to V$ is a linear isomorphism. \end{lemma}
Let ${A}$ be an alternative algebra and $V$ be a vector space. Denote the set of all algebraic extending data of ${A}$ by $V$ of type (a2) by $\mathcal{A}^{(2)}({A},V)$. Two algebraic extending systems $\Omega^{(i)}({A}, V)$ and ${\Omega'^{(i)}}({A}, V)$ are called equivalent if there exists a pair of linear maps $(r,s)$ such that $\varphi_{r,s}$ is an isomorphism of the corresponding unified products. We denote this by $\Omega^{(i)}({A}, V)\equiv{\Omega'^{(i)}}({A}, V)$. From the above lemmas, we obtain the following result.
\begin{theorem}\label{thm3-1} Let ${A}$ be an alternative algebra, $E$ be a vector space containing ${A}$ as a subspace and $V$ be a complement of ${A}$ in $E$. Denote $\mathcal{HA}(V,{A}):=\mathcal{A}^{(1)}({A},V)\sqcup \mathcal{A}^{(2)}({A},V) /\equiv$. Then the map \begin{eqnarray} \notag&&\Psi: \mathcal{HA}(V,{A})\rightarrow Extd(E,{A}),\\ &&\overline{\Omega^{(1)}({A},V)}\mapsto A_{}\#_{\theta} V,\quad \overline{\Omega^{(2)}({A},V)}\mapsto A_{\sigma}\# {}_{} V \end{eqnarray} is bijective, where $\overline{\Omega^{(i)}({A}, V)}$ is the equivalence class of $\Omega^{(i)}({A}, V)$ under $\equiv$. \end{theorem}
\subsection{Extending structures for alternative coalgebras}
Next we consider the alternative coalgebra structures on $E=A^{P}\# {}^{Q}V$.
There are two cases for $(A,\Delta_A)$ to be an alternative coalgebra. The first case is when $Q=0$; then we obtain the following type (c1) unified coproduct for alternative coalgebras. \begin{lemma}\label{cor02co} Let $({A},\Delta_A)$ be an alternative coalgebra and $V$ be a vector space. An extending datum of ${A}$ by $V$ of type (c1) is $\Omega^c({A},V)=(\phi, {\psi},\rho,\gamma, P, \Delta_V)$ with linear maps \begin{eqnarray*} &&\phi: A \to V \otimes A,\quad \psi: A \to A\otimes V,\\ &&\rho: V \to A\otimes V,\quad \gamma: V \to V \otimes A,\\ && {P}: A\rightarrow {V}\otimes {V},\quad\Delta_V: V\rightarrow V\otimes V. \end{eqnarray*}
Denote by $A^{P}\# {}^{} V$ the vector space $E={A}\oplus V$ with the linear map $\Delta_E: E\rightarrow E\otimes E$ given by $$\Delta_{E}(a)=(\Delta_{A}+\phi+\psi+P)(a),\quad \Delta_{E}(x)=(\Delta_{V}+\rho+\gamma)(x), $$ that is, $$\Delta_{E}(a)= a{}_{1} \otimes a{}_{2}+ a{}_{(-1)} \otimes a{}_{(0)}+a{}_{(0)}\otimes a{}_{(1)}+a{}_{<1>}\otimes a{}_{<2>},$$ $$\Delta_{E}(x)= x{}_{1} \otimes x{}_{2}+ x{}_{[-1]} \otimes x{}_{[0]}+x{}_{[0]} \otimes x{}_{[1]}.$$ Then $A^{P}\# {}^{} V$ is an alternative coalgebra with the comultiplication given above if and only if the following compatibility conditions hold: \begin{enumerate}
\item[(C1)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})\\ =-\tau_{12}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(C2)] $P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\\ =-\tau_{12}\big(P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\big)$,
\item[(C3)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{12}\big(\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\big)$,
\item[(C4)] $\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\\ =-\tau_{12}\big(\phi(a{}_{(0)})\otimes a{}_{(1)}+\gamma(a{}_{<1>})\otimes a{}_{<2>}-a{}_{(-1)}\otimes\psi(a{}_{(0)})-a{}_{<1>}\otimes\rho(a{}_{<2>})\big)$,
\item[(C5)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})=-\tau_{12}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})\big)$,
\item[(C6)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{12}\big(\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\big)$,
\item[(C7)] $\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})=-\tau_{12}\big(\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})\big)$,
\item[(C8)] $\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\\ =-\tau_{12}\big(\gamma(x{}_{1})\otimes x{}_{2}+\phi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(C9)] $\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})\\ =-\tau_{23}\big(\phi(a{}_{1})\otimes a{}_{2}+\gamma(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes \Delta_{A}(a{}_{(0)})\big)$,
\item[(C10)] $P(a{}_{1})\otimes a{}_{2}+\Delta_{H}(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{(-1)}\otimes\phi(a{}_{(0)})-a{}_{<1>}\otimes\gamma(a{}_{<2>})\\ =-\tau_{23}\big(\phi(a{}_{(0)})\otimes a{}_{(1)}+\gamma(a{}_{<1>})\otimes a{}_{<2>}-a{}_{(-1)}\otimes\psi(a{}_{(0)})-a{}_{<1>}\otimes\rho(a{}_{<2>})\big)$,
\item[(C11)] $\Delta_{A}(a{}_{(0)})\otimes a{}_{(1)}-a{}_{1}\otimes\psi(a{}_{2})-a{}_{(0)}\otimes\rho(a{}_{(1)})\\ =-\tau_{23}\big(\psi(a{}_{1})\otimes a{}_{2}+\rho(a{}_{(-1)})\otimes a{}_{(0)}-a{}_{1}\otimes\phi(a{}_{2})-a{}_{(0)}\otimes\gamma(a{}_{(1)})\big)$,
\item[(C12)] $\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\\ =-\tau_{23}\big(\psi(a{}_{(0)})\otimes a{}_{(1)}+\rho(a{}_{<1>})\otimes a{}_{<2>}-a{}_{1}\otimes P(a{}_{2})-a{}_{(0)}\otimes\Delta_{H}(a{}_{(1)})\big)$,
\item[(C13)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})=-\tau_{23}\big(\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\big)$,
\item[(C14)] $\Delta_{H}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})-x{}_{[0]}\otimes\phi(x{}_{[1]})\\ =-\tau_{23}\big(\phi(x{}_{[-1]})\otimes x{}_{[0]}+\gamma(x{}_{1})\otimes x{}_{2}-x{}_{1}\otimes\rho(x{}_{2})-x{}_{[0]}\otimes\psi(x{}_{[1]})\big)$,
\item[(C15)] $\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})=-\tau_{23}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})\big)$,
\item[(C16)] $\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\\ =-\tau_{23}\big(\rho(x{}_{1})\otimes x{}_{2}+\psi(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes\Delta_{H}(x{}_{[0]})\big)$,
\item[(C17)] $\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\\ =-\tau_{12}\big(\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\big)$,
\item[(C18)] $\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\\ =-\tau_{23}\big(\Delta_H(a{}_{<1>})\otimes a{}_{<2>}+P(a{}_{(0)})\otimes a{}_{(1)}-a{}_{(-1)}\otimes P(a{}_{(0)})-a{}_{<1>}\otimes \Delta_H(a{}_{<2>})\big)$,
\item[(C19)] $\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\\ =-\tau_{12}\big(\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\big)$,
\item[(C20)] $\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\\ =-\tau_{23}\big(\Delta_H(x{}_{1})\otimes x{}_{2}+ P(x{}_{[-1]}) \otimes x{}_{[0]}-x_1\otimes \Delta_H(x_2)-x{}_{[0]}\otimes P(x{}_{[1]})\big)$. \end{enumerate} \end{lemma} Denote the set of all coalgebraic extending data of ${A}$ by $V$ of type (c1) by $\mathcal{C}^{(1)}({A},V)$.
\begin{lemma}\label{lem:33-3} Let $({A},\Delta_A)$ be an alternative coalgebra and $E$ be a vector space containing ${A}$ as a subspace. Suppose that there is an alternative coalgebra structure $(E,\Delta_E)$ on $E$ such that $p: E\to {A}$ is an alternative coalgebra homomorphism. Then there exists an alternative coalgebraic extending system $\Omega^c({A}, V)$ of $({A},\Delta_A)$ by $V$ such that $(E,\Delta_E)\cong A^{P}\# {}^{} V$. \end{lemma}
\begin{proof} Let $p: E\to {A}$ and $\pi: E\to V$ be the projection maps, where $V=\ker({p})$. Then the extending datum of $({A},\Delta_A)$ by $V$ is defined as follows: \begin{eqnarray*} &&{\phi}: A\rightarrow V\otimes {A},~~~~{\phi}(a)=(\pi\otimes {p})\Delta_E(a),\\ &&{\psi}: A\rightarrow A\otimes V,~~~~{\psi}(a)=({p}\otimes \pi)\Delta_E(a),\\ &&{\rho}: V\rightarrow A\otimes V,~~~~{\rho}(x)=({p}\otimes \pi)\Delta_E(x),\\ &&{\gamma}: V\rightarrow V\otimes {A},~~~~{\gamma}(x)=(\pi\otimes {p})\Delta_E(x),\\ &&\Delta_V: V\rightarrow V\otimes V,~~~~\Delta_V(x)=(\pi\otimes \pi)\Delta_E(x),\\ &&Q: V\rightarrow {A}\otimes {A},~~~~Q(x)=({p}\otimes {p})\Delta_E(x),\\ &&P: A\rightarrow {V}\otimes {V},~~~~P(a)=({\pi}\otimes {\pi})\Delta_E(a). \end{eqnarray*} One checks that $\varphi: A^{P}\# {}^{} V\to E$ given by $\varphi(a+x)=a+x$ for all $a\in A, x\in V$ is an alternative coalgebra isomorphism. \end{proof}
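To see where this decomposition comes from, one can use that $\mathrm{id}_E=p+\pi$; applying $(p+\pi)\otimes(p+\pi)$ to $\Delta_E$ gives, for $a\in A$ and $x\in V$,
$$\Delta_{E}(a)=\Delta_{A}(a)+\psi(a)+\phi(a)+P(a),\qquad \Delta_{E}(x)=Q(x)+\rho(x)+\gamma(x)+\Delta_{V}(x),$$
where $(p\otimes p)\Delta_E(a)=\Delta_A(p(a))=\Delta_A(a)$ because $p$ is an alternative coalgebra homomorphism. For the same reason $Q(x)=(p\otimes p)\Delta_E(x)=\Delta_A(p(x))=0$ for $x\in V=\ker(p)$, so the extending datum obtained above is indeed of type (c1) and $\Delta_E$ coincides with the comultiplication of $A^{P}\# {}^{} V$ under $\varphi$.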
\begin{lemma}\label{lem-c1} Let $\Omega^{(1)}({A}, V)=(\phi, {\psi},\rho,\gamma, P, \Delta_V)$ and ${\Omega'^{(1)}}({A}, V)=(\phi', {\psi'},\rho',\gamma', P', \Delta'_V)$ be two alternative coalgebraic extending datums of $({A},\Delta_A)$ by $V$. Then there exists a bijection between the set of alternative coalgebra homomorphisms $\varphi: A^{P}\# {}^{} V\rightarrow A^{P'}\# {}^{} V$ whose restriction on ${A}$ is the identity map and the set of pairs $(r,s)$, where $r:V\rightarrow {A}$ and $s:V\rightarrow V$ are two linear maps satisfying \begin{eqnarray} \label{comorph11}&&P'(a)=s(a{}_{<1>})\otimes s(a{}_{<2>}),\\ \label{comorph121}&&\phi'(a)={s}(a{}_{(-1)})\otimes a{}_{(0)}+s(a{}_{<1>})\otimes r(a{}_{<2>}),\\ \label{comorph122}&&\psi'(a)=a{}_{(0)}\otimes {s}(a{}_{(1)}) +r(a{}_{<1>})\otimes s(a{}_{<2>}),\\ \label{comorph13}&&\Delta'_A(a)=\Delta_A(a)+{r}(a{}_{(-1)})\otimes a{}_{(0)}+a{}_{(0)}\otimes {r}(a{}_{(1)})+r(a{}_{<1>})\otimes r(a{}_{<2>})\\ \label{comorph21}&&\Delta_V'({s}(x))=({s}\otimes {s})\Delta_V(x),\\ \label{comorph221}&&{\rho}'({s}(x))+\psi'(r(x))=r(x{}_{1})\otimes s(x{}_{2})+x{}_{[-1]}\otimes s(x{}_{[0]}),\\ \label{comorph222}&&{\gamma}'({s}(x))+\phi'(r(x))=s(x{}_{1})\otimes r(x{}_{2})+s(x{}_{[0]})\otimes x{}_{[1]},\\ \label{comorph23}&&\Delta'_A({r}(x))+P'(r(x))=r(x{}_{1})\otimes r(x{}_{2})+x{}_{[-1]}\otimes r(x{}_{[0]})+r(x{}_{[0]})\otimes x{}_{[1]}. \end{eqnarray}
Under the above bijection the alternative coalgebra homomorphism $\varphi=\varphi_{r,s}: A^{P}\# {}^{} V\rightarrow A^{P'}\# {}^{} V$ corresponding to $(r,s)$ is given by $\varphi(a+x)=(a+r(x))+s(x)$ for all $a\in {A}$ and $x\in V$. Moreover, $\varphi=\varphi_{r,s}$ is an isomorphism if and only if $s: V\rightarrow V$ is a linear isomorphism. \end{lemma} \begin{proof} Let $\varphi: A^{P}\# {}^{} V\rightarrow A^{P'}\# {}^{} V$ be an alternative coalgebra homomorphism whose restriction on ${A}$ is the identity map. Then $\varphi$ is determined by two linear maps $r: V\rightarrow {A}$ and $s: V\rightarrow V$ such that $\varphi(a+x)=(a+r(x))+s(x)$ for all $a\in {A}$ and $x\in V$. We will prove that $\varphi$ is a homomorphism of alternative coalgebras if and only if the above conditions hold. First we consider the condition $\Delta'_E\varphi(a)=(\varphi\otimes \varphi)\Delta_E(a)$ for all $a\in {A}$. We have \begin{eqnarray*} \Delta'_E\varphi(a)&=&\Delta'_E(a)=\Delta'_A(a)+\phi'(a)+\psi'(a)+P'(a), \end{eqnarray*} and \begin{eqnarray*} &&(\varphi\otimes \varphi)\Delta_E(a)\\ &=&(\varphi\otimes \varphi)\left(\Delta_A(a)+\phi(a)+\psi(a)+P(a)\right)\\ &=&\Delta_A(a)+{r}(a{}_{(-1)})\otimes a{}_{(0)}+{s}(a{}_{(-1)})\otimes a{}_{(0)}+a{}_{(0)}\otimes {r}(a{}_{(1)}) +a{}_{(0)}\otimes {s}(a{}_{(1)})\\ &&+r(a{}_{<1>})\otimes r(a{}_{<2>})+r(a{}_{<1>})\otimes s(a{}_{<2>})+s(a{}_{<1>})\otimes r(a{}_{<2>})+s(a{}_{<1>})\otimes s(a{}_{<2>}). \end{eqnarray*} Thus we obtain that $\Delta'_E\varphi(a)=(\varphi\otimes \varphi)\Delta_E(a)$ if and only if the conditions \eqref{comorph11}, \eqref{comorph121}, \eqref{comorph122} and \eqref{comorph13} hold. Next we consider the condition $\Delta'_E\varphi(x)=(\varphi\otimes \varphi)\Delta_E(x)$ for all $x\in V$. We have \begin{eqnarray*} \Delta'_E\varphi(x)&=&\Delta'_E({r}(x)+{s}(x))=\Delta'_E({r}(x))+\Delta'_E({s}(x))\\ &=&\Delta'_A({r}(x))+\phi'(r(x))+\psi'(r(x))+P'(r(x))+\Delta'_V({s}(x))+{\rho}'({s}(x))+{\gamma}'({s}(x)), \end{eqnarray*} and \begin{eqnarray*} &&(\varphi\otimes \varphi)\Delta_E(x)\\ &=&(\varphi\otimes \varphi)(\Delta_V(x)+{\rho}(x)+{\gamma}(x))\\ &=&(\varphi\otimes \varphi)(x{}_{1}\otimes x{}_{2}+x{}_{[-1]}\otimes x{}_{[0]}+x{}_{[0]}\otimes x{}_{[1]})\\ &=&r(x{}_{1})\otimes r(x{}_{2})+r(x{}_{1})\otimes s(x{}_{2})+s(x{}_{1})\otimes r(x{}_{2})+s(x{}_{1})\otimes s(x{}_{2})\\ &&+x{}_{[-1]}\otimes r(x{}_{[0]})+x{}_{[-1]}\otimes s(x{}_{[0]})+r(x{}_{[0]})\otimes x{}_{[1]}+s(x{}_{[0]})\otimes x{}_{[1]}. \end{eqnarray*} Thus we obtain that $\Delta'_E\varphi(x)=(\varphi\otimes \varphi)\Delta_E(x)$ if and only if the conditions \eqref{comorph21}, \eqref{comorph221}, \eqref{comorph222} and \eqref{comorph23} hold. By definition, we obtain that $\varphi=\varphi_{r,s}$ is an isomorphism if and only if $s: V\rightarrow V$ is a linear isomorphism. \end{proof}
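For instance, one checks directly from \eqref{comorph11}--\eqref{comorph23} that for $r=0$ and $s=\mathrm{id}_V$ these conditions hold if and only if $P'=P$, $\phi'=\phi$, $\psi'=\psi$, $\rho'=\rho$, $\gamma'=\gamma$ and $\Delta'_V=\Delta_V$; that is, the identity map $\varphi_{0,\mathrm{id}}$ of ${A}\oplus V$ is an isomorphism of alternative coalgebras precisely when the two extending datums coincide.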
The second case is $\phi=0$ and $\psi=0$; in this case we obtain the following type (c2) unified coproduct for alternative coalgebras. \begin{lemma}\label{cor02} Let $({A},\Delta_A)$ be an alternative coalgebra and $V$ be a vector space. An extending datum of $({A},\Delta_A)$ by $V$ of type (c2) is $\Omega^{(2)}({A},V)=(\rho, \gamma, {Q}, \Delta_V)$ with linear maps \begin{eqnarray*} &&\rho: V \to A\otimes V,\quad \gamma: V \to V \otimes A,\quad \Delta_{V}: V \to V\otimes V,\quad Q: V \to A\otimes A. \end{eqnarray*}
Denote by $A^{}\# {}^{Q} V$ the vector space $E={A}\oplus V$ with the comultiplication $\Delta_E: E\rightarrow E\otimes E$ given by \begin{eqnarray} \Delta_{E}(a)&=&\Delta_{A}(a),\quad \Delta_{E}(x)=(\Delta_{V}+\rho+\gamma+Q)(x), \\ \Delta_{E}(a)&=& a{}_{1} \otimes a{}_{2},\quad \Delta_{E}(x)= x{}_{1} \otimes x{}_{2}+ x{}_{[-1]} \otimes x{}_{[0]}+x{}_{[0]} \otimes x{}_{[1]}+x{}_{\{1\}}\otimes x{}_{\{2\}}. \end{eqnarray} Then $A^{}\# {}^{Q} V$ is an alternative coalgebra with the comultiplication given above if and only if the following compatibility conditions hold:
\begin{enumerate} \item[(D1)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})=-\tau_{12}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})\big)$,
\item[(D2)] $\Delta_{V}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})=-\tau_{12}\big(\Delta_{V}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})\big)$,
\item[(D3)] $Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})\\ =-\tau_{12}\big(Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})\big)$,
\item[(D4)] $\rho(x{}_{1})\otimes x{}_{2}-x{}_{[-1]}\otimes\Delta_{V}(x{}_{[0]})=-\tau_{12}\big(\gamma(x{}_{1})\otimes x{}_{2}-x{}_{1}\otimes\rho(x{}_{2})\big)$,
\item[(D5)] $\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\\ =-\tau_{23}\big(\gamma(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes Q(x{}_{2})-x{}_{[0]}\otimes\Delta_{A}(x{}_{[1]})\big)$,
\item[(D6)] $\Delta_{V}(x{}_{[0]})\otimes x{}_{[1]}-x{}_{1}\otimes \gamma(x{}_{2})=-\tau_{23}\big(\gamma(x{}_{1})\otimes x{}_{2}-x{}_{1}\otimes\rho(x{}_{2})\big)$,
\item[(D7)] $Q(x{}_{1})\otimes x{}_{2}+\Delta_{A}(x{}_{[-1]})\otimes x{}_{[0]}-x{}_{[-1]}\otimes \rho(x{}_{[0]})=-\tau_{23}\big(\rho(x{}_{[0]})\otimes x{}_{[1]}-x{}_{[-1]}\otimes\gamma(x{}_{[0]})\big)$,
\item[(D8)] $\rho(x{}_{1})\otimes x{}_{2}-x{}_{[-1]}\otimes\Delta_{V}(x{}_{[0]})=-\tau_{23}\big(\rho(x{}_{1})\otimes x{}_{2}-x{}_{[-1]}\otimes\Delta_{V}(x{}_{[0]})\big)$,
\item[(D9)] $ \Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\\ =-\tau_{12}\big(\Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\big)$,
\item[(D10)] $ \Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\\ =-\tau_{23}\big(\Delta_A(x{}_{\{1\}})\otimes x{}_{\{2\}}+Q(x{}_{[0]})\otimes x{}_{[1]}-x{}_{\{1\}}\otimes \Delta_A(x{}_{\{2\}})-x{}_{[-1]}\otimes Q(x{}_{[0]})\big)$,
\item[(D11)] $\Delta_V(x{}_{1})\otimes x{}_{2}-x_1\otimes \Delta_V(x_2)=-\tau_{12}\big(\Delta_V(x{}_{1})\otimes x{}_{2}-x_1\otimes \Delta_V(x_2)\big)$,
\item[(D12)] $\Delta_V(x{}_{1})\otimes x{}_{2}-x_1\otimes \Delta_V(x_2)=-\tau_{23}\big(\Delta_V(x{}_{1})\otimes x{}_{2}-x_1\otimes \Delta_V(x_2)\big)$. \end{enumerate} \end{lemma} Note that in this case $(V,\Delta_V)$ is an alternative coalgebra.
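For instance, if $\rho=\gamma=Q=0$, then every term appearing in (D1)--(D10) vanishes, only the conditions (D11) and (D12) on $\Delta_V$ remain, and $A^{}\# {}^{Q} V$ is simply the direct sum coalgebra ${A}\oplus V$.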
Denote the set of all alternative coalgebraic extending datums of ${A}$ by $V$ of type (c2) by $\mathcal{C}^{(2)}({A},V)$.
Similarly to the alternative algebra case, one shows that any alternative coalgebra structure on $E$ containing ${A}$ as an alternative subcoalgebra is isomorphic to such a unified coproduct. \begin{lemma}\label{lem:33-4} Let $({A},\Delta_A)$ be an alternative coalgebra and $E$ be a vector space containing ${A}$ as a subspace. Suppose that there is an alternative coalgebra structure $(E,\Delta_E)$ on $E$ such that $({A},\Delta_A)$ is an alternative subcoalgebra of $E$. Then there exists an alternative coalgebraic extending system $\Omega^{(2)}({A}, V)$ of $({A},\Delta_A)$ by $V$ such that $(E,\Delta_E)\cong A^{}\# {}^{Q} V$. \end{lemma}
\begin{proof} Let $V$ be a complement of ${A}$ in $E$, and let $p: E\to {A}$ and $\pi: E\to V$ be the projection maps associated with the decomposition $E={A}\oplus V$. Then the extending datum of $({A},\Delta_A)$ by $V$ is defined as follows: \begin{eqnarray*} &&{\rho}: V\rightarrow A\otimes V,~~~~{\rho}(x)=(p\otimes {\pi})\Delta_E(x),\\ &&{\gamma}: V\rightarrow V\otimes {A},~~~~{\gamma}(x)=(\pi\otimes {p})\Delta_E(x),\\ &&\Delta_V: V\rightarrow V\otimes V,~~~~\Delta_V(x)=(\pi\otimes \pi)\Delta_E(x),\\ &&Q: V\rightarrow {A}\otimes {A},~~~~Q(x)=({p}\otimes {p})\Delta_E(x). \end{eqnarray*} One checks that $\varphi: A^{}\# {}^{Q} V\to E$ given by $\varphi(a+x)=a+x$ for all $a\in A, x\in V$ is an alternative coalgebra isomorphism. \end{proof}
\begin{lemma}\label{lem-c2} Let $\Omega^{(2)}({A}, V)=(\rho, \gamma, {Q}, \Delta_V)$ and ${\Omega'^{(2)}}({A}, V)=(\rho', \gamma', {Q'}, \Delta'_V)$ be two alternative coalgebraic extending datums of $({A},\Delta_A)$ by $V$. Then there exists a bijection between the set of alternative coalgebra homomorphisms $\varphi: A \# {}^{Q} V\rightarrow A \# {}^{Q'} V$ whose restriction on ${A}$ is the identity map and the set of pairs $(r,s)$, where $r:V\rightarrow {A}$ and $s:V\rightarrow V$ are two linear maps satisfying \begin{eqnarray} \label{comorph1}&&{\rho}'({s}(x))=r(x{}_{1})\otimes s(x{}_{2})+x{}_{[-1]}\otimes s(x{}_{[0]}),\\ \label{comorph2}&&{\gamma}'({s}(x))=s(x{}_{1})\otimes r(x{}_{2})+s(x{}_{[0]})\otimes x{}_{[1]},\\ \label{comorph3}&&\Delta_V'({s}(x))=({s}\otimes {s})\Delta_V(x)\\%+s(x_{(0)})\otimes r(x_{(1)})- r(x_{(1)})\otimes s(x_{(0)}),\\ \label{comorph4}&&\Delta'_A({r}(x))+{Q'}({s}(x))=r(x{}_{1})\otimes r(x{}_{2})+x{}_{[-1]}\otimes r(x{}_{[0]})+r(x{}_{[0]})\otimes x{}_{[1]}+{Q}(x). \end{eqnarray}
Under the above bijection the alternative coalgebra homomorphism $\varphi=\varphi_{r,s}: A^{ }\# {}^{Q} V\rightarrow A^{ }\# {}^{Q'} V$ corresponding to $(r,s)$ is given by $\varphi(a+x)=(a+r(x))+s(x)$ for all $a\in {A}$ and $x\in V$. Moreover, $\varphi=\varphi_{r,s}$ is an isomorphism if and only if $s: V\rightarrow V$ is a linear isomorphism. \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{lem-c1}. Let $\varphi: A^{ }\# {}^{Q} V\rightarrow A^{}\# {}^{Q'} V$ be an alternative coalgebra homomorphism whose restriction on ${A}$ is the identity map. First it is easy to see that $\Delta'_E\varphi(a)=(\varphi\otimes \varphi)\Delta_E(a)$ for all $a\in {A}$. Next we consider the condition $\Delta'_E\varphi(x)=(\varphi\otimes \varphi)\Delta_E(x)$ for all $x\in V$. We have \begin{eqnarray*} \Delta'_E\varphi(x)&=&\Delta'_E({r}(x)+{s}(x))=\Delta'_E({r}(x))+\Delta'_E({s}(x))\\ &=&\Delta'_A({r}(x))+\Delta'_V({s}(x))+{\rho}'({s}(x))+{\gamma}'({s}(x))+{Q}'({s}(x)), \end{eqnarray*} and \begin{eqnarray*} &&(\varphi\otimes \varphi)\Delta_E(x)\\ &=&(\varphi\otimes \varphi)(\Delta_V(x)+{\rho}(x)+{\gamma}(x)+{Q}(x))\\ &=&(\varphi\otimes \varphi)(x{}_{1}\otimes x{}_{2}+x{}_{[-1]}\otimes x{}_{[0]}+x{}_{[0]}\otimes x{}_{[1]}+{Q}(x))\\ &=&r(x{}_{1})\otimes r(x{}_{2})+r(x{}_{1})\otimes s(x{}_{2})+s(x{}_{1})\otimes r(x{}_{2})+s(x{}_{1})\otimes s(x{}_{2})\\ &&+x{}_{[-1]}\otimes r(x{}_{[0]})+x{}_{[-1]}\otimes s(x{}_{[0]})+r(x{}_{[0]})\otimes x{}_{[1]}+s(x{}_{[0]})\otimes x{}_{[1]}+{Q}(x). \end{eqnarray*} Thus we obtain that $\Delta'_E\varphi(x)=(\varphi\otimes \varphi)\Delta_E(x)$ if and only if the conditions \eqref{comorph1}, \eqref{comorph2}, \eqref{comorph3} and \eqref{comorph4} hold. By definition, we obtain that $\varphi=\varphi_{r,s}$ is an isomorphism if and only if $s: V\rightarrow V$ is a linear isomorphism. \end{proof}
Let $({A},\Delta_A)$ be an alternative coalgebra and $V$ be a vector space. Two alternative coalgebraic extending systems $\Omega^{(i)}({A}, V)$ and ${\Omega'^{(i)}}({A}, V)$ are called equivalent if there exists a pair $(r, s)$ as above such that $\varphi_{r,s}$ is an isomorphism. We denote it by $\Omega^{(i)}({A}, V)\equiv{\Omega'^{(i)}}({A}, V)$. From the above lemmas, we obtain the following result. \begin{theorem}\label{thm3-2} Let $({A},\Delta_A)$ be an alternative coalgebra, $E$ be a vector space containing ${A}$ as a subspace and $V$ be an ${A}$-complement in $E$. Denote $\mathcal{HC}(V,{A}):=\mathcal{C}^{(1)}({A},V)\sqcup\mathcal{C}^{(2)}({A},V) /\equiv$. Then the map \begin{eqnarray*} &&\Psi: \mathcal{HC}(V,{A})\rightarrow CExtd(E,{A}),\\ &&\overline{\Omega^{(1)}({A},V)}\mapsto A^{P}\# {}^{} V,
\quad \overline{\Omega^{(2)}({A},V)}\mapsto A^{}\# {}^{Q} V \end{eqnarray*} is bijective, where $\overline{\Omega^{(i)}({A},V)}$ is the equivalence class of $\Omega^{(i)}({A}, V)$ under $\equiv$. \end{theorem}
\subsection{Extending structures for alternative bialgebras}
Let $(A,\cdot,\Delta_A)$ be an alternative bialgebra. From (CBB1) and (CBB2) we have the following two cases.
The first case is that we assume $Q=0$ and $\rightharpoonup, \leftharpoonup$ to be trivial. Then by the above Theorem \ref{main2}, we obtain the following result.
\begin{theorem}\label{thm-41} Let $(A,\cdot,\Delta_A)$ be an alternative bialgebra and $V$ be a vector space. An extending datum of ${A}$ by $V$ of type (I) is $\Omega^{(1)}({A},V)=(\triangleright, \triangleleft, \phi, \psi,\rho,\gamma,\theta, P, \cdot_V, \Delta_V)$ consisting of linear maps \begin{eqnarray*} \triangleright: A\otimes {V}\rightarrow V,~~~~\triangleleft:V\otimes A\rightarrow V,~~~~\theta: A\otimes A \rightarrow {V},~~~~\cdot_V:V\otimes V \rightarrow V,\\
\phi :A \to V\otimes A, \quad{\psi}: A\to A\otimes V,~~~~{P}: A\rightarrow {V}\otimes {V},~~~~\Delta_V: V\rightarrow V\otimes V,\\
\rho:V\to A \otimes V,~~~~\gamma :V\to V \otimes A. \end{eqnarray*} Then the unified product $A^{P}_{}\# {}^{}_{\theta}\, V$ with bracket \begin{align} (a+ x) (b+ y):=ab+( xy+ a\triangleright y+x\triangleleft b+\theta(a, b)) \end{align} and comultiplication \begin{eqnarray} \Delta_E(a)=\Delta_A(a)+{\phi}(a)+{\psi}(a)+P(a),\quad \Delta_E(x)=\Delta_V(x)+{\rho}(x)+{\gamma}(x) \end{eqnarray} forms an alternative bialgebra if and only if $A_{}\# {}_{\theta} V$ forms an alternative algebra, $A^{P}\# {}^{} \, V$ forms an alternative coalgebra and the following conditions are satisfied: \begin{enumerate} \item[(E1)] $\phi(ab)+\gamma(\theta(a, b))\\
=-a{}_{(1)}\otimes b a{}_{(0)}+b{}_{(-1)}\otimes a b{}_{(0)}+b{}_{(-1)}\otimes b{}_{(0)} a+(a{}_{(-1)}\triangleleft b)\otimes a{}_{(0)}\\ +(a{}_{(1)}\triangleleft b)\otimes a{}_{(0)}-(a\triangleright b{}_{(-1)})\otimes b{}_{(0)}+\theta(a{}_{1}, b)\otimes a{}_{2}+\theta(a{}_{2}, b)\otimes a{}_{1}-\theta(a, b{}_{1})\otimes b{}_{2}$,
\item[(E2)] $\psi(a b)+\rho(\theta(a, b))\\ =a{}_{(0)} b\otimes a{}_{(1)}+a{}_{(0)} b\otimes a{}_{(-1)}-a b{}_{(0)}\otimes b{}_{(1)}-a{}_{(0)}\otimes (b\triangleright a{}_{(-1)})+b{}_{(0)}\otimes(a\triangleright b{}_{(1)})\\ +b{}_{(0)}\otimes(b{}_{(1)}\triangleleft a)+b{}_{1}\otimes\theta(a, b{}_{2})+b{}_{1}\otimes\theta(b{}_{2}, a)-a{}_{2}\otimes\theta(b, a{}_{1})$,
\item[(E3)] $\rho(x y)=-x_{[1]} \otimes y x_{[0]} +y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes y{}_{[0]} x$,
\item[(E4)] $\gamma(x y)=x{}_{[0]} y\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[-1]}-xy_{[0]}\otimes y_{[1]}$,
\item[(E5)] $\Delta_{V}(a \triangleright y)$\\ $=(a{}_{(0)}\triangleright y)\otimes a{}_{(1)}+(a{}_{(0)}\triangleright y)\otimes a{}_{(-1)}-a{}_{(1)}\otimes(y\triangleleft a{}_{(0)})+y{}_{1}\otimes(a\triangleright y{}_{2})\\ +y{}_{1}\otimes(y{}_{2}\triangleleft a)-\left(a \triangleright y_{1}\right) \otimes y_{2}+y{}_{[0]}\otimes\theta(a,y{}_{[1]})+y{}_{[0]}\otimes\theta(y{}_{[1]},a)\\ -\theta(a,y{}_{[-1]})\otimes y{}_{[0]}+a{}_{<1>} y\otimes a{}_{<2>}+a{}_{<2>} y\otimes a{}_{<1>}-a{}_{<2>}\otimes y a{}_{<1>}$,
\item[(E6)] $\Delta_{V}(x \triangleleft b)$\\ $=(x{}_{1}\triangleleft b)\otimes x{}_{2}+(x{}_{2}\triangleleft b)\otimes x{}_{1}-x{}_{2}\otimes(b\triangleright x{}_{1})+b{}_{(-1)}\otimes(x\triangleleft b{}_{(0)})\\ +b{}_{(-1)}\otimes(b{}_{(0)}\triangleright x)-\left(x\triangleleft b_{(0)}\right) \otimes b_{(1)}+\theta(x{}_{[-1]},b)\otimes x{}_{[0]}+\theta(x{}_{[1]},b)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes\theta(b,x{}_{[-1]})+b{}_{<1>}\otimes x b{}_{<2>}+b{}_{<1>}\otimes b{}_{<2>} x-x b{}_{<1>}\otimes b{}_{<2>}$,
\item[(E7)]$\Delta_{V}(\theta(a,b))+P(ab)$\\ $=\theta(a{}_{(0)},b)\otimes a{}_{(1)}+\theta(a{}_{(0)},b)\otimes a{}_{(-1)}-\theta(a,b{}_{(0)})\otimes b{}_{(1)}+b{}_{(-1)}\otimes\theta(a,b{}_{(0)})\\ +b{}_{(-1)}\otimes\theta(b{}_{(0)},a)-a{}_{(1)}\otimes\theta(b,a{}_{(0)})+(a{}_{<1>}\triangleleft b)\otimes a{}_{<2>}+(a{}_{<2>}\triangleleft b)\otimes a{}_{<1>}\\ -(a\triangleright b{}_{<1>})\otimes b{}_{<2>}+b{}_{<1>}\otimes(a\triangleright b{}_{<2>})+b{}_{<1>}\otimes(b{}_{<2>}\triangleleft a)-a{}_{<2>}\otimes(b\triangleright a{}_{<1>})$,
\item[(E8)]
$\gamma(x\triangleleft b)=(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}-x{}_{[0]}\otimes b x{}_{[-1]}-\left(x\triangleleft b_{1}\right) \otimes b_{2}-x b_{(-1)} \otimes b_{(0)}$,
\item[(E9)] $\rho(a \triangleright y)=-a{}_{2}\otimes(y\triangleleft a{}_{1})-a{}_{(0)}\otimes y a{}_{(-1)}+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)-a y_{[-1]} \otimes y_{[0]}$,
\item[(E10)] $\rho(x\triangleleft b)=x{}_{[-1]} b\otimes x{}_{[0]}+x{}_{[1]} b\otimes x{}_{[0]}-x{}_{[1]}\otimes(b\triangleright x{}_{[0]})\\ +b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{(0)}\otimes x b{}_{(1)}+b{}_{1}\otimes(b{}_{2}\triangleright x)+b{}_{(0)}\otimes b{}_{(1)} x$,
\item[(E11)] $\gamma(a \triangleright y)=(a{}_{1}\triangleright y)\otimes a{}_{2}+a{}_{(-1)} y\otimes a{}_{(0)}+(a{}_{2}\triangleright y)\otimes a{}_{1}\\ +a{}_{(1)} y\otimes a{}_{(0)}+y{}_{[0]}\otimes a y{}_{[1]}+y{}_{[0]}\otimes y{}_{[1]} a-\left(a\triangleright y_{[0]}\right)\otimes y_{[1]}$,
\item[(E12)] $\phi(b a)+\gamma(\theta(b,a))+\tau\psi(b a)+\tau\rho(\theta(b,a))\\ =a{}_{(-1)}\otimes b a{}_{(0)}+(b\triangleright a{}_{(1)})\otimes a{}_{(0)}+(b{}_{(-1)}\triangleleft a)\otimes b{}_{(0)}+b{}_{(1)}\otimes b{}_{(0)} a\\ +\theta(b,a{}_{2})\otimes a{}_{1}+\theta(b{}_{1},a)\otimes b{}_{2}$,
\item[(E13)] $\psi(b a)+\rho(\theta(b,a))+\tau\phi(b a)+\tau\gamma(\theta(b,a))\\ =a{}_{(0)}\otimes(b\triangleright a{}_{(1)})+b a{}_{(0)} \otimes a{}_{(-1)}+ b{}_{(0)} a\otimes b{}_{(1)}+b{}_{(0)}\otimes(b{}_{(-1)}\triangleleft a)\\ +a{}_{1}\otimes\theta(b,a{}_{2})+b{}_{2}\otimes\theta(b{}_{1},a)$,
\item[(E14)] $\rho(y x)+\tau\gamma(y x)=x{}_{[-1]}\otimes y x{}_{[0]}+y{}_{[1]}\otimes y{}_{[0]} x$,
\item[(E15)] $\gamma(y x)+\tau\rho(y x)=y x{}_{[0]}\otimes x{}_{[-1]}+y{}_{[0]} x\otimes y{}_{[1]}$,
\item[(E16)] $\Delta_{V}(y\triangleleft a)+\tau\Delta_{V}(y\triangleleft a)\\ =a{}_{(-1)}\otimes(y\triangleleft a{}_{(0)})+(y\triangleleft a{}_{(0)})\otimes a{}_{(-1)}+(y{}_{1}\triangleleft a)\otimes y{}_{2}+y{}_{2}\otimes(y{}_{1}\triangleleft a)\\ +a{}_{<1>}\otimes y a{}_{<2>}+y a{}_{<2>}\otimes a{}_{<1>}+\theta(y{}_{[-1]},a)\otimes y{}_{[0]}+y{}_{[0]}\otimes\theta(y{}_{[-1]},a)$,
\item[(E17)] $\Delta_{V}(b \triangleright x)+\tau\Delta_{V}(b \triangleright x)\\ =x{}_{1}\otimes(b\triangleright x{}_{2})+(b\triangleright x{}_{2})\otimes x{}_{1}+(b{}_{(0)}\triangleright x)\otimes b{}_{(1)}+b{}_{(1)}\otimes (b{}_{(0)}\triangleright x)\\ +x{}_{[0]}\otimes\theta(b,x{}_{[1]})+\theta(b,x{}_{[1]})\otimes x{}_{[0]}+b{}_{<1>} x\otimes b{}_{<2>}+b{}_{<2>}\otimes b{}_{<1>} x$,
\item[(E18)] $\gamma(y\triangleleft a)+\tau\rho(y\triangleleft a)=(y\triangleleft a{}_{2})\otimes a{}_{1}+y a{}_{(1)}\otimes a{}_{(0)}+y{}_{[0]}\otimes y{}_{[-1]} a+(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}$,
\item[(E19)] $\gamma(b\triangleright x)+\tau\rho(b\triangleright x)=x{}_{[0]}\otimes b x{}_{[1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\triangleright x)\otimes b{}_{2}+b{}_{(-1)} x\otimes b{}_{(0)}$,
\item[(E20)] $\rho(y\triangleleft a)+\tau\gamma(y\triangleleft a)=a{}_{1}\otimes(y\triangleleft a{}_{2})+a{}_{(0)}\otimes y a{}_{(1)}+y{}_{[-1]} a\otimes y{}_{[0]}+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)$,
\item[(E21)] $\rho(b\triangleright x)+\tau\gamma(b\triangleright x)=x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})+b x{}_{[1]}\otimes x{}_{[0]}+b{}_{2}\otimes(b{}_{1}\triangleright x)+b{}_{(0)}\otimes b{}_{(-1)} x$,
\item[(E22)] $P(ba)+\Delta_{V}(\theta(b,a))+\tau P(ba)+\tau\Delta_{V}(\theta(b,a))\\ =a{}_{(-1)}\otimes\theta(b,a{}_{(0)})+\theta(b,a{}_{(0)})\otimes a{}_{(-1)}+a{}_{<1>}\otimes(b\triangleright a{}_{<2>})+(b\triangleright a{}_{<2>})\otimes a{}_{<1>}\\ +\theta(b{}_{(0)},a)\otimes b{}_{(1)}+b{}_{(1)}\otimes\theta(b{}_{(0)},a)+(b{}_{<1>}\triangleleft a)\otimes b{}_{<2>}+b{}_{<2>}\otimes(b{}_{<1>}\triangleleft a)$,
\item[(E23)] $\Delta_{V}(xy)\\ =x{}_{1} y\otimes x{}_{2}+x{}_{2} y\otimes x{}_{1}-x{}_{2}\otimes y x{}_{1}+y{}_{1}\otimes x y{}_{2}\\ +y{}_{1}\otimes y{}_{2} x-x y{}_{1}\otimes y{}_{2}+(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})+y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]},$
\item[(E24)] $\Delta_{V}(yx)+\tau\Delta_{V}(yx)\\ =x{}_{1} \otimes y x{}_{2}+y x{}_{2} \otimes x{}_{1}+y{}_{1} x\otimes y{}_{2}+y{}_{2}\otimes y{}_{1} x\\ +x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x).$ \end{enumerate}
Conversely, any alternative bialgebra structure on $E$ such that the canonical projection map $p: E\to A$ is both an alternative algebra homomorphism and an alternative coalgebra homomorphism is of this form. \end{theorem} Note that in this case, $(V,\cdot,\Delta_V)$ is a braided alternative bialgebra. Although $(A,\cdot,\Delta_A)$ is not an alternative sub-bialgebra of $E=A^{P}_{}\# {}^{}_{\theta}\, V$, it is itself an alternative bialgebra and a subspace of $E$. Denote the set of all alternative bialgebraic extending datums of type (I) by $\mathcal{IB}^{(1)}({A},V)$.
The second case is that we assume $P=0, \theta=0$ and $\phi, \psi$ to be trivial. Then by the above Theorem \ref{main2}, we obtain the following result.
\begin{theorem}\label{thm-42} Let $A$ be an alternative bialgebra and $V$ be a vector space. An extending datum of ${A}$ by $V$ of type (II) is $\Omega^{(2)}({A},V)=(\rightharpoonup, \leftharpoonup, \triangleright, \triangleleft, \sigma, \rho, \gamma, Q, \cdot_V, \Delta_V)$ consisting of linear maps \begin{eqnarray*} \triangleleft: V\otimes {A}\rightarrow {V},~~~~\triangleright: A\otimes {V}\rightarrow V,~~~~\sigma: V\otimes V \rightarrow {A},~~~\cdot_V:V\otimes V \rightarrow V,\\ {\rho}: V\to A\otimes V,~~~~{\gamma}: V\to V\otimes A,~~~~{Q}: V\rightarrow {A}\otimes {A},~~~~\Delta_V: V\rightarrow V\otimes V,\\ \rightharpoonup:V\otimes A \to A,~~~~ \leftharpoonup:A\otimes V \to A, \end{eqnarray*} Then the unified product $A^{}_{\sigma}\# {}^{Q}_{}\, V$ with bracket \begin{align} (a+ x)(b+ y):=\big(ab+x\rightharpoonup b+a\leftharpoonup y+\sigma(x, y))+( xy+x\triangleleft b+a\triangleright y\big) \end{align} and comultiplication \begin{eqnarray} \Delta_E(a)=\Delta_A(a),\quad \Delta_E(x)=\Delta_V(x)+{\rho}(x)+{\gamma}(x)+Q(x) \end{eqnarray} forms an alternative bialgebra if and only if $A_{\sigma}\# {}_{} V$ forms an alternative algebra, $A^{}\# {}^{Q}V$ forms an alternative coalgebra and the following conditions are satisfied:
\begin{enumerate} \item[(F1)] $\rho(x y)\\ =(x{}_{[1]}\leftharpoonup y)\otimes x{}_{[0]}+(x{}_{[-1]}\leftharpoonup y)\otimes x{}_{[0]}-x_{[1]} \otimes y x_{[0]}-\left(x \rightharpoonup y_{[-1]}\right) \otimes y_{[0]} \\ +y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes y{}_{[0]} x+\sigma(x{}_{1},y)\otimes x{}_{2}+\sigma(x{}_{2},y)\otimes x{}_{1}-\sigma(x,y{}_{1})\otimes y{}_{2}\\ +y{}_{\{1\}}\otimes (y{}_{\{2\}}\triangleright x)+y{}_{\{1\}}\otimes (x\triangleleft y{}_{\{2\}})-x{}_{\{2\}}\otimes(y\triangleleft x{}_{\{1\}})$,
\item[(F2)] $\gamma(x y)\\ =x{}_{[0]} y\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[-1]}-x_{[0]}\otimes (y\rightharpoonup x{}_{[-1]})-xy_{[0]}\otimes y_{[1]}+y{}_{[0]}\otimes (x\rightharpoonup y{}_{[1]})\\ +y{}_{[0]}\otimes(y{}_{[1]}\leftharpoonup x)+(x{}_{\{1\}}\triangleright y)\otimes x{}_{\{2\}}+(x{}_{\{2\}}\triangleright y)\otimes x{}_{\{1\}}-x{}_{2}\otimes\sigma(y,x{}_{1})\\ +y{}_{1}\otimes\sigma(x,y{}_{2})+y{}_{1}\otimes\sigma(y{}_{2},x)-(x\triangleleft y{}_{\{1\}})\otimes y{}_{\{2\}}$,
\item[(F3)] $\Delta_{A}(x \rightharpoonup b)+Q(x\triangleleft b)$ \\ $=(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[1]}+(x{}_{[0]}\rightharpoonup b)\otimes x{}_{[-1]}-x{}_{[1]}\otimes(b\leftharpoonup x{}_{[0]})-\left(x \rightharpoonup b_{1}\right) \otimes b_{2}\\ +b{}_{1}\otimes(x\rightharpoonup b{}_{2})+b{}_{1}\otimes(b{}_{2}\leftharpoonup x)+x{}_{\{1\}} b\otimes x{}_{\{2\}}+x{}_{\{2\}} b\otimes x{}_{\{1\}}-x{}_{\{2\}}\otimes b x{}_{\{1\}}$,
\item[(F4)] $\Delta_{A}(a\leftharpoonup y)+Q(a\triangleright y)$\\ $=(a{}_{1}\leftharpoonup y)\otimes a{}_{2}+\left(a_{2} \leftharpoonup y\right)\otimes a{}_{1}-a{}_{2}\otimes(y\rightharpoonup a{}_{1})-\left(a\leftharpoonup y_{[0]}\right) \otimes y_{[1]}\\ +y{}_{[-1]}\otimes(a\leftharpoonup y{}_{[0]})+y{}_{[-1]}\otimes(y{}_{[0]}\rightharpoonup a)+y{}_{\{1\}}\otimes a y{}_{\{2\}}+y{}_{\{1\}}\otimes y{}_{\{2\}} a-a y{}_{\{1\}}\otimes y{}_{\{2\}}$,
\item[(F5)] $\Delta_{V}(a \triangleright y)=y{}_{1}\otimes(a\triangleright y{}_{2})+y{}_{1}\otimes(y{}_{2}\triangleleft a)-\left(a \triangleright y_{1}\right) \otimes y_{2}$,
\item[(F6)] $\Delta_{V}(x \triangleleft b)=(x{}_{1}\triangleleft b)\otimes x{}_{2}+(x{}_{2}\triangleleft b)\otimes x{}_{1}-x{}_{2}\otimes(b\triangleright x{}_{1})$,
\item[(F7)]$\Delta_{A}(\sigma(x,y))+Q(xy)$\\ $=\sigma(x{}_{[0]},y)\otimes x{}_{[1]}+\sigma(x{}_{[0]},y)\otimes x{}_{[-1]}-\sigma(x,y{}_{[0]})\otimes y{}_{[1]}+y{}_{[-1]}\otimes\sigma(x,y{}_{[0]})\\ +y{}_{[-1]}\otimes\sigma(y{}_{[0]},x)-x{}_{[1]}\otimes\sigma(y,x{}_{[0]})+(x{}_{\{1\}}\leftharpoonup y)\otimes x{}_{\{2\}}+(x{}_{\{2\}}\leftharpoonup y)\otimes x{}_{\{1\}}\\ -x{}_{\{2\}}\otimes(y\rightharpoonup x{}_{\{1\}})+y{}_{\{1\}}\otimes(x\rightharpoonup y{}_{\{2\}})+y{}_{\{1\}}\otimes(y{}_{\{2\}}\leftharpoonup x)-(x\rightharpoonup y{}_{\{1\}})\otimes y{}_{\{2\}}$,
\item[(F8)]
$\gamma(x\triangleleft b)=(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}-x{}_{2}\otimes(b\leftharpoonup x{}_{1})-x{}_{[0]}\otimes b x{}_{[-1]}-\left(x\triangleleft b_{1}\right) \otimes b_{2}$,
\item[(F9)] $\gamma(a \triangleright y)=(a{}_{1}\triangleright y)\otimes a{}_{2}+(a{}_{2}\triangleright y)\otimes a{}_{1}+y{}_{1}\otimes(a\leftharpoonup y{}_{2})\\ +y{}_{[0]}\otimes a y{}_{[1]}+y{}_{1}\otimes(y{}_{2}\rightharpoonup a)+y{}_{[0]}\otimes y{}_{[1]} a-\left(a\triangleright y_{[0]}\right)\otimes y_{[1]}$,
\item[(F10)] $\rho(a \triangleright y)=-a{}_{2}\otimes(y\triangleleft a{}_{1})+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})-\left(a\leftharpoonup y_{1}\right) \otimes y_{2}+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)-a y_{[-1]} \otimes y_{[0]}$,
\item[(F11)] $\rho(x\triangleleft b)=(x{}_{1}\rightharpoonup b)\otimes x{}_{2}+x{}_{[-1]} b\otimes x{}_{[0]}+(x{}_{2}\rightharpoonup b)\otimes x{}_{1}\\ +x{}_{[1]} b\otimes x{}_{[0]}-x{}_{[1]}\otimes(b\triangleright x{}_{[0]})+b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{1}\otimes(b{}_{2}\triangleright x)$,
\item[(F12)] $\rho(y x)+\tau\gamma(y x)\\ =x{}_{[-1]}\otimes y x{}_{[0]}+(y\rightharpoonup x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\leftharpoonup x)\otimes y{}_{[0]}+y{}_{[1]}\otimes y{}_{[0]} x\\ +x{}_{\{1\}}\otimes(y\triangleleft x{}_{\{2\}})+y{}_{\{2\}}\otimes(y{}_{\{1\}}\triangleright x)+\sigma(y,x{}_{2})\otimes x{}_{1}+\sigma(y{}_{1},x)\otimes y{}_{2}$,
\item[(F13)] $\gamma(y x)+\tau\rho(y x)\\ =y x{}_{[0]}\otimes x{}_{[-1]}+y{}_{[0]} x\otimes y{}_{[1]}+y{}_{[0]}\otimes(y{}_{[-1]}\leftharpoonup x)+x{}_{[0]}\otimes(y\rightharpoonup x{}_{[1]})\\ +x{}_{1}\otimes\sigma(y,x{}_{2})+y{}_{2}\otimes\sigma(y{}_{1},x)+(y\triangleleft x{}_{\{2\}})\otimes x{}_{\{1\}}+(y{}_{\{1\}}\triangleright x)\otimes y{}_{\{2\}}$,
\item[(F14)] $\Delta_{A}(b \leftharpoonup x)+Q(b\triangleright x)+\tau\Delta_{A}(b \leftharpoonup x)+\tau Q(b\triangleright x)\\ =x{}_{[-1]}\otimes(b\leftharpoonup x{}_{[0]})+(b\leftharpoonup x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\leftharpoonup x)\otimes b{}_{2}+b{}_{2}\otimes(b{}_{1}\leftharpoonup x)\\ +x{}_{\{1\}}\otimes b x{}_{\{2\}}+b x{}_{\{2\}}\otimes x{}_{\{1\}}$,
\item[(F15)] $\Delta_{A}(y\rightharpoonup a)+Q(y\triangleleft a)+\tau\Delta_{A}(y\rightharpoonup a)+\tau Q(y\triangleleft a)\\ =a{}_{1}\otimes(y\rightharpoonup a{}_{2})+(y\rightharpoonup a{}_{2})\otimes a{}_{1}+(y{}_{[0]}\rightharpoonup a)\otimes y{}_{[1]}+y{}_{[1]}\otimes(y{}_{[0]}\rightharpoonup a)\\ +y{}_{\{1\}} a\otimes y{}_{\{2\}}+y{}_{\{2\}}\otimes y{}_{\{1\}} a$,
\item[(F16)] $\Delta_{V}(y\triangleleft a)+\tau\Delta_{V}(y\triangleleft a)=(y{}_{1}\triangleleft a)\otimes y{}_{2}+y{}_{2}\otimes(y{}_{1}\triangleleft a)$,
\item[(F17)] $\Delta_{V}(b \triangleright x)+\tau\Delta_{V}(b \triangleright x)=x{}_{1}\otimes(b\triangleright x{}_{2})+(b\triangleright x{}_{2})\otimes x{}_{1}$,
\item[(F18)] $\gamma(y\triangleleft a)+\tau\rho(y\triangleleft a)\\ =(y\triangleleft a{}_{2})\otimes a{}_{1}+(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}+y{}_{2}\otimes(y{}_{1}\rightharpoonup a)+y{}_{[0]}\otimes y{}_{[-1]} a$,
\item[(F19)] $\gamma(b\triangleright x)+\tau\rho(b\triangleright x)\\ =x{}_{1}\otimes(b\leftharpoonup x{}_{2})+x{}_{[0]}\otimes b x{}_{[1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\triangleright x)\otimes b{}_{2}$,
\item[(F20)] $\rho(y\triangleleft a)+\tau\gamma(y\triangleleft a)\\ =a{}_{1}\otimes(y\triangleleft a{}_{2})+(y{}_{1}\rightharpoonup a)\otimes y{}_{2}+y{}_{[-1]} a\otimes y{}_{[0]}+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)$,
\item[(F21)] $\rho(b\triangleright x)+\tau\gamma(b\triangleright x)\\ =x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})+(b\leftharpoonup x{}_{2})\otimes x{}_{1}+b x{}_{[1]}\otimes x{}_{[0]}+b{}_{2}\otimes(b{}_{1}\triangleright x)$,
\item[(F22)] $\Delta_{A}(\sigma(y,x))+Q(yx)+\tau\Delta_{A}(\sigma(y,x))+\tau Q(yx)\\ =x{}_{[-1]}\otimes\sigma(y,x{}_{[0]})+\sigma(y,x{}_{[0]})\otimes x{}_{[-1]}+x{}_{\{1\}}\otimes(y\rightharpoonup x{}_{\{2\}})+(y\rightharpoonup x{}_{\{2\}})\otimes x{}_{\{1\}}\\ +\sigma(y{}_{[0]},x)\otimes y{}_{[1]}+y{}_{[1]}\otimes\sigma(y{}_{[0]},x)+(y{}_{\{1\}}\leftharpoonup x)\otimes y{}_{\{2\}}+y{}_{\{2\}}\otimes(y{}_{\{1\}}\leftharpoonup x)$,
\item[(F23)] $\Delta_{V}(xy)\\ =x{}_{1} y\otimes x{}_{2}+x{}_{2} y\otimes x{}_{1}-x{}_{2}\otimes y x{}_{1}+y{}_{1}\otimes x y{}_{2}\\ +y{}_{1}\otimes y{}_{2} x-x y{}_{1}\otimes y{}_{2}+(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})+y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]},$
\item[(F24)] $\Delta_{V}(yx)+\tau\Delta_{V}(yx)\\ =x{}_{1} \otimes y x{}_{2}+y x{}_{2} \otimes x{}_{1}+y{}_{1} x\otimes y{}_{2}+y{}_{2}\otimes y{}_{1} x\\ +x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x).$ \end{enumerate}
Conversely, any alternative bialgebra structure on $E$ such that the canonical injection map $i: A\to E$ is both an alternative algebra homomorphism and an alternative coalgebra homomorphism is of this form. \end{theorem}
Note that in this case, $(A,\cdot,\Delta_A)$ is an alternative sub-bialgebra of $E=A^{}_{\sigma}\# {}^{Q}_{}\, V$ and $(V,\cdot,\Delta_V)$ is a braided alternative bialgebra. Denote the set of all alternative bialgebraic extending datums of type (II) by $\mathcal{IB}^{(2)}({A},V)$.
In the above two cases, we find that the braided alternative bialgebra $V$ plays a special role in the extending problem of the alternative bialgebra $A$. Note that $A^{P}_{}\# {}^{}_{\theta}\, V$ and $A^{}_{\sigma}\# {}^{Q}_{}\, V$ are both alternative bialgebra structures on $E$. Conversely, any alternative bialgebra extending system $E$ of ${A}$ through $V$ is isomorphic to one of these two types. Now from Theorem \ref{thm-41} and Theorem \ref{thm-42} we obtain the main result of this section, which solves the extending problem for alternative bialgebras.
\begin{theorem}\label{bim1} Let $({A}, \cdot, \Delta_A)$ be an alternative bialgebra, $E$ be a vector space containing ${A}$ as a subspace and $V$ be a complement of ${A}$ in $E$. Denote by $$\mathcal{HLB}(V,{A}):=\mathcal{IB}^{(1)}({A},V)\sqcup\mathcal{IB}^{(2)}({A},V)/\equiv.$$ Then the map \begin{eqnarray} &&\Upsilon: \mathcal{HLB}(V,{A})\rightarrow BExtd(E,{A}),\\ &&\overline{\Omega^{(1)}({A},V)}\mapsto A^{P}_{}\# {}^{}_{\theta}\, V,\quad \overline{\Omega^{(2)}({A},V)}\mapsto A^{}_{\sigma}\# {}^{Q}_{}\, V \end{eqnarray} is bijective, where $\overline{\Omega^{(i)}({A}, V)}$ is the equivalence class of $\Omega^{(i)}({A}, V)$ under $\equiv$. \end{theorem}
A very special case is when $\rightharpoonup$ and $\leftharpoonup$ are trivial in the above Theorem \ref{thm-42}. We obtain the following result. \begin{corollary}\label{thm4} Let $A$ be an alternative bialgebra and $V$ be a vector space. An extending datum of ${A}$ by $V$ is $\Omega({A},V)=(\triangleright, \triangleleft, \sigma, \rho, \gamma, Q, \cdot_V, \Delta_V)$ consisting of eight linear maps \begin{eqnarray*} \triangleleft: V\otimes {A}\rightarrow {V},~~~~\triangleright: A\otimes {V}\rightarrow V,~~~~\sigma: V\otimes V \rightarrow {A},~~~\cdot_V:V\otimes V \rightarrow V,\\ {\rho}: V\to A\otimes V,~~~~{\gamma}: V\to V\otimes A,~~~~{Q}: V\rightarrow {A}\otimes {A},~~~~\Delta_V: V\rightarrow V\otimes V. \end{eqnarray*} Then the unified product $A^{}_{\sigma}\# {}^{Q}_{}\, V$ with bracket \begin{align} (a+ x) (b+ y):=(ab+\sigma(x, y))+ (xy+x\triangleleft b+a\triangleright y) \end{align} and comultiplication \begin{eqnarray} \Delta_E(a)=\Delta_A(a),\quad \Delta_E(x)=\Delta_V(x)+{\rho}(x)+{\gamma}(x)+Q(x) \end{eqnarray} forms an alternative bialgebra if and only if $A_{\sigma}\# {}_{} V$ forms an alternative algebra, $A^{}\# {}^{Q} \, V$ forms an alternative coalgebra and the following conditions are satisfied: \begin{enumerate} \item[(G1)] $\rho(x y)\\ =-x_{[1]} \otimes y x_{[0]}+y{}_{[-1]}\otimes x y{}_{[0]}+y{}_{[-1]}\otimes y{}_{[0]} x+\sigma(x{}_{1},y)\otimes x{}_{2}+\sigma(x{}_{2},y)\otimes x{}_{1}\\ -\sigma(x,y{}_{1})\otimes y{}_{2}+y{}_{\{1\}}\otimes (y{}_{\{2\}}\triangleright x)+y{}_{\{1\}}\otimes (x\triangleleft y{}_{\{2\}})-x{}_{\{2\}}\otimes(y\triangleleft x{}_{\{1\}})$,
\item[(G2)] $\gamma(x y)\\ =x{}_{[0]} y\otimes x{}_{[1]}+x{}_{[0]} y\otimes x{}_{[-1]}-xy_{[0]}\otimes y_{[1]}+(x{}_{\{1\}}\triangleright y)\otimes x{}_{\{2\}}+(x{}_{\{2\}}\triangleright y)\otimes x{}_{\{1\}}\\ -x{}_{2}\otimes\sigma(y,x{}_{1})+y{}_{1}\otimes\sigma(x,y{}_{2})+y{}_{1}\otimes\sigma(y{}_{2},x)-(x\triangleleft y{}_{\{1\}})\otimes y{}_{\{2\}}$,
\item[(G3)] $Q(x\triangleleft b)=x{}_{\{1\}} b\otimes x{}_{\{2\}}+x{}_{\{2\}} b\otimes x{}_{\{1\}}-x{}_{\{2\}}\otimes b x{}_{\{1\}}$,
\item[(G4)] $Q(a\triangleright y)=y{}_{\{1\}}\otimes a y{}_{\{2\}}+y{}_{\{1\}}\otimes y{}_{\{2\}} a-a y{}_{\{1\}}\otimes y{}_{\{2\}}$,
\item[(G5)] $\Delta_{V}(a \triangleright y)=y{}_{1}\otimes(a\triangleright y{}_{2})+y{}_{1}\otimes(y{}_{2}\triangleleft a)-\left(a \triangleright y_{1}\right) \otimes y_{2}$,
\item[(G6)] $\Delta_{V}(x \triangleleft b)=(x{}_{1}\triangleleft b)\otimes x{}_{2}+(x{}_{2}\triangleleft b)\otimes x{}_{1}-x{}_{2}\otimes(b\triangleright x{}_{1})$,
\item[(G7)]$\Delta_{A}(\sigma(x,y))+Q(xy)$\\ $=\sigma(x{}_{[0]},y)\otimes x{}_{[1]}+\sigma(x{}_{[0]},y)\otimes x{}_{[-1]}-\sigma(x,y{}_{[0]})\otimes y{}_{[1]}+y{}_{[-1]}\otimes\sigma(x,y{}_{[0]})\\ +y{}_{[-1]}\otimes\sigma(y{}_{[0]},x)-x{}_{[1]}\otimes\sigma(y,x{}_{[0]})$,
\item[(G8)]
$\gamma(x\triangleleft b)=(x{}_{[0]}\triangleleft b)\otimes x{}_{[1]}+(x{}_{[0]}\triangleleft b)\otimes x{}_{[-1]}-x{}_{[0]}\otimes b x{}_{[-1]}-\left(x\triangleleft b_{1}\right) \otimes b_{2}$,
\item[(G9)] $\rho(a \triangleright y)=-a{}_{2}\otimes(y\triangleleft a{}_{1})+y{}_{[-1]}\otimes(a\triangleright y{}_{[0]})+y{}_{[-1]}\otimes(y{}_{[0]}\triangleleft a)-a y_{[-1]} \otimes y_{[0]}$,
\item[(G10)] $\rho(x\triangleleft b)=x{}_{[-1]} b\otimes x{}_{[0]}+x{}_{[1]} b\otimes x{}_{[0]}-x{}_{[1]}\otimes(b\triangleright x{}_{[0]})+b{}_{1}\otimes(x\triangleleft b{}_{2})+b{}_{1}\otimes(b{}_{2}\triangleright x)$,
\item[(G11)] $\gamma(a \triangleright y)=(a{}_{1}\triangleright y)\otimes a{}_{2}+(a{}_{2}\triangleright y)\otimes a{}_{1}+y{}_{[0]}\otimes a y{}_{[1]}+y{}_{[0]}\otimes y{}_{[1]} a-\left(a\triangleright y_{[0]}\right)\otimes y_{[1]}$,
\item[(G12)] $\rho(y x)+\tau\gamma(y x)\\ =x{}_{[-1]}\otimes y x{}_{[0]}+y{}_{[1]}\otimes y{}_{[0]} x+x{}_{\{1\}}\otimes(y\triangleleft x{}_{\{2\}})\\ +y{}_{\{2\}}\otimes(y{}_{\{1\}}\triangleright x)+\sigma(y,x{}_{2})\otimes x{}_{1}+\sigma(y{}_{1},x)\otimes y{}_{2}$,
\item[(G13)] $\gamma(y x)+\tau\rho(y x)\\ =y x{}_{[0]}\otimes x{}_{[-1]}+y{}_{[0]} x\otimes y{}_{[1]}+x{}_{1}\otimes\sigma(y,x{}_{2})\\ +y{}_{2}\otimes\sigma(y{}_{1},x)+(y\triangleleft x{}_{\{2\}})\otimes x{}_{\{1\}}+(y{}_{\{1\}}\triangleright x)\otimes y{}_{\{2\}}$,
\item[(G14)] $Q(b\triangleright x)+\tau Q(b\triangleright x)=x{}_{\{1\}}\otimes b x{}_{\{2\}}+b x{}_{\{2\}}\otimes x{}_{\{1\}}$,
\item[(G15)] $Q(y\triangleleft a)+\tau Q(y\triangleleft a)=y{}_{\{1\}} a\otimes y{}_{\{2\}}+y{}_{\{2\}}\otimes y{}_{\{1\}} a$,
\item[(G16)] $\Delta_{V}(y\triangleleft a)+\tau\Delta_{V}(y\triangleleft a)=(y{}_{1}\triangleleft a)\otimes y{}_{2}+y{}_{2}\otimes(y{}_{1}\triangleleft a)$,
\item[(G17)] $\Delta_{V}(b \triangleright x)+\tau\Delta_{V}(b \triangleright x)=x{}_{1}\otimes(b\triangleright x{}_{2})+(b\triangleright x{}_{2})\otimes x{}_{1}$,
\item[(G18)] $\gamma(y\triangleleft a)+\tau\rho(y\triangleleft a)=(y\triangleleft a{}_{2})\otimes a{}_{1}+(y{}_{[0]}\triangleleft a)\otimes y{}_{[1]}+y{}_{[0]}\otimes y{}_{[-1]} a$,
\item[(G19)] $\gamma(b\triangleright x)+\tau\rho(b\triangleright x)=x{}_{[0]}\otimes b x{}_{[1]}+(b\triangleright x{}_{[0]})\otimes x{}_{[-1]}+(b{}_{1}\triangleright x)\otimes b{}_{2}$,
\item[(G20)] $\rho(y\triangleleft a)+\tau\gamma(y\triangleleft a)=a{}_{1}\otimes(y\triangleleft a{}_{2})+y{}_{[-1]} a\otimes y{}_{[0]}+y{}_{[1]}\otimes(y{}_{[0]}\triangleleft a)$,
\item[(G21)] $\rho(b\triangleright x)+\tau\gamma(b\triangleright x)=x{}_{[-1]}\otimes(b\triangleright x{}_{[0]})+b x{}_{[1]}\otimes x{}_{[0]}+b{}_{2}\otimes(b{}_{1}\triangleright x)$,
\item[(G22)] $\Delta_{A}(\sigma(y,x))+Q(yx)+\tau\Delta_{A}(\sigma(y,x))+\tau Q(yx)\\ =x{}_{[-1]}\otimes\sigma(y,x{}_{[0]})+\sigma(y,x{}_{[0]})\otimes x{}_{[-1]}+\sigma(y{}_{[0]},x)\otimes y{}_{[1]}+y{}_{[1]}\otimes\sigma(y{}_{[0]},x)$,
\item[(G23)] $\Delta_{V}(xy)\\ =x{}_{1} y\otimes x{}_{2}+x{}_{2} y\otimes x{}_{1}-x{}_{2}\otimes y x{}_{1}+y{}_{1}\otimes x y{}_{2}\\ +y{}_{1}\otimes y{}_{2} x-x y{}_{1}\otimes y{}_{2}+(x{}_{[-1]}\triangleright y)\otimes x{}_{[0]}+(x{}_{[1]}\triangleright y)\otimes x{}_{[0]}\\ -x{}_{[0]}\otimes(y\triangleleft x{}_{[-1]})+y{}_{[0]}\otimes(x\triangleleft y{}_{[1]})+y{}_{[0]}\otimes(y{}_{[1]}\triangleright x)-(x\triangleleft y{}_{[-1]})\otimes y{}_{[0]},$
\item[(G24)] $\Delta_{V}(yx)+\tau\Delta_{V}(yx)\\ =x{}_{1} \otimes y x{}_{2}+y x{}_{2} \otimes x{}_{1}+y{}_{1} x\otimes y{}_{2}+y{}_{2}\otimes y{}_{1} x\\ +x{}_{[0]}\otimes(y\triangleleft x{}_{[1]})+(y\triangleleft x{}_{[1]})\otimes x{}_{[0]}+(y{}_{[-1]}\triangleright x)\otimes y{}_{[0]}+y{}_{[0]}\otimes(y{}_{[-1]}\triangleright x).$ \end{enumerate} \end{corollary}
\vskip7pt \footnotesize{ \noindent Tao Zhang\\ College of Mathematics and Information Science,\\ Henan Normal University, Xinxiang 453007, P. R. China;\\
E-mail address: \texttt{{[email protected]}}
\vskip7pt \footnotesize{ \noindent Fang Yang\\ College of Mathematics and Information Science,\\ Henan Normal University, Xinxiang 453007, P. R. China;\\
E-mail address: \texttt{{[email protected]}}
\end{document}
\begin{definition}[Definition:Initial Homomorphism from Integers to Ring with Unity]
Let $\Z$ be the ring of integers.
Let $R$ be a ring with unity.
The '''initial homomorphism''' $\Z \to R$ is the unital ring homomorphism that sends $n \in \Z$ to the $n$th multiple $n \cdot 1$ of the unity $1$ of $R$:
:$ n \mapsto n \cdot 1$.
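For example, for $R = \Z / m \Z$ the initial homomorphism is reduction modulo $m$, that is, $n \mapsto n + m \Z$.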
\end{definition}
\begin{document}
\title{Propagation of singularities under Schr\"odinger equations on manifolds with ends} \begin{abstract}
We prove a microlocal smoothing effect for Schr\"odinger equations on manifolds. We employ radially homogeneous wavefront sets introduced by Ito and Nakamura (Amer. J. Math., 2009). In terms of radially homogeneous wavefront sets, we can apply our theory to both asymptotically conical and asymptotically hyperbolic manifolds. We relate wavefront sets of initial states to radially homogeneous wavefront sets of the states after the time evolution. We also prove a relation between radially homogeneous wavefront sets and homogeneous wavefront sets, and thereby recover a special case of a result of Nakamura (2005). \end{abstract}
\section{Introduction}
\subsection{Motivation: homogeneous wavefront sets on Euclidean spaces}\label{subs_hwf_euclid}
For proving a microlocal smoothing effect of a Schr\"odinger equation \[
i\frac{\partial u}{\partial t}(t, x)=Hu(t, x) \quad (t, x)\in \mathbb{R}\times \mathbb{R}^n, \] it is known that one needs not only the usual wavefront set, which is localized with respect to positions, but also another notion of wavefront set through which one can access the behavior of functions near infinity. One such notion is the homogeneous wavefront set. Recall the definition of (homogeneous) wavefront sets from \cite{Nakamura05}:
\begin{divstep} \fstep{Wavefront sets on Euclidean spaces}For $u\in L^2(\mathbb{R}^n)$, we define a set $\rmop{WF}(u)\subset T^*\mathbb{R}^n\setminus 0$ by the following property: a point $(x_0, \xi_0)\in T^*\mathbb{R}^n\setminus 0$ \textit{does not} belong to $\rmop{WF} (u)$ if there exists $a\in C_c^\infty (T^*\mathbb{R}^n)$ such that $a=1$ near $(x_0, \xi_0)$ and
\[
\|a^\mathrm{w}(x, \hbar D)u\|_{L^2}=O(\hbar^\infty)
\]
as $\hbar \to 0$.
Here $a^\mathrm{w}(x, \hbar D)$ is the usual semiclassical Weyl quantization of the symbol $a(x, \xi)$:
\[
a^\mathrm{w}(x, \hbar D)u(x):=\frac{1}{(2\pi\hbar)^n}\int_{\mathbb{R}^{2n}} a\left(\frac{x+y}{2}, \xi\right) e^{i\xi \cdot (x-y)/\hbar}u(y)\, \mathrm{d} y \mathrm{d}\xi.
\]
\fstep{Homogeneous wavefront sets on Euclidean spaces}For $u\in L^2(\mathbb{R}^n)$, we define a set $\rmop{HWF}(u)\subset T^*\mathbb{R}^n\setminus \{(0, 0)\}$ by the following property: a point $(x_0, \xi_0)\in T^*\mathbb{R}^n\setminus \{(0, 0)\}$ \textit{does not} belong to $\rmop{HWF} (u)$ if there exists $a\in C_c^\infty (T^*\mathbb{R}^n)$ such that $a=1$ near $(x_0, \xi_0)$ and
\[
\|a^\mathrm{w}(\hbar x, \hbar D)u\|_{L^2}=O(\hbar^\infty)
\]
as $\hbar \to 0$.
Here $a^\mathrm{w}(\hbar x, \hbar D)$ is the semiclassical Weyl quantization of the symbol $a(\hbar x, \xi)$: \[
a^\mathrm{w}(\hbar x, \hbar D)u(x):=\frac{1}{(2\pi\hbar)^n}\int_{\mathbb{R}^{2n}} a\left(\frac{\hbar x+\hbar y}{2}, \xi\right) e^{i\xi \cdot (x-y)/\hbar}u(y)\, \mathrm{d} y\mathrm{d} \xi. \] \end{divstep}
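We note a relation between the quantizations appearing in the two definitions above: if $U_\hbar$ denotes the unitary dilation $(U_\hbar u)(x):=\hbar^{n/2}u(\hbar x)$, then a change of variables in the Weyl quantization gives
\[
a^\mathrm{w}(\hbar x, \hbar D)=U_\hbar\, a^\mathrm{w}(x, \hbar^2 D)\, U_\hbar^{-1},
\]
so the condition defining $\rmop{HWF}(u)$ can equivalently be tested on the dilated functions $U_\hbar^{-1}u$.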
Nakamura \cite{Nakamura05} proved that, if \begin{itemize}
\item $H=-\sum_{j, k=1}^n \partial_{x_j}a_{jk}(x)\partial_{x_k}/2+V(x)$, with a positive definite matrix $(a_{jk}(x))_{j, k=1}^n$, $a_{jk}(x), V(x)\in \mathbb{R}$, $|\partial_x^\alpha (a_{jk}(x)-\delta_{jk})|\leq C_\alpha \jbracket{x}^{-\mu-|\alpha|}$ and $|\partial^\alpha V(x)|\leq C_\alpha \jbracket{x}^{\nu-|\alpha|}$ for some $\mu>0$ and $\nu <2$, and
\item a classical orbit $(x(t), \xi(t))$ with respect to the classical Hamiltonian $h_0(x, \xi):=\sum_{j, k=1}^n a_{jk}(x)\xi_j \xi_k/2$ is nontrapping ($|x(t)|\to\infty$ as $t\to\infty$) and has an initial point $(x(0), \xi(0))=(x_0, \xi_0)$ and an asymptotic momentum $\xi_\infty:=\lim_{t\to \infty}\xi(t)$, \end{itemize} then $(x_0, \xi_0)\in \rmop{WF}(u)$ implies $(t_0\xi_\infty, \xi_\infty)\in \rmop{HWF}(e^{-it_0 H}u)$ for $u\in L^2(\mathbb{R}^n)$ and $t_0>0$. As a corollary, $e^{-itH}u=O(\jbracket{x}^{-\infty})$ ($\Gamma \ni x\to \infty$) for some $t>0$ and some conic neighborhood $\Gamma$ of the asymptotic momentum $\xi_\infty$ implies $(x_0, \xi_0)\not\in \rmop{WF}(u)$, which is proved by Craig, Kappeler and Strauss \cite{Craig-Kappeler-Strauss96}.
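For instance, for the free Hamiltonian $H=-\triangle/2$ (that is, $a_{jk}=\delta_{jk}$ and $V=0$), the classical orbits are the straight lines $(x(t), \xi(t))=(x_0+t\xi_0, \xi_0)$, every orbit with $\xi_0\neq 0$ is nontrapping, and $\xi_\infty=\xi_0$; the above statement then reads: $(x_0, \xi_0)\in \rmop{WF}(u)$ implies $(t_0\xi_0, \xi_0)\in \rmop{HWF}(e^{it_0 \triangle/2}u)$ for every $t_0>0$.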
K. Ito \cite{Ito06} generalized this result to Euclidean spaces with asymptotically flat scattering metrics. Other studies on singularities of solutions to Schr\"odinger equations can be found in Doi \cite{Doi96} and Nakamura \cite{Nakamura09}.
There are other concepts of wavefront sets for investigating propagation of singularities under Schr\"odinger equations. One of them is the Gabor wavefront set, defined in terms of Gabor transforms (also known as short-time Fourier transforms or wave packet transforms). Schulz and Wahlberg \cite{Schulz-Wahlberg17} proved the equality of homogeneous wavefront sets and Gabor wavefront sets. Gabor wavefront sets are studied in Cordero, Nicola and Rodino \cite{Cordero-Nicola-Rodino15} and Pravda-Starov, Rodino and Wahlberg \cite{Pravda-Starov-Rodino-Wahlberg18}. Further studies via Gabor transforms are in Kato, Kobayashi and S. Ito \cite{Kato-Kobayashi-Ito15,Kato-Kobayashi-Ito17}. Another concept of wavefront sets is the quadratic scattering wavefront set, which is studied by Wunsch \cite{Wunsch99}. An equivalence of quadratic scattering wavefront sets and homogeneous wavefront sets was proved by K. Ito \cite{Ito06}. Melrose \cite{Melrose94} introduced scattering wavefront sets for investigating singularities at infinity. Analytic wavefront sets are also employed for an investigation of propagation of singularities under Schr\"odinger equations; they are studied by Robbiano and Zuily \cite{Robbiano-Zuily99, Robbiano-Zuily02} and by Martinez, Nakamura and Sordoni \cite{Martinez-Nakamura-Sordoni09}.
\subsection{Radially homogeneous wavefront sets on manifolds}
In the following, in contrast to Section \ref{subs_hwf_euclid}, we employ pseudodifferential operators acting on \textit{half-densities}. We will briefly describe basic definitions and properties of half-densities in Section \ref{subs_psido_half_densities}.
We recall wavefront sets on manifolds:
\begin{defi}[Wavefront sets on manifolds]
Let $u\in L^2(M; \Omega^{1/2})$. $\rmop{WF}(u)$ is a subset of $T^*M\setminus 0$ defined as follows: $(x_0, \xi_0)\in T^*M\setminus 0$ is \textit{not} in $\rmop{WF}(u)$ if there exist a local coordinate $\varphi: U\, (\subset M)\to V\, (\subset \mathbb{R}^n)$, $\chi\in C_c^\infty (U)$ and $a\in C_c^\infty (T^*M)$ such that $\chi=1$ near $x_0$, $a=1$ near $(x_0, \xi_0)$, $\rmop{supp}a \subset T^* U$ and
\[
\| \chi\varphi^* (\tilde \varphi_*a)^\mathrm{w}(x, \hbar D)\varphi_*(\chi u)\|_{L^2}=O(\hbar^\infty).
\] \end{defi}
Next we introduce radially homogeneous wavefront sets, which were introduced by K. Ito and Nakamura \cite{Ito-Nakamura09} to prove a microlocal smoothing effect on scattering manifolds. Before we introduce radially homogeneous wavefront sets on manifolds, we need to equip manifolds with some structure corresponding to the dilation $x\mapsto \hbar x$ on Euclidean spaces. In this paper, motivated by the fact that the dilation $x\mapsto \hbar x$ on Euclidean spaces is equivalent to $(r, \theta)\mapsto (\hbar r, \theta)$ in polar coordinates, we introduce a structure of ends of manifolds:
\begin{assu}\label{assu_manifold_with_end_0}
Let $n$ be the dimension of $M$. We assume that there exist an open subset $E$ of $M$, a compact manifold $S$ with dimension $n-1$, and a diffeomorphism $\Psi: E\to \mathbb{R}_+\times S$. Here $\mathbb{R}_+:=(0, \infty)$. We also assume that $M\setminus E$ is a compact subset of $M$. \end{assu}
The mapping $\Psi: E\to \mathbb{R}_+\times S$ in Assumption \ref{assu_manifold_with_end_0} induces the canonical mapping \begin{equation}\label{eq_defi_psi_lift}
\tilde\Psi: T^*E \longrightarrow T^*(\mathbb{R}_+ \times S), \quad
\tilde \Psi (x, \rho \mathrm{d} r +\eta):= (\Psi (x), \rho, \eta). \end{equation}
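\begin{exam*}
For $M=\mathbb{R}^n$ one may take $E=\mathbb{R}^n\setminus\{0\}$, $S=S^{n-1}$ and $\Psi(x):=(|x|, x/|x|)$, so that $M\setminus E=\{0\}$ is compact. This is the setting of Section \ref{subs_hwf_euclid}, and it is the case treated in Proposition \ref{prop_hwf_polar} below.
\end{exam*}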
We introduce a class of functions dependent only on angular variables $\theta$ near infinity:
\begin{defi}\label{defi_conical}
A function $u\in C^\infty (M)$ is \textit{cylindrical} if there exist a constant $R\geq 1$ and a function $u_\mathrm{ang}\in C^\infty (S)$ such that $(u\circ \Psi^{-1})(r, \theta)=u_\mathrm{ang}(\theta)$ for all $r\geq R$. \end{defi}
\begin{exam*}
\begin{itemize}
\item All constant functions are cylindrical.
\item All $u\in C_c^\infty (M)$ are cylindrical by considering $u_\mathrm{ang}=0$.
\item The set of all cylindrical functions forms an algebra with respect to the natural sum, multiplication by complex numbers and product.
\end{itemize} \end{exam*}
Throughout this paper, we use the term ``polar coordinates'' in the following sense:
\begin{defi}
We call $\varphi: U\, (\subset M) \to V\, (\subset \mathbb{R}^n)$ \textit{polar coordinates} if $\varphi$ is a local coordinate of the form $\varphi=(\rmop{id}\times \varphi^\prime)\circ \Psi$ where $\varphi^\prime: U^\prime\, (\subset S)\to V^\prime\, (\subset \mathbb{R}^{n-1})$ is a local coordinate on $S$. \end{defi}
Now we define radially homogeneous wavefront sets on manifolds.
\begin{defi}[Radially homogeneous wavefront sets on manifolds]\label{defi_hwf_manifolds}
For $u\in L^2(M; \Omega^{1/2})$, we define $\rmop{WF}\nolimits^\mathrm{rh} (u)$ as a subset of $T^*E$ defined as follows: $(x_0, \xi_0)\in T^*E$ is \textit{not} in $\rmop{WF}\nolimits^\mathrm{rh} (u)$ if there exist
\begin{itemize}
\item a polar coordinate $\varphi: U \to V$,
\item a cylindrical function $\chi\in C^\infty (M)$ such that $\rmop{supp} \chi \subset U$ and $\Psi_*\chi(r, \theta)=1$ for large $r$ and $\theta$ near $\theta_0$ with $\Psi(x_0)=(r_0, \theta_0)$, and
\item $a\in C_c^\infty (T^*V)$
\end{itemize}
such that $a=1$ near $\tilde\varphi (x_0, \xi_0)$ and
\begin{equation}\label{eq_hwf_manifold_definition}
\| \chi\varphi^* a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)\varphi_*(\chi u)\|_{L^2}=O(\hbar^\infty).
\end{equation} \end{defi}
\subsection{Main result}
Our subject is a Schr\"odinger equation for half-densities on manifolds: \begin{equation}
\label{eq_schrodinger_manifold}
i\frac{\partial}{\partial t}u(t, x)=Hu(t, x). \end{equation} Here $H$ is a Hamiltonian of the form \[
H=-\frac{1}{2}\triangle_g+V(x), \] where $\triangle_g$ is the Laplace operator with respect to the Riemannian metric $g$ and $V$ is a real-valued smooth function. We further assume that \begin{equation}\label{eq_potential_bdd}
|\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} V(r, \theta)|\leq C_\alpha \end{equation} holds for all multiindices $\alpha=(\alpha_0, \alpha^\prime)\in \mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0}$ in polar coordinates $(r, \theta)$.
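For instance, every cylindrical potential $V$ in the sense of Definition \ref{defi_conical} satisfies \eqref{eq_potential_bdd} on $\{r\geq 1\}$: for $r\geq R$ all $r$-derivatives vanish and the $\theta$-derivatives are those of $V_\mathrm{ang}\in C^\infty(S)$, which are bounded by compactness of $S$, while $\Psi^{-1}([1, R]\times S)$ is compact.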
$H$ acts on half-densities as \[
H(\tilde u |\mathrm{vol}_g|^{1/2})=\left(-\frac{1}{2}\triangle_g \tilde u(x)+V(x)\tilde u(x)\right) |\mathrm{vol}_g|^{1/2}. \]
($|\mathrm{vol}_g|^{1/2}$ is the ``square root'' of the natural volume form $\mathrm{vol}_g=\sqrt{\det (g_{jk})}\mathrm{d} x_1\wedge \cdots \wedge \mathrm{d} x_n$ associated with the Riemannian metric $g$. We will explain details in Section \ref{subs_psido_half_densities}.)
We will explain our assumptions concretely in the following, but we emphasize that our setting includes not only the cases of asymptotically conical manifolds, but also those of asymptotically hyperbolic manifolds.
\begin{rema*}
We assume the boundedness \eqref{eq_potential_bdd} rather than a growth condition such as $\partial_x^\alpha V =O(\jbracket{x}^{2-\varepsilon})$ for some $\varepsilon>0$, in order to argue in the symbol class $S^m_\mathrm{cyl}(T^*M)$ (introduced in Section \ref{subs_cylindrical_class}), which does not allow any growth in the spatial direction ($r\to \infty$). While it may be possible to introduce suitable classes of symbols with spatial growth and to treat such potentials $V$, we restrict ourselves to the case of bounded potentials for simplicity. \end{rema*}
We take suitable polar coordinates such that the vector $\partial_r$ and the tangent space $T_{(r, \theta)}S$ intersect orthogonally:
\begin{assu}\label{assu_manifold_with_end_II}
Under Assumption \ref{assu_manifold_with_end_0}, the Riemannian metric $g$ has the representation \begin{equation}\label{eq_metric_polar}
g=\Psi^*(c(r, \theta)^2 \mathrm{d} r^2+h(r, \theta, \mathrm{d} \theta)), \quad \text{for } (r, \theta)\in \mathbb{R}_+ \times S \simeq E, \end{equation} where $c: E\to \mathbb{R}_+$ is a smooth function and $h(r, \theta, \mathrm{d} \theta)$ is a metric on $S$ dependent smoothly on the radial variable $r$. \end{assu}
We will see in Section \ref{subs_escape_function} a construction of such diffeomorphism $\Psi: E\to \mathbb{R}_+ \times S$ by escape functions, which is a generalization of the function $|x|$ in $\mathbb{R}^n$.
We further assume a compatibility condition of the diffeomorphism $\Psi: E\to \mathbb{R}_+\times S$ and the Riemannian metric $g$. Let $h^*(r, \theta, \eta)$ be the fiber metric on $T^*S$ induced by $h(r, \theta, \mathrm{d} \theta)$. More explicitly, if $h(r, \theta, \mathrm{d} \theta)$ has a local representation \[
h(r, \theta, \mathrm{d} \theta)=\sum_{i, j=1}^{n-1}h_{ij}(r, \theta)\mathrm{d} \theta_i \mathrm{d} \theta_j, \] then we define \[
h^*(r, \theta, \eta):=\sum_{i, j=1}^{n-1}h^{ij}(r, \theta)\eta_i \eta_j, \] where $(h^{ij}(r, \theta))$ is the inverse matrix of $(h_{ij}(r, \theta))$ and $\eta=\eta_1 \mathrm{d} \theta_1+\cdots +\eta_{n-1}\mathrm{d} \theta_{n-1}$. Furthermore we define \[
|\xi|_{g^*}^2:=c(r, \theta)^{-2}\rho^2+h^*(r, \theta, \eta) \]
for $\xi=\rho \,\mathrm{d} r+\eta\in T^*_r \mathbb{R}_+\oplus T^*_\theta S$. $|\xi|_{g^*}$ is the norm of $\xi\in T^*M$ with respect to the fiber metric $g^*$ on $T^*M$ induced by the metric $g$. We define a free Hamiltonian $h_0(x, \xi)$ as \begin{equation}\label{eq_free_hamiltonian}
h_0(x, \xi):=\frac{1}{2}|\xi|_{g^*}^2. \end{equation}
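For instance, for the Euclidean metric on $\mathbb{R}^n$ written in polar coordinates, $g=\mathrm{d} r^2+r^2h_{S^{n-1}}(\theta, \mathrm{d}\theta)$ with $h_{S^{n-1}}$ the round metric on $S^{n-1}$, we have $c\equiv 1$, $h^*(r, \theta, \eta)=r^{-2}h_{S^{n-1}}^*(\theta, \eta)$ and
\[
h_0(x, \xi)=\frac{1}{2}\big(\rho^2+r^{-2}h_{S^{n-1}}^*(\theta, \eta)\big).
\]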
\begin{assu}\label{assu_classical}
There exist a function $f(r)$ and constants $c_0>1/2$, $C>0$ and $\mu>0$ such that the following properties hold.
\begin{enumerate}[label=(\roman*)]
\item $f$ is positive, belongs to $C^1$ class and the inequality \begin{equation}\label{eq_ineq_f_logbdd}
c_0r^{-1}\leq \frac{f^\prime (r)}{f(r)}\leq C
\end{equation}
holds for all $r\geq 1$.
\item The inequality
\begin{equation}\label{eq_ineq_model_bdd}
C^{-1}f(r)^{-2}h^*(1, \theta, \eta)\leq h^*(r, \theta, \eta)\leq Cf(r)^{-2}h^*(1, \theta, \eta)
\end{equation}
holds for all $(r, \theta, \eta)\in [1, \infty)\times T^*S$.
\item \label{assu_sub_dtheta_h}The inequality
\begin{equation}\label{eq_ineq_angular_bdd}
|\partial_\theta h^{ij}(r, \theta, \eta)|\leq Ch^*(r, \theta, \eta)
\end{equation}
holds for all $(r ,\theta, \eta)\in [1, \infty)\times T^*S$.
\item (Classical analogue of Mourre estimate) The estimate
\begin{equation}\label{eq_classical_mourre} \{ f\rho, h_0\} \geq 2f^\prime (r)(h_0(r, \theta, \eta)-Cr^{-1-\mu}),\end{equation}
holds for all $(r, \theta, \rho, \eta)\in T^*(\mathbb{R}_+\times S)$. Here $\{ \cdot, \cdot\}$ is the Poisson bracket on $T^*M$:
\[
\{ a, b\}:=\sum_{i=1}^n \left( \frac{\partial a}{\partial x_i}\frac{\partial b}{\partial \xi_i}-\frac{\partial a}{\partial \xi_i}\frac{\partial b}{\partial x_i}\right).
\]
\item (Short range conditions)\label{assu_sub_short_range} $|c-1|$ and $|\partial_\theta c|$ are at most $O(r^{-1-\mu})$ as $r\to\infty$.
\end{enumerate} \end{assu}
\begin{rema*}
The sizes of $|\partial_\theta h^*(r, \theta, \eta)|$ in \eqref{eq_ineq_angular_bdd} and of $|\partial_\theta V|$ and $|\partial_\theta c|$ in \ref{assu_sub_short_range} of Assumption \ref{assu_classical} are measured by the metric $h(1, \theta, \mathrm{d} \theta)$. \end{rema*}
\begin{exam*}[model manifolds]
Assumption \ref{assu_classical} is a generalization of the model cases $g=\mathrm{d} r^2+f(r)^2 h(\theta, \mathrm{d} \theta)$. For instance, $f(r)$ is of the form $f(r)=r^a$ or $e^{b r^c}$ ($a>1/2$, $b>0$, $0<c\leq 1$) for $r\geq 1$. The free Hamiltonian \eqref{eq_free_hamiltonian} with respect to this metric becomes
\[ h_0=\frac{1}{2}(\rho^2+f(r)^{-2}h^*(\theta, \eta)). \]
A direct computation gives
\[ \{ f\rho, h_0\}=f^\prime(r)\rho^2+f^\prime(r)f(r)^{-2}h^*(\theta, \eta)=2f^\prime(r)h_0. \]
This is a prototype of the classical Mourre type estimate \eqref{eq_classical_mourre} in Assumption \ref{assu_classical}. \end{exam*}
Then, for any nontrapping classical orbit $\tilde\Psi^{-1}(r(t), \theta(t), \rho(t), \eta(t))$ ($r(t)\to \infty$) with respect to the free Hamiltonian $h_0$, the limit \[
(\rho_\infty, \theta_\infty, \eta_\infty):=\lim_{t\to\infty} (\rho(t), \theta(t), \eta(t))\in \mathbb{R}_+\times T^*S \] exists under Assumption \ref{assu_classical}. We state this more precisely in Theorem \ref{theo_classical_estimate}. We remark that the classical analogue of the Mourre estimate \eqref{eq_classical_mourre} plays an essential role in the proof of Theorem \ref{theo_classical_estimate}. The inequality \eqref{eq_classical_mourre} ensures that the classical orbit $(x(t), \xi(t))$ approaches an asymptotic orbit $\tilde\Psi^{-1}(\rho_\infty t, \theta_\infty, \rho_\infty, \eta_\infty)$ rapidly.
\begin{rema*}
It is well known that the inequality
\begin{equation}\label{eq_mourre}
1_I(H)i[H, A]1_I(H)\geq c1_I(H)-1_I(H)K 1_I(H),
\end{equation}
where
\begin{itemize}
\item $A$, $H$ are self-adjoint operators on an abstract Hilbert space $\mathcal{H}$,
\item $I=(a, b)$ is a bounded open interval and $1_I:\mathbb{R}\to \{0, 1\}$ is the indicator function of $I$,
\item $c$ is a positive constant, and
\item $K$ is a compact operator on $\mathcal{H}$,
\end{itemize}
plays an important role in quantum scattering theory. The inequality \eqref{eq_mourre} is called the \textit{Mourre estimate} \cite{Mourre80}. A typical case is $\mathcal{H}=L^2(\mathbb{R}^n)$, $H=-\triangle/2$ and $A=(x\cdot D_x+D_x\cdot x)/2$. The principal symbol of $A$ is equal to $x\cdot \xi=r\rho$ in polar coordinates, and the principal symbol of $i[H, A]$ is $\{ r\rho, h_0\}$. Thus we can regard \eqref{eq_classical_mourre} as a classical analogue of \eqref{eq_mourre} with $f(r)=r$. \end{rema*}
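In this typical case one computes $i[H, A]=-\triangle=2H$, so that \eqref{eq_mourre} holds with $K=0$ and $c=2a$ for any interval $I=(a, b)$ with $a>0$; correspondingly, \eqref{eq_classical_mourre} holds with $f(r)=r$ and $C=0$ for the Euclidean free Hamiltonian.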
The last assumption is the boundedness of quantities related to the metric, which is necessary for applying microlocal analysis.
\begin{assu}\label{assu_higher_derivative}
For all multiindices $\alpha\in \mathbb{Z}_{\geq 0}^n$ with $|\alpha|\geq 1$, there exists $C>0$ such that the inequalities
\[
|\partial_{r, \theta}^\alpha h^*(r, \theta, \eta)|\leq Ch^*(r, \theta, \eta)
\]
and
\[
|\partial_{r, \theta}^\alpha c(r, \theta)|\leq Cr^{-1-\mu}
\]
hold for all $(r ,\theta, \eta)\in [1, \infty)\times T^*S$. \end{assu}
Now we state our main theorem.
\begin{theo}\label{theo_main_rhwf}
Suppose Assumptions \ref{assu_manifold_with_end_0}--\ref{assu_higher_derivative}. Let $u\in L^2(M; \Omega^{1/2})$ and $t_0>0$. Let $(x(t), \xi (t))$ be a nontrapping classical orbit with initial point $(x_0, \xi_0)\in T^*M$ with respect to the free Hamiltonian $h_0$, and let $(\rho_\infty, \theta_\infty, \eta_\infty)\in \mathbb{R}_+\times T^*S$ be the asymptotic angle and momentum. Then $(x_0, \xi_0)\in \rmop{WF}(u)$ implies $\tilde\Psi^{-1}(\rho_\infty t_0, \theta_\infty, \rho_\infty, \eta_\infty)\in \rmop{WF}\nolimits^\mathrm{rh} (e^{-it_0H}u)$. \end{theo}
We emphasize that the proof of the main theorem treats the asymptotically conical and the asymptotically hyperbolic cases in a unified way.
In the case of $M=\mathbb{R}^n$, $S=S^{n-1}$ (the $(n-1)$-dimensional unit sphere) and $\Psi(x):=(|x|, x/|x|)$, we have a relation between radially homogeneous wavefront sets and homogeneous wavefront sets. First we give a characterization of homogeneous wavefront sets in polar coordinates: \begin{prop}\label{prop_hwf_polar}
For $u\in L^2 (M; \Omega^{1/2})$, $x\neq 0$ and $\xi\in\mathbb{R}^n$, the following statements are equivalent.
\begin{enumerate}[label=(\roman*)]
\item $(x, \xi)\not\in \rmop{HWF}(u)$.
\item There exist polar coordinates $\varphi: U\to V$, cylindrical function $\chi\in C^\infty (\mathbb{R}^n)$ with $\rmop{supp}\chi \subset U$ and $\chi=1$ near the set $\{ \lambda x \mid \lambda\geq 1\}$, and $a\in C_c^\infty (V)$ with $a=1$ near $\Psi (x, \xi)$ such that
\begin{equation}\label{eq_hwf_polar}
\|a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)\varphi_*(\chi u)\|_{L^2(\mathbb{R}^n; \Omega^{1/2})}=O(\hbar^\infty)
\end{equation}
holds.
\end{enumerate} \end{prop}
We compare symbols $a(\hbar r, \theta, \hbar \rho, \hbar \eta)$ in \eqref{eq_hwf_manifold_definition} and $a(\hbar r, \theta, \hbar \rho, \hbar^2 \eta)$ in \eqref{eq_hwf_polar}. Let us pay attention to the $\eta$ variable. The support of $a(\hbar r, \theta, \hbar \rho, \hbar \eta)$ is included in $|\eta-\hbar^{-1}\eta_0|\leq O(\hbar^{-1})$, whereas that of $a(\hbar r, \theta, \hbar \rho, \hbar^2 \eta)$ is included in $|\eta-\hbar^{-2}\eta_0|\leq O(\hbar^{-2})$ ($\tilde\Psi(x_0, \xi_0)=(r_0, \theta_0, \rho_0, \eta_0)$). In particular, if $\eta_0=0$, then the support of $a(\hbar r, \theta, \hbar \rho, \hbar \eta)$ is included in the region where $a(\hbar r, \theta, \hbar \rho, \hbar^2 \eta)=1$ for sufficiently small $\hbar>0$. Thus we have the following corollary, noting that $\tilde\Psi^{-1}(r_0, \theta_0, \rho_0, 0)=(x_0, (\xi_0\cdot \hat x_0)\hat x_0)$ with $\hat x_0:=x_0/|x_0|$: \begin{coro}\label{coro_cylindrical_homogeneous}
Let $u\in L^2(\mathbb{R}^n; \Omega^{1/2})$. We define a homogeneous wavefront set of half-density $u=\tilde u |\mathrm{d} x|^{1/2}$ as that of the function $\tilde u(x)$. Then, for $x\neq 0$, $(x, \xi)\in \rmop{WF}\nolimits^\mathrm{rh} (u)$ implies $(x, (\xi\cdot \hat x) \hat x)\in \rmop{HWF}(u)$ where $\hat x:=x/|x|$. \end{coro} We also prove Corollary \ref{coro_cylindrical_homogeneous} in Section \ref{subs_cylindrical_homogeneous}. Combining Theorem \ref{theo_main_rhwf} and Corollary \ref{coro_cylindrical_homogeneous}, we immediately obtain a propagation of homogeneous wavefront sets on Euclidean spaces:
\begin{coro}
Let $M=\mathbb{R}^n$ with the usual Euclidean metric, $\Psi(x)=(|x|, x/|x|)$, and $H=-\triangle/2+V$ with $|\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime}V|\leq C_\alpha$ for all multiindices $\alpha=(\alpha_0, \alpha^\prime)\in \mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0}^{n-1}$. Let $(x(t), \xi(t))=(x(0)+t\xi_\infty, \xi_\infty)$ be a free classical orbit. Then, for $u\in L^2(\mathbb{R}^n; \Omega^{1/2})$ and $t_0>0$, $(x(0), \xi(0))\in\rmop{WF}(u)$ implies $(t_0\xi_\infty, \xi_\infty)\in \rmop{HWF}(e^{-it_0 H}u)$. \end{coro}
There are many studies of Schr\"odinger equations on manifolds. For example, the Schr\"odinger propagator on scattering manifolds is studied by Hassell and Wunsch \cite{Hassell-Wunsch05}, and K. Ito and Nakamura \cite{Ito-Nakamura09} generalized the result of \cite{Hassell-Wunsch05}. Microlocal analysis on asymptotically hyperbolic spaces is studied by, for instance, Bouclet \cite{Bouclet11-1,Bouclet11-2}, S\'{a} Barreto \cite{SaBarreto05}, and Melrose, S\'{a} Barreto and Vasy \cite{Melrose-SaBaretto-Vasy14}. Our idea of microlocal analysis in polar coordinates is inspired by Bouclet \cite{Bouclet11-1,Bouclet11-2}.
We describe the outline of the proof of the main theorem (Theorem \ref{theo_main_rhwf}) in Section \ref{sect_outline_proof}, where we reduce the proof to three key propositions (Theorem \ref{theo_symbol_aim}, Theorem \ref{theo_hwf_quantization} and Proposition \ref{prop_wf_quantization}). In Section \ref{sect_classical_mechanics}, we prove the existence of the asymptotic angle and momentum $(\theta_\infty, \rho_\infty, \eta_\infty)$. We develop the pseudodifferential calculus necessary for our aim in Section \ref{sect_psido_manifolds}; in particular, we prove two of the key propositions (Theorem \ref{theo_hwf_quantization} and Proposition \ref{prop_wf_quantization}) in Section \ref{subs_quantization_wf_hwf}. Finally, in Section \ref{sect_heisenberg_derivative}, we estimate the Heisenberg derivatives of the operators constructed in Section \ref{sect_outline_proof} and prove the remaining key proposition (Theorem \ref{theo_symbol_aim}).
\fstep{Notation}For derivatives, we use notations $D_{x_j}:=-i\partial_{x_j}$ and multiindices $D^\alpha:=D_{x_1}^{\alpha_1}\cdots D_{x_n}^{\alpha_n}$. We also denote $\jbracket{x}:=\sqrt{1+|x|^2}$. As in the definition \eqref{eq_defi_psi_lift} of $\tilde\Psi$ , for a diffeomorphism $\varphi: U\to V$, we denote the canonical mapping associated with $\varphi$ by $\tilde\varphi: T^*U \to T^*V$, $\tilde\varphi (x, \xi):=(\varphi (x), d\varphi^{-1}(\varphi (x))^* \xi)$.
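For instance, if $\varphi$ is the polar coordinate map on $\mathbb{R}^2\setminus \{0\}$, $\varphi(x)=(|x|, \theta)$ with $x=(r\cos\theta, r\sin\theta)$, then a direct computation of $\mathrm{d} \varphi^{-1}(\varphi (x))^* \xi$ gives
\[
\tilde\varphi (x, \xi)=\Bigl(|x|, \theta, \xi\cdot \tfrac{x}{|x|}, x_1\xi_2-x_2\xi_1\Bigr),
\]
i.e., the dual variables of $(r, \theta)$ are the radial momentum and the angular momentum. This is the coordinate picture behind the lift $\tilde\Psi$.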
\section{Outline of proof}\label{sect_outline_proof}
We prove our main theorem by following the argument in Nakamura \cite{Nakamura05}.
We construct symbols connecting wave functions at time $t=0$ and those at time $t=t_0$ by the following procedure.
\initstep\step\label{step_construction} We take a function $\chi\in C_c^\infty ([0, \infty))$ such that $\chi\geq 0$, $\chi^\prime (x)\leq 0$ and \[
\chi(x)=
\begin{cases}
1 & \text{if } x\leq 1, \\
0 & \text{if } x\geq 2.
\end{cases}\] Since $\chi$ is constant near $x=0$, $\chi$ belongs to $C^\infty(\mathbb{R})$. Take polar coordinates $\varphi: U\to V$ near $\Psi^{-1}(r_0, \theta_\infty)$ where $r_0\gg 1$. We take sufficiently small constants $\delta_0, \delta_1, \delta_2, \ldots>0$ and $\lambda \in (0, 1]$ and consider \begin{equation}\label{eq_defi_chi4}
\underbrace{\chi\left(\frac{|r-r(t)|}{4\delta_j t}\right)}_{=:\chi_{1j}}
\underbrace{\chi\left(\frac{|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right)}_{=:\chi_{2j}}
\underbrace{\chi\left(\frac{|\rho-\rho(t)|}{\delta_j-t^{-\lambda}}\right)}_{=:\chi_{3j}}
\underbrace{\chi\left(\frac{|\eta-\eta(t)|}{\delta_j -t^{-\lambda}} \right)}_{=:\chi_{4j}} \end{equation} for sufficiently large $t$. Denote the range of $t$ by $[T_0, \infty)$ with a sufficiently large $T_0>0$. We pull $\chi_{1j}\chi_{2j}\chi_{3j} \chi_{4j}$ back by the canonical coordinates $\tilde\varphi: T^*U\to T^*V$ induced by $\varphi: U\to V$ and define \begin{equation} \label{eq_defi_psi-1}
\tilde\psi_j(t, x, \xi):=\tilde\varphi^*(\chi_{1j} \chi_{2j} \chi_{3j} \chi_{4j})\in C^\infty (T^*U; \mathbb{R}). \end{equation} Since the support of $\tilde\psi_j$ is included in $T^*U$, we extend $\tilde\psi_j$ to a smooth function on $T^*M$ by defining $\tilde\psi_j=0$ outside $T^*U$.
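Note that, since $\chi$ vanishes on $[2, \infty)$, on the support of the product \eqref{eq_defi_chi4} we have
\[
|r-r(t)|\leq 8\delta_j t, \quad |\theta-\theta(t)|\leq 2\delta_j, \quad |\rho-\rho(t)|\leq 2\delta_j, \quad |\eta-\eta(t)|\leq 2\delta_j,
\]
so $\tilde\psi_j(t, \cdot)$ is localized near the classical orbit, with an $r$-window growing linearly in $t$.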
\step We take a cutoff function $\alpha\in C^\infty (\mathbb{R})$ which satisfies $\alpha^\prime\geq 0$ and \[\alpha(t)= \begin{cases}
0 & \text{if } t\leq T_0, \\
1 & \text{if } t\geq T_0+1. \end{cases}\] We define $\psi_j(t, x, \xi)$ as the solution to the transport equation \begin{equation}\label{eq_transport}
\begin{split}
&\partial_t \psi_j +\{ \psi_j, h_0\}=\alpha(t)(\partial_t \tilde\psi_j +\{ \tilde\psi_j, h_0\}), \\
&\psi_j(T_0+1, x, \xi)=\tilde\psi_j(T_0+1, x, \xi).
\end{split}
\end{equation}
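Note that $\alpha(t)=0$ for $t\leq T_0$, so on $[0, T_0]$ the equation \eqref{eq_transport} reduces to $\partial_t \psi_j+\{ \psi_j, h_0\}=0$. Writing $\Phi^s$ for the Hamiltonian flow of $h_0$, this means that $\psi_j$ is transported along the free flow:
\[
\psi_j(0, x, \xi)=\psi_j(T_0, \Phi^{T_0}(x, \xi)).
\]
This observation is used in the proof of Theorem \ref{theo_main_rhwf} below.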
\step We choose positive constants $c_1, c_2, \ldots$ and construct a symbol $\tilde a(\hbar, t, r, \theta, \rho, \eta)$ such that \begin{equation}\label{eq_definition_symbol}
\tilde a(\hbar, t)\sim \sum_{j=1}^\infty c_j\hbar^j t \psi_j(\hbar, t) \end{equation}
by the Borel theorem. We quantize symbols $\psi_0(\hbar^{-1}t)$ and $\tilde a(\hbar; \hbar^{-1}t)$ (the procedure of quantization is in Definition \ref{defi_quantization}) and obtain quantized operators $\mathop{\mathrm{Op}}\nolimits_\hbar \psi_0(\hbar^{-1}t)$ and $\mathop{\mathrm{Op}}\nolimits_\hbar (\tilde a(\hbar, \hbar^{-1}t))$. We define an operator $A_\hbar (t)$ as \begin{equation}\label{eq_defi_op_at}
A_\hbar (t):=\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(\hbar^{-1}t))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(\hbar^{-1}t))+\mathop{\mathrm{Op}}\nolimits_\hbar (\tilde a(\hbar, \hbar^{-1}t)). \end{equation}
Now we state two key lemmas for the proof of the main theorem (Theorem \ref{theo_main_rhwf}). The first lemma states the positivity of the Heisenberg derivative modulo $O(\hbar^\infty)$ of the time-dependent operator $A_\hbar (t)$.
\begin{theo}\label{theo_symbol_aim} If we take suitable $\delta_0, \delta_1, \delta_2, \ldots>0$, $\lambda \in (0, 1]$, $T_0>0$ and $c_1, c_2, \ldots>0$ in the above construction procedure, then the following statements hold. \begin{enumerate}[label=(\roman*)]
\item $\tilde a(\hbar, t)\in S^{-2}_\mathrm{cyl}(T^*M)$ and forms a bounded family in $S^{-2}_\mathrm{cyl}(T^*M)$.
\item For any $k\geq 0$, the inequality
\begin{equation}\label{eq_positive_heisenberg_derivative}\partial_t A_\hbar (t)-i[A_\hbar (t), H]\geq O_{L^2\to L^2}(\hbar^k)
\end{equation}
holds uniformly in $0\leq t \leq t_0$. \end{enumerate} \end{theo}
The second lemma states that the operator which appeared in the definition of radially homogeneous wavefront sets (Definition \ref{defi_hwf_manifolds}) is approximated by $A_\hbar (t)$.
\begin{theo}\label{theo_hwf_quantization}
Let polar coordinates $\varphi: U\to V$ in $E$, $a\in C_c^\infty (T^*V)$ and a cylindrical function $\chi\in C^\infty (M)$ satisfy the conditions in Definition \ref{defi_hwf_manifolds} except for \eqref{eq_hwf_manifold_definition}. If we take the constants $\delta_0, \delta_1, \cdots >0$ in Step \ref{step_construction} sufficiently small, then we have
\[
A_\hbar (t_0)-A_\hbar (t_0)\chi \varphi^* a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)(\varphi_*\chi)\varphi_*=O_{L^2\to L^2}(\hbar^\infty).
\] \end{theo}
In addition to the above key lemmas, we state a technical lemma on the wavefront sets in order to describe the proof of the main theorem briefly.
\begin{prop}\label{prop_wf_quantization}
Let $(x_0, \xi_0)\in T^*M\setminus 0$ and $u\in L^2(M; \Omega^{1/2})$. If there exists a symbol $a\in C_c^\infty (T^*M)$ such that $a=1$ near $(x_0, \xi_0)$ and $\mathop{\mathrm{Op}}\nolimits_\hbar (a)u=O_{L^2}(\hbar^\infty)$, then $(x_0, \xi_0)\not\in \rmop{WF}(u)$. \end{prop}
We can prove Theorem \ref{theo_main_rhwf} from Theorem \ref{theo_symbol_aim}, Proposition \ref{prop_wf_quantization} and Theorem \ref{theo_hwf_quantization} as follows.
\begin{proof}[Proof of Theorem \ref{theo_main_rhwf}] Assume $\tilde\Psi^{-1}(\rho_\infty t_0, \theta_\infty, \rho_\infty, \eta_\infty)\not\in \rmop{WF}\nolimits^\mathrm{rh} (e^{-it_0H}u)$. By considering $\partial_t (e^{itH}A_\hbar (t)e^{-itH})=e^{itH}(\partial_t A_\hbar (t)-i[A_\hbar (t), H])e^{-itH}$, we have \begin{equation}\label{eq_expectation_estimate}
\begin{split}
\jbracket{A_\hbar (0)u, u}
&=\jbracket{A_\hbar (t_0)e^{-it_0H}u, e^{-it_0H}u} \\
&\quad -\int_0^{t_0} \jbracket{(\partial_t A_\hbar (s)-i[A_\hbar (s), H])e^{-isH}u, e^{-isH}u}\, \mathrm{d} s \\
&\leq \jbracket{A_\hbar (t_0)e^{-it_0H}u, e^{-it_0H}u}+O(\hbar^k)
\end{split} \end{equation} for any $k\geq 0$ by \eqref{eq_positive_heisenberg_derivative}. We consider $\jbracket{A_\hbar (0)u, u}$ and $\jbracket{A_\hbar (t_0)e^{-it_0H}u, e^{-it_0H}u}$ respectively.
\fstep{$\bm{\jbracket{A_\hbar (0)u, u}}$}Since $A_\hbar(0)=\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(0))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(0))$ by \eqref{eq_definition_symbol}, we have \begin{equation}\label{eq_initial_wf}
\jbracket{A_\hbar (0)u, u}=\|\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(0))u\|_{L^2}^2. \end{equation}
\fstep{$\bm{\jbracket{A_\hbar (t_0)e^{-it_0H}u, e^{-it_0H}u}}$}We set $(x_\infty, \xi_\infty):=\tilde\Psi^{-1} (\rho_\infty t_0, \theta_\infty, \rho_\infty, \eta_\infty)$. Since $(x_\infty, \xi_\infty)\not\in \rmop{WF}\nolimits^\mathrm{rh} (e^{-it_0 H}u)$, there exist polar coordinates $\varphi: U\to V$, a cylindrical cutoff $\chi\in C^\infty (M)$ and $a\in C_c^\infty (T^*V)$ which satisfy the conditions in Definition \ref{defi_hwf_manifolds}. We put $B_\hbar =\chi\varphi^*a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)(\varphi_*\chi)\varphi_*$. By Theorem \ref{theo_hwf_quantization}, we can take sufficiently small $\delta_0, \delta_1, \cdots>0$ such that $A_\hbar (t_0)-A_\hbar (t_0)B_\hbar=O_{L^2\to L^2}(\hbar^\infty)$. Since $B_\hbar e^{-it_0 H}u=O_{L^2}(\hbar^\infty)$ by the definition of radially homogeneous wavefront sets (Definition \ref{defi_hwf_manifolds}), we have \begin{equation}
\label{eq_at0_neg}
A_\hbar (t_0)e^{-it_0 H}u=O_{L^2}(\hbar^\infty). \end{equation}
\fstep{Conclusion}Combining \eqref{eq_initial_wf}, \eqref{eq_expectation_estimate} and \eqref{eq_at0_neg}, we have \[
\| \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(0))u\|_{L^2}^2\leq O(\hbar^\infty). \] We recall that $(x(t), \xi(t))$ is a solution to the Hamilton equation with respect to the free Hamiltonian \eqref{eq_free_hamiltonian}. Then \[
\frac{\mathrm{d}}{\mathrm{d} t}(\tilde\psi_0(t, x(t), \xi(t)))=(\partial_t \tilde\psi_0 +\{ \tilde\psi_0, h_0\})(t, x(t), \xi(t)). \] Moreover $\tilde\psi_0(t, x(t), \xi(t))=1$ for $t\geq T_0$ by the construction \eqref{eq_defi_chi4}. Thus \[
(\partial_t \tilde\psi_0 +\{ \tilde\psi_0, h_0\})(t, x(t), \xi(t))=0, \quad \forall t\geq T_0. \] Hence, by the transport equation \eqref{eq_transport}, $\psi_0(t, x(t), \xi(t))$ is a solution to the initial value problem $\partial_t (\psi_0(t, x(t), \xi(t)))=0$, $\psi_0(T_0+1, x(T_0+1), \xi(T_0+1))=1$. Thus $\psi_0(t, x(t), \xi(t))=1$ for all $t\geq 0$. In particular we have $\psi_0(0, x(0), \xi(0))=1$. Since the Hamiltonian flow is a family of diffeomorphisms, the same argument applies to nearby orbits, and hence $\psi_0(0, x, \xi)=1$ near $(x(0), \xi(0))$. Hence by Proposition \ref{prop_wf_quantization}, we obtain $(x_0, \xi_0)\not\in\rmop{WF} (u)$. \end{proof}
\section{Classical mechanics}\label{sect_classical_mechanics}
The only purpose of this section is the proof of the existence of asymptotic angle and momentum:
\begin{theo}\label{theo_classical_estimate}
Assume Assumption \ref{assu_classical}. Let $\tilde\Psi^{-1}(r(t), \theta(t), \rho(t), \eta(t))$ be a nontrapping classical orbit (i.e., $r(t)\to\infty$ as $t\to \infty$) with respect to the free Hamiltonian \eqref{eq_free_hamiltonian}. Then the asymptotic angle and momentum $(\rho_\infty, \theta_\infty, \eta_\infty):=\lim_{t\to\infty} (\rho(t), \theta(t), \eta(t))\in (0, \infty)\times T^*S$ exists. Moreover, there exists a constant $C>0$ such that the estimate
\[ \rho_\infty t-C \leq r(t) \leq \rho_\infty t+C\]
holds for all sufficiently large $t$. \end{theo}
The proof of Theorem \ref{theo_classical_estimate} is separated to several steps. In the following, $(r(t), \theta(t), \rho(t), \eta(t))$ is a classical orbit satisfying the assumption of Theorem \ref{theo_classical_estimate}.
We record the explicit form of the radial component of the Hamilton equation with respect to the free Hamiltonian \eqref{eq_free_hamiltonian}: \begin{equation}
\label{eq_hamilton_equation_radial}
\frac{\mathrm{d} r}{\mathrm{d} t}=c(r, \theta)^{-2}\rho, \quad \frac{d\rho}{\mathrm{d} t}=\frac{\partial_r c}{c^3}\rho^2-\frac{1}{2}\frac{\partial}{\partial r}h^*(r, \theta, \eta) \end{equation} We also note the energy conservation law $h_0 (x(t), \xi (t))=E_0$. The total energy $E_0$ is positive by the nontrapping condition.
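In the sequel we use repeatedly the elementary bounds which follow from the energy conservation law and the nonnegativity of both terms in $2h_0=c^{-2}\rho^2+h^*$:
\[
c(r(t), \theta(t))^{-1}|\rho(t)|\leq \sqrt{2E_0}, \qquad h^*(r(t), \theta(t), \eta(t))\leq 2E_0.
\]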
\begin{lemm}
\label{lemm_asymp_radial}
The asymptotic radial momentum $\rho_\infty:=\lim_{t\to\infty}\rho(t)$ exists and equals $\sqrt{2E_0}$. \end{lemm}
\begin{proof}
\initstep\step We prove that $\rho(t)>0$ for sufficiently large $t$. The classical Mourre type estimate \eqref{eq_classical_mourre} implies
\begin{equation}\label{eq_frho_0}
\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))\rho(t)) \geq f^\prime (r(t))(2E_0-Cr(t)^{-1-\mu})
\end{equation} By $f^\prime>0$, the nontrapping condition $r(t)\to \infty$ and $E_0>0$, the right hand side of \eqref{eq_frho_0} is positive for sufficiently large $t$. Thus $f(r(t))\rho (t) \geq f(r(T))\rho (T)$ for $t\geq T\gg 1$.
We have to find a large $T$ such that $f(r(T))\rho (T)>0$. Suppose that there exists $T^\prime>0$ such that $\rho (T)\leq 0$ for all $T\geq T^\prime$. Then the Hamilton equation \eqref{eq_hamilton_equation_radial} implies \[
r(T)=r(T^\prime)+\int_{T^\prime}^T c(r(t), \theta (t))^{-2}\rho (t)\, \mathrm{d} t\leq r(T^\prime) \] for all $T\geq T^\prime$. This contradicts the nontrapping condition $r(T)\to \infty$. Thus, for any $T^\prime>0$, there exists $T\geq T^\prime$ such that $\rho(T)>0$. Hence we obtain $\rho (t)\geq f(r(T))\rho (T)/f(r(t))>0$ for all $t\geq T$.
\step We employ the classical Mourre type estimate \eqref{eq_classical_mourre} again. For $\rho >0$ and $r\gg 1$, we have \[
\{f\rho, h_0\}\geq f^\prime (r)(2h_0-Cr^{-1-\mu})\geq \frac{2h_0-Cr^{-1-\mu}}{c(r, \theta)^{-1}\sqrt{2h_0}}\{ f, h_0\} \] by $c^{-1}\rho \leq \sqrt{2h_0}$. Thus \begin{equation}
\label{eq_dt_rho_below}
\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))\rho (t))\geq \frac{2E_0-Cr(t)^{-1-\mu}}{c(r(t), \theta(t))^{-1}\sqrt{2E_0}}\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))). \end{equation} Take an arbitrarily small $\varepsilon>0$. Since \[
\lim_{t\to\infty} \frac{2E_0-Cr(t)^{-1-\mu}}{c(r(t), \theta(t))^{-1}\sqrt{2E_0}}
=\sqrt{2E_0}, \] \eqref{eq_dt_rho_below} implies \[
\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))\rho (t))\geq ( \sqrt{2E_0}-\varepsilon)\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))) \] for sufficiently large $t$. This differential inequality shows \[
f(r(t))\rho(t)\geq ( \sqrt{2E_0}-\varepsilon)f(r(t))+C \] for some constant $C\in\mathbb{R}$ and sufficiently large $t$. Dividing both sides by $f(r(t))>0$ and taking the limit inferior as $t\to \infty$, we have \[
\liminf_{t\to\infty} \rho (t)\geq \sqrt{2E_0}-\varepsilon \] by $f(r)\geq C^{-1}r^{c_0}$ and $r(t)\to \infty$ as $t\to\infty$. Since $\varepsilon>0$ is arbitrary, we can take the limit $\varepsilon\to 0$ and obtain $\liminf_{t\to\infty} \rho (t)\geq \sqrt{2E_0}$. Combining this with $\rho \leq c(r, \theta)\sqrt{2h_0}$ and $\lim_{r\to\infty}c(r, \theta)=1$ (Assumption \ref{assu_classical} \ref{assu_sub_short_range}), we obtain \[
\sqrt{2E_0}\leq \liminf_{t\to\infty} \rho(t)\leq \limsup_{t\to \infty} \rho (t)\leq \limsup_{t\to\infty} c(r(t), \theta (t))\sqrt{2E_0}=\sqrt{2E_0}. \qedhere \] \end{proof}
\begin{lemm}\label{lemm_rho1}
We have an asymptotic behavior
\begin{equation}\label{eq_lemm_rho1}
\rho(t)=\rho_\infty+O(t^{-1-\delta}) \quad (t\to\infty)
\end{equation}
for any $0<\delta<\min\{ 2c_0-1, \mu\}$. \end{lemm}
\begin{proof}
We define $a(r, \rho):=f(r)^2(\sqrt{2h_0}-\rho)$. A direct calculation shows that
\begin{equation}\label{eq_rho_1}
\{ a, h_0 \}
=2\frac{ c^{-2}f^\prime}{f}\rho a-f^2\{\rho, h_0\}.
\end{equation}
A simple calculation shows that classical Mourre estimate \eqref{eq_classical_mourre} is equivalent to \begin{equation}
\label{eq_classical_mourre_alt}
\{ \rho, h_0\} \geq \frac{f^\prime (r)}{f(r)}\left(2h_0-c^{-2}\rho^2-Cr^{-1-\mu}\right). \end{equation}
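Indeed, by the Leibniz rule for the Poisson bracket and $\partial_\rho h_0=c^{-2}\rho$,
\[
\{ f\rho, h_0\}=\{ f, h_0\}\rho+f\{ \rho, h_0\}=c^{-2}f^\prime (r)\rho^2+f(r)\{ \rho, h_0\},
\]
so \eqref{eq_classical_mourre} and \eqref{eq_classical_mourre_alt} are obtained from each other by dividing by $f(r)>0$ and renaming the constant $C$.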
By \eqref{eq_classical_mourre_alt}, we have
\begin{equation}\label{eq_rho_2}
\begin{split}
\{ \rho , h_0\}
&\geq \frac{f^\prime}{f}(2h_0-c^{-2}\rho^2-Cr^{-1-\mu}) \\
&=\frac{c^{-2}f^\prime}{f}(\sqrt{2h_0}+\rho)(\sqrt{2h_0}-\rho)-C\frac{f^\prime r^{-1-\mu}}{f}+\frac{2h_0 f^\prime}{f}(1-c^{-2}) \\
&\geq \frac{c^{-2}f^\prime}{f^3}(\sqrt{2h_0}+\rho)a-C\frac{f^\prime r^{-1-\mu}}{f}(1+h_0).
\end{split}
\end{equation}
Here we used the short range condition $|c-1|=O(r^{-1-\mu})$ (Assumption \ref{assu_classical} \ref{assu_sub_short_range}). Combining \eqref{eq_rho_1} and \eqref{eq_rho_2}, we have
\begin{equation}\label{eq_rho_3}
\begin{split}
\{ a, h_0 \}
&\leq \frac{c^{-2}f^\prime}{f}\underbrace{(2\rho-\sqrt{2h_0}-\rho)a}_{=-f^{-2}a^2\leq 0}+ Cff^\prime r^{-1-\mu} (1+h_0)\\
&\leq Cff^\prime r^{-1-\mu} (1+h_0)
=\frac{C(1+h_0)}{2c^{-2}\rho}\{ f^2 r^{-1-\mu}, h_0\}+(1+\mu)f^2 r^{-2-\mu}.
\end{split}
\end{equation}
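Here, in the last equality, we also used the identity
\[
\{ f^2 r^{-1-\mu}, h_0\}=c^{-2}\rho\, \partial_r (f^2 r^{-1-\mu})=c^{-2}\rho\left( 2ff^\prime r^{-1-\mu}-(1+\mu)f^2 r^{-2-\mu}\right),
\]
which holds since $f^2 r^{-1-\mu}$ depends only on $r$ and $\partial_\rho h_0=c^{-2}\rho$; the harmless numerical factors in front of the last term are absorbed into $C$.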
In the following we only consider $(r, \theta, \rho, \eta)$ on the energy surface $\{h_0=E_0\}$ such that $\rho$ is sufficiently close to $\rho_\infty=\sqrt{2E_0}$. Then \eqref{eq_rho_3} becomes
\begin{equation}
\label{eq_rho_4}
\{ a, h_0 \}\leq C\{ f^2 r^{-1-\mu}, h_0\}+Cf(r)^2 r^{-2-\mu}.
\end{equation}
Fix a large $R>0$. Since $r\mapsto f(r)/r^{c_0}$ is monotonically increasing by \eqref{eq_ineq_f_logbdd} and
\[
\frac{\mathrm{d}}{\mathrm{d} r}\left( \frac{f(r)}{r^{c_0}}\right)=\frac{f^\prime - c_0 f r^{-1}}{r^{c_0}}\geq 0,
\]
we have the inequality $f(r)/f(R)\leq (r/R)^{c_0}$ for $R\geq r$. Thus \eqref{eq_rho_4} becomes
\[
\{ a, h_0 \}\leq C\{ f^2 r^{-1-\mu}, h_0\}+Cf(R)^2 R^{-2c_0}r^{-2-\mu+2c_0}.
\]
Now we substitute $(r, \theta, \rho, \eta)=(r(t), \theta (t), \rho (t), \eta(t))$. Then we have
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} t}(a(r(t), \rho(t)))
&\leq C\frac{\mathrm{d}}{\mathrm{d} t}(f(r(t))^2 r(t)^{-1-\mu})+Cf(R)^2 R^{-2c_0} r(t)^{-2-\mu+2c_0}
\end{align*}
for large $t$ such that $r(t)\leq R$. Note that $t\mapsto r(t)$ is monotonically increasing for large $t$ by $\rho_\infty>0$ and the Hamilton equation \eqref{eq_hamilton_equation_radial}. Integrating both sides in $[T, t]$ and substituting $R=r(t)$, we obtain
\[
a(r(t), \rho (t))\leq Cf(r(t))^2 r(t)^{-1-\mu}+Cf(r(t))^2 r(t)^{-2c_0}\int_T^t r(s)^{-2-\mu+2c_0}\, \mathrm{d} s +C.
\]
Recall $a(r, \rho)=f(r)^2(\sqrt{2h_0}-\rho)$. Then we have
\begin{equation}
\label{eq_rho1_duhamel}
\sqrt{2E_0}-\rho (t)\leq Cr(t)^{-1-\mu}+C r(t)^{-2c_0}\int_T^t r(s)^{-2-\mu+2c_0}\, \mathrm{d} s +Cf(r(t))^{-2}.
\end{equation}
Since $f(r)\geq C^{-1}r^{c_0}$ by \eqref{eq_ineq_f_logbdd} and
\begin{align*}
&r(t)^{-2c_0}\int_T^t r(s)^{-2-\mu+2c_0}\, \mathrm{d} s \\
&=r(t)^{-2c_0}\int_T^t \frac{r(s)^{-2-\mu+2c_0}}{c(r(s), \theta (s))^{-2}\rho (s)}\frac{\mathrm{d} r}{\mathrm{d} s}(s)\, \mathrm{d} s
\leq Cr(t)^{-2c_0}\int_T^t r(s)^{-2-\mu+2c_0}\frac{\mathrm{d} r}{\mathrm{d} s}(s)\, \mathrm{d} s \\
&\leq
\begin{cases}
Cr(t)^{-1-\mu}+Cr(t)^{-2c_0} & \text{if } \mu+1\neq 2c_0, \\
Cr(t)^{-1-\mu}\log r(t) & \text{if } \mu+1=2c_0
\end{cases}
\\
&\leq Cr(t)^{-1-\delta}
\end{align*}
for $0<\delta<\min \{ \mu, 2c_0-1\}$, \eqref{eq_rho1_duhamel} becomes
\[
\sqrt{2E_0}-\rho (t)\leq Cr(t)^{-1-\mu}+Cr(t)^{-1-\delta}+Cr(t)^{-2c_0}\leq Cr(t)^{-1-\delta}.
\]
We can replace $r(t)$ by $t$ since
\begin{align*}
&\lim_{t\to\infty}\frac{r(t)}{t} \\
&=\lim_{t\to\infty} \frac{t-T}{t}\int_0^1 c(r(T+(t-T)s), \theta (T+(t-T)s))^{-2}\rho (T+(t-T)s)\, \mathrm{d} s \\
&=\rho_\infty
\end{align*}
by the Hamilton equation \eqref{eq_hamilton_equation_radial} and the Lebesgue dominated convergence theorem. Thus $\sqrt{2E_0}-\rho (t)\leq O(t^{-1-\delta})$.
The converse inequality is easily proved by the estimate
\[
\rho(t)\leq c\sqrt{2E_0}\leq (1+Cr(t)^{-1-\mu})\sqrt{2E_0}\leq \sqrt{2E_0}+O(t^{-1-\mu}). \qedhere
\] \end{proof}
\begin{proof}[Proof of Theorem \ref{theo_classical_estimate}]
We already proved the existence of the asymptotic radial momentum $\rho_\infty=\lim_{t\to\infty}\rho(t)$ in Lemma \ref{lemm_asymp_radial}. Noting the integrability of $t^{-1-\delta}$ in $[1, \infty)$ and integrating \eqref{eq_lemm_rho1} together with the Hamilton equation \eqref{eq_hamilton_equation_radial} and the short range condition on $c$, we obtain
\[
\rho_\infty t-C \leq r(t)\leq \rho_\infty t+C.
\]
In the following we prove the existence of asymptotic angle and angular momentum $(\theta_\infty, \eta_\infty):=\lim_{t\to\infty}(\theta (t), \eta(t))\in T^*S$. Let $d_S$ be the distance function on $S$ associated with the Riemannian metric $h(1, \theta, \mathrm{d} \theta)$. For $t\geq s$, we have
\begin{equation}\label{eq_theta_cauchy} d_S(\theta(t), \theta(s))\leq \int_s^t \left|\frac{\mathrm{d} \theta}{\mathrm{d} t}(\sigma)\right|_{h(1)}\, \mathrm{d} \sigma\leq \int_s^t f(r(\sigma))^{-1}\left|\frac{\mathrm{d} \theta}{\mathrm{d} t}(\sigma)\right|_{h(r(\sigma))}\, \mathrm{d} \sigma.
\end{equation}
The angle component of the Hamilton equation is
\begin{equation}\label{eq_hamilton_equation_angle}
\frac{\mathrm{d} \theta_j}{\mathrm{d} t}(t)=\sum_{k=1}^{n-1} h^{jk}(r(t), \theta (t))\eta_k (t),
\end{equation}
which implies
\[
\left|\frac{\mathrm{d} \theta}{\mathrm{d} t}(t)\right|_{h(r)}=|\eta|_{h^*(r)}.
\]
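Indeed, writing $h_{jk}$ for the coefficients of the metric $h(r, \theta, \mathrm{d} \theta)$, \eqref{eq_hamilton_equation_angle} gives
\[
\left|\frac{\mathrm{d} \theta}{\mathrm{d} t}\right|_{h(r)}^2=\sum_{j, k=1}^{n-1} h_{jk}\frac{\mathrm{d} \theta_j}{\mathrm{d} t}\frac{\mathrm{d} \theta_k}{\mathrm{d} t}=\sum_{l, m=1}^{n-1} h^{lm}\eta_l \eta_m=|\eta|_{h^*(r)}^2.
\]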
By this relation and the energy conservation law, we have
\[
\left|\frac{\mathrm{d} \theta}{\mathrm{d} t}(\sigma)\right|_{h(r(\sigma))}=|\eta(\sigma)|_{h^*(r(\sigma))}=\sqrt{2E_0-c^{-2}\rho^2}
\]
by \eqref{eq_ineq_model_bdd}. We apply Lemma \ref{lemm_rho1} and obtain
\[\sqrt{2E_0-c^{-2}\rho^2}\leq \sqrt{Ct^{-1-\delta}}=Ct^{-(1+\delta)/2}.
\]
Combining this with $f(r)\geq C^{-1}r^{c_0}$, which follows from \eqref{eq_ineq_f_logbdd}, \eqref{eq_theta_cauchy} becomes
\[
d_S(\theta(t), \theta(s))\leq C\int_s^t r(\sigma)^{-c_0}\sigma^{-(1+\delta)/2}\, \mathrm{d} \sigma \leq C\int_s^t \sigma^{-c_0-(1+\delta)/2}\, \mathrm{d} \sigma.
\]
Since $c_0+(1+\delta)/2>1$ by $c_0>1/2$, the integrand $\sigma^{-c_0-(1+\delta)/2}$ is integrable in $[1, \infty)$. Thus
\[
d_S(\theta(t), \theta(s))\to 0 \quad (t\geq s \to \infty).
\]
Hence the limit
\[
\theta_\infty:=\lim_{t\to\infty}\theta (t)
\]
exists by the completeness of compact Riemannian manifolds.
Finally we consider the $\eta$ component. Take local coordinates near $\theta_\infty$. We take sufficiently large $T>0$ such that $\theta(t)$ is in the coordinate neighborhood for all $t\geq T$. In the associated canonical coordinates, we have
\begin{align*}\left|\frac{\mathrm{d} \eta_i}{\mathrm{d} t}\right|
&\leq Cr^{-1-\mu}+Ch^*(r, \theta, \eta)= Cr^{-1-\mu}+C(2E_0-c^{-2}\rho^2) \\
&\leq Ct^{-1-\mu}+C(2E_0+Ct^{-1-\mu}-(\rho_\infty^2-Ct^{-1-\delta}))\leq Ct^{-1-\delta}.
\end{align*}
Thus
\[ |\eta_i(t)-\eta_i(s)|\leq C\int_s^t \sigma^{-1-\delta}\, \mathrm{d} \sigma=C(s^{-\delta}-t^{-\delta})\to 0 \quad (t\geq s \to \infty)\]
for $t\geq s\geq T$. Hence the limit $\eta_{\infty, i}:=\lim_{t\to\infty}\eta_i(t)$ exists. We pull $(\eta_{\infty, 1}, \ldots, \eta_{\infty, n-1})$ back by the canonical coordinates and obtain the asymptotic angular momentum $\eta_\infty\in T^*_{\theta_\infty}S$. \end{proof}
\section{Pseudodifferential operators on manifolds}\label{sect_psido_manifolds}
\subsection{Symbol classes}\label{subs_cylindrical_class}
We first introduce a suitable symbol class for analyzing the symbols defined as \eqref{eq_defi_op_at}. In order to deduce global properties of pseudodifferential operators ($L^2$ boundedness for example), we need to control behavior of symbols near infinity ($r\to \infty$).
\begin{defi}
Let $m\in \mathbb{R}$. A function $a\in C^\infty (T^*M)$ belongs to $S_\mathrm{cyl}^m(T^*M)$ if it satisfies the following conditions.
\begin{itemize}
\item For every local coordinate $\varphi: U\to V$, the push forward $\tilde\varphi_*a$ by the induced canonical coordinate $\tilde\varphi: T^*U\to T^*V$ belongs to $S^m_\mathrm{loc}(T^*V)$. Here $S^m_\mathrm{loc}(T^*V)\subset C^\infty (V\times \mathbb{R}^n)$ stands for the set of all functions which satisfy
\[
\jbracket{\xi}^{-m+|\beta|}\partial_x^\alpha \partial_\xi^\beta a\in L^\infty (K\times \mathbb{R}^n)
\]
for all compact subsets $K\subset V$ and all multiindices $\alpha, \beta\in \mathbb{Z}_{\geq 0}^n$.
\item For any polar coordinate $\varphi: U\to V$ in the end $E$, the push forward $\tilde\varphi_*a(r, \theta, \rho, \eta)$ by the induced canonical coordinate $\tilde\varphi: T^*U\to T^*V$ satisfies
\[
\jbracket{\xi}^{-m+|\beta|}\partial_{r, \theta}^\alpha \partial_\xi^\beta (\tilde \varphi_* a) \in L^\infty ([\delta, \infty)\times K^\prime\times \mathbb{R}^n)
\]
for all $\delta>0$, compact subsets $K^\prime\subset \mathbb{R}^{n-1}$ with $\mathbb{R}_+\times K^\prime \subset V$, and all multiindices $\alpha, \beta\in \mathbb{Z}_{\geq 0}^n$. Here $\xi=(\rho, \eta)$.
\end{itemize} \end{defi}
\begin{rema*}
We use the subscript ``$\mathrm{cyl}$'' in $S^m_\mathrm{cyl}(T^*M)$ since symbols in $S^m_\mathrm{cyl}(T^*M)$ can be regarded as a natural symbol class on $E \simeq \mathbb{R}_+\times S$ equipped with the cylindrical metric $\mathrm{d} r^2+\mathrm{d} \theta^2$. \end{rema*}
It is useful to introduce a terminology for describing supports of symbols up to $O(\hbar^\infty)$. We define it following \cite{Ito06}.
\begin{defi}\label{defi_support_modulo}
An $\hbar$-dependent symbol $a(\hbar; x, \xi)\in S^m_\mathrm{cyl}(T^*M)$ satisfies $\rmop{supp}a\subset K$ modulo $O(\hbar^\infty)$ if there exists a possibly $\hbar$-dependent symbol $\tilde a(\hbar; x, \xi)\in S^m_\mathrm{cyl}(T^*M)$ such that $\rmop{supp} \tilde a\subset K$ and $a-\tilde a=O_{S^0_\mathrm{cyl}}(\hbar^\infty)$. \end{defi}
We explain the motivation for considering the support modulo $O(\hbar^\infty)$ and also recall some facts on the symbol calculus on Euclidean spaces. Let $S^m(T^*\mathbb{R}^n)$ be the Kohn-Nirenberg symbol class \[
S^m(T^*\mathbb{R}^n):=\{ a\in C^\infty (T^*\mathbb{R}^n) \mid |\partial_x^\alpha \partial_\xi^\beta a(x, \xi)|\leq C_{\alpha\beta}\jbracket{\xi}^{m-|\beta|}\}. \] Similarly to Definition \ref{defi_support_modulo}, we say that $\rmop{supp}a\subset K$ modulo $O(\hbar^\infty)$ for $a(\hbar; x, \xi)\in S^m(T^*\mathbb{R}^n)$ if there exists a symbol $\tilde a(\hbar; x, \xi)$ such that $\rmop{supp}\tilde a\subset K$ and $a-\tilde a=O_{S^0(T^*\mathbb{R}^n)}(\hbar^\infty)$. What we recall here are the composition of symbols and the change of variables.
\fstep{Composition of symbols}For $a\in S^{m_1}(T^*\mathbb{R}^n)$ and $b\in S^{m_2}(T^*\mathbb{R}^n)$, we can calculate a symbol $(a\# b)(\hbar; x, \xi)\in S^{m_1+m_2}(T^*\mathbb{R}^n)$ such that \[
a^\mathrm{w}(x, \hbar D)b^\mathrm{w}(x, \hbar D)=(a\# b)^\mathrm{w}(x, \hbar D) \] and \begin{align*}
&(a\# b)(\hbar; x, \xi) \\
&=\sum_{k=0}^N \frac{\hbar^k}{k!}\left(\frac{D_x\cdot D_{\xi^\prime}-D_{x^\prime} \cdot D_\xi}{2i}\right)^k (a(x, \xi)b(x^\prime, \xi^\prime))\biggr|_{\substack{x^\prime=x \\ \xi^\prime=\xi}} \\
&\quad +O_{S^{m_1+m_2-N-1}(T^*\mathbb{R}^n)}(\hbar^{N+1}) \end{align*} for any integer $N\geq 0$ (see Theorem 9.5 in \cite{Zworski12}). Each term \[
\left(\frac{D_x\cdot D_{\xi^\prime}-D_{x^\prime} \cdot D_\xi}{2i}\right)^k (a(x, \xi)b(x^\prime, \xi^\prime))\biggr|_{\substack{x^\prime=x \\ \xi^\prime=\xi}} \] is supported in $\rmop{supp}(ab)$. Thus if we define a symbol $c\in S^{m_1+m_2}(T^*\mathbb{R}^n)$ as an asymptotic sum \[
c(\hbar; x, \xi)\sim \sum_{k=0}^\infty \frac{\hbar^k}{k!}\left(\frac{D_x\cdot D_{\xi^\prime}-D_{x^\prime} \cdot D_\xi}{2i}\right)^k (a(x, \xi)b(x^\prime, \xi^\prime))\biggr|_{\substack{x^\prime=x \\ \xi^\prime=\xi}} \] by the Borel theorem, then $\rmop{supp}c \subset \rmop{supp}(ab)$ and $a\# b-c=O_{S^0(T^*\mathbb{R}^n)}(\hbar^\infty)$. Hence we have $\rmop{supp}(a\# b)\subset \rmop{supp} (ab)$ modulo $O(\hbar^\infty)$.
\fstep{Changing variables}For a suitable diffeomorphism $\gamma: \mathbb{R}^n\to \mathbb{R}^n$ and a symbol $a\in S^m(T^*\mathbb{R}^n)$, we have \[
\gamma_* a^\mathrm{w}(x, \hbar D)\gamma^*=\tilde a^\mathrm{w}(x, \hbar D) \] for some symbol $\tilde a(\hbar; x, \xi)\in S^m(T^*\mathbb{R}^n)$ which has an asymptotic expansion \[
\tilde a(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j \tilde a_j(x, \xi), \quad \tilde a_j\in S^{m-j}(T^*\mathbb{R}^n) \] and \[
\tilde a_0(x, \xi)=a(\gamma (x), \mathrm{d} \gamma (x)^{-1*}\xi)=:\tilde\gamma^*a(x, \xi) \] and $\rmop{supp}a_j \subset \rmop{supp}a_0$ for all $j\geq 0$. This implies $\rmop{supp}\tilde a\subset \rmop{supp}\tilde\gamma^*a$ modulo $O(\hbar^\infty)$. Here $\mathrm{d} \gamma (x)^{-1*}\xi=\eta$ if and only if \[
\xi_j=\sum_{k=1}^n \frac{\partial\gamma_k}{\partial x_j}(x)\eta_k. \] Furthermore, since we consider pseudodifferential operators acting on half-densities (we will explain them in Section \ref{subs_psido_half_densities}), we have $\tilde a_1=0$. Thus \[
\tilde a=\tilde\gamma^*a+O_{S^{m-2}(T^*\mathbb{R}^n)}(\hbar^2). \] For more details, see Theorem 9.9 and Theorem 9.10 in \cite{Zworski12} or Proposition E.10 in \cite{Dyatlov-Zworski19}.
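As a simple illustration of the leading term $\tilde\gamma^*a$, consider the dilation $\gamma(x)=\lambda x$ with $\lambda>0$. Then $\frac{\partial \gamma_k}{\partial x_j}=\lambda \delta_{jk}$, so $\xi=\lambda\eta$, i.e., $\mathrm{d} \gamma (x)^{-1*}\xi=\lambda^{-1}\xi$, and
\[
\tilde\gamma^*a(x, \xi)=a(\lambda x, \lambda^{-1}\xi),
\]
which is the canonical transformation of $T^*\mathbb{R}^n$ induced by $\gamma$.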
\subsection{Pseudodifferential operators acting on half-densities}\label{subs_psido_half_densities} Before defining pseudodifferential operators, we recall basic facts on half-densities on manifolds. For a manifold $M$, a line bundle $\pi: \Omega^{1/2}(M)\to M$ is defined as follows, and sections of this line bundle are called half-densities on $M$:
\fstep{Fiber}A fiber $\pi^{-1}(x)$ is a complex vector space spanned by functions of the form \[
|\omega|^{1/2}: (v_1, \ldots, v_n)\in (T_xM)^n\longmapsto |\omega (v_1, \ldots, v_n)|^{1/2}\in \mathbb{R}, \quad \omega\in \Lambda^n T^*_xM. \]
\fstep{Local trivialization}Each local coordinates $\varphi=(x_1, \ldots, x_n): U\to V$ on $M$ induces a local line bundle isomorphism \[
(x, v|\mathrm{d} x_1 \wedge \cdots \wedge \mathrm{d} x_n|^{1/2})\in \pi^{-1}(U)\overset{\simeq}{\longmapsto} (x, v)\in U\times \mathbb{C}. \]
\fstep{}We denote the space of all smooth compactly supported half-densities by $C_c^\infty (M; \Omega^{1/2})$.
We employ two manipulations for half-densities.
\fstep{Inner product}Similarly to the definition of integration of differential forms, we define \begin{equation}\label{eq_inner_half_densities}
\jbracket{u, v}:=\sum_{\iota\in I}\int_{\mathbb{R}^n} ((\kappa_\iota \tilde u \overline{\tilde v})\circ \varphi_\iota^{-1})(x)\, \mathrm{d} x. \end{equation} Here \begin{itemize}
\item $\{\varphi_\iota: U_\iota\to V_\iota\}_{\iota\in I}$ is a locally finite atlas;
\item $\{\kappa_\iota\in C^\infty(M)\}_{\iota\in I}$ is a partition of unity subordinate to $\{ U_\iota \}_{\iota\in I}$;
\item $u=\tilde u |\mathrm{d} x_1\wedge \cdots \wedge \mathrm{d} x_n|^{1/2}$ and $v=\tilde v |\mathrm{d} x_1\wedge \cdots \wedge \mathrm{d} x_n|^{1/2}$ in $U_\iota$ are compactly supported half-densities. \end{itemize} \eqref{eq_inner_half_densities} is independent of the choice of an atlas and a partition of unity.
The inner product \eqref{eq_inner_half_densities} induces an $L^2$-norm $\|u\|_{L^2}:=\jbracket{u, u}^{1/2}$. We define $L^2(M; \Omega^{1/2})$ as the completion of $C_c^\infty (M; \Omega^{1/2})$ with respect to the norm $\|\cdot \|_{L^2}$.
\fstep{Pull back}For a smooth map $f: M\to M$, we define a pull back $f^*u$ for $u\in C_c^\infty (M; \Omega^{1/2})$ as \[
f^*u (x)(v_1, \ldots, v_n):=u(f(x))(df(x)v_1, \ldots, df (x)v_n) \] for $x\in M$ and $v_1, \ldots , v_n \in T_xM$. If $f(x_1, \ldots, x_n)=(y_1, \ldots, y_n)$ locally, then
f^* (\tilde u |\mathrm{d} y|^{1/2})=(\tilde u\circ f)(x)\left|\det \frac{\partial f}{\partial x}(x)\right|^{1/2}|\mathrm{d} x|^{1/2}. \]
All pull back manipulations by diffeomorphisms are unitary operators with respect to the inner product \eqref{eq_inner_half_densities}.
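Indeed, if $f$ is a diffeomorphism and $u=\tilde u|\mathrm{d} y|^{1/2}$ is supported in a single coordinate patch, then
\[
\|f^*u\|_{L^2}^2=\int_{\mathbb{R}^n} |\tilde u(f(x))|^2 \left|\det \frac{\partial f}{\partial x}(x)\right|\, \mathrm{d} x=\int_{\mathbb{R}^n} |\tilde u(y)|^2\, \mathrm{d} y=\|u\|_{L^2}^2
\]
by the change of variables $y=f(x)$; the general case follows by a partition of unity.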
If $f: M\to M$ is a diffeomorphism, then we define a push forward of half-densities as $f_*:=(f^{-1})^*$.
\subsection{Properties of pseudodifferential operators}
For the definition of a quantization procedure, we take a finite atlas $\{ \varphi_\iota: U_\iota \to V_\iota \}$ on $M$ as below. \begin{enumerate}
\item We cover the compact subset $r^{-1}(0)$ by finite atlas $\{\varphi_\iota: U_\iota \to V_\iota \}_{\iota\in I_K}$. Here $U_\iota\subset M$, $V_\iota\subset \mathbb{R}^n$ and $\# I_K<\infty$.
\item We cover the compact manifold $S$ by finite atlas $\{\varphi^\prime_\iota: U^\prime_\iota \to V^\prime_\iota \}_{\iota\in I_\infty}$. Here $U^\prime_\iota\subset S$, $V^\prime_\iota\subset \mathbb{R}^{n-1}$ and $\# I_\infty<\infty$. We set $U_\iota:=\mathbb{R}_+\times U^\prime_\iota$, $V_\iota:=\mathbb{R}_+\times V^\prime_\iota$ and $\varphi_\iota:=\rmop{id}\times \varphi^\prime_\iota: U_\iota \to V_\iota$.
\item We define $I:=I_K \cup I_\infty$ (assuming that $I_K \cap I_\infty =\varnothing$). \end{enumerate}
Furthermore, we take a partition of unity $\{\kappa_\iota\in C^\infty (M)\}_{\iota\in I}$ subordinate to $\{\varphi_\iota \}_{\iota\in I}$ such that the following statements hold. \begin{itemize}
\item For $\iota\in I_K$, $\kappa_\iota\in C_c^\infty (U_\iota)$.
\item For $\iota\in I_\infty$, $\kappa_\iota$ is a cylindrical function (see Definition \ref{defi_conical}) with $\rmop{supp}\kappa_\iota \subset U_\iota$.
\item $\sum_{\iota\in I} \kappa_\iota=1$. \end{itemize}
In the following we fix the atlas $\{\varphi_\iota\}_{\iota\in I}$ and the partition of unity $\{\kappa_\iota\}_{\iota\in I}$.
\begin{defi}[(Non-canonical) quantization]\label{defi_quantization}
We fix cylindrical functions $\chi_\iota\in C^\infty(M)$ with $\rmop{supp} \chi_\iota \subset U_\iota$ and $\chi_\iota =1$ near $\rmop{supp}\kappa_\iota$. For a symbol $a\in S^m_\mathrm{cyl}(T^*M)$ and a function $u\in C_c^\infty (M; \Omega^{1/2})$, we define a quantization as
\begin{equation}\label{eq_psido_cyl} \mathop{\mathrm{Op}}\nolimits_\hbar (a)u:=\sum_{\iota\in I}\chi_\iota \varphi_\iota^*(\tilde\varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)\varphi_{\iota*}(\chi_\iota u).
\end{equation}
Here pseudodifferential operators acting on half-densities on Euclidean spaces are defined as
\[
a^\mathrm{w}(x, \hbar D)(\tilde u |\mathrm{d} x|^{1/2}):=\frac{1}{(2\pi\hbar)^n}\int_{\mathbb{R}^{2n}} a\left(\frac{x+y}{2}, \xi\right) e^{i\xi \cdot (x-y)/\hbar}\tilde u(y)\, \mathrm{d} y\mathrm{d} \xi |\mathrm{d} x|^{1/2}.
\] \end{defi}
Examples which we keep in mind are quantizations of polynomials in the momentum variables. In polar coordinates, the quantization of a polynomial in $S^m_\mathrm{cyl}(T^*M)$ is a sum of $a_\beta (r, \theta)D_r^{\beta_0}D_\theta^{\beta^\prime}$ with bounded $a_\beta(r, \theta)$ in the sense that \[
|\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime}a_\beta (r, \theta)|\leq C_{\alpha\beta}. \]
\begin{rema*}
If $(M, g)$ is a Riemannian manifold and $\varphi: U\to V$ are local coordinates, then $\varphi^* a^\mathrm{w}(x, \hbar D)\varphi_*$ is
\begin{align*}
&\varphi^*a^\mathrm{w}(x, \hbar D)\varphi_*(\tilde u |\mathrm{vol}_g|^{1/2}) \\
&:=\frac{g(x)^{-1/4}}{(2\pi\hbar)^n}\int_{\mathbb{R}^{2n}} a\left(\frac{x+y}{2}, \xi\right) e^{i\xi \cdot (x-y)/\hbar}\tilde u(y)g(y)^{1/4}\, \mathrm{d} y\mathrm{d} \xi |\mathrm{vol}_g|^{1/2}.
\end{align*}
Here $g(x)$ is defined by the relation $\mathrm{vol}_g(x)=g(x)^{1/2}\mathrm{d} x$. The difference between pseudodifferential operators acting on half-densities and those acting on functions is the existence of the factor $g(x)^{-1/4}g(y)^{1/4}$. \end{rema*}
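This factor simply reflects the identification of half-densities with functions: since $|\mathrm{vol}_g|^{1/2}=g(x)^{1/4}|\mathrm{d} x|^{1/2}$ in the coordinates, the coefficient of $u=\tilde u|\mathrm{vol}_g|^{1/2}$ with respect to $|\mathrm{d} x|^{1/2}$ is $g^{1/4}\tilde u$, so the operator acting on $\tilde u$ is the conjugated operator
\[
g^{-1/4}\, a^\mathrm{w}(x, \hbar D)\, g^{1/4},
\]
which is exactly the formula displayed above.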
We employ composition and commutator of pseudodifferential operators, boundedness on the $L^2$ space and the sharp G\aa rding inequality in proof of the main theorem.
\begin{theo}\label{theo_psido_composition}
Let $m_1, m_2\in \mathbb{R}$. For $a\in S^{m_1}_\mathrm{cyl}(T^*M)$, $b\in S^{m_2}_\mathrm{cyl}(T^*M)$, the following statements hold.
\begin{enumerate}
\item The composition $\mathop{\mathrm{Op}}\nolimits_\hbar (a)\mathop{\mathrm{Op}}\nolimits_\hbar (b)$ is represented by some symbol $c(\hbar; x, \xi)\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ as
\begin{equation}\label{eq_composition}
\mathop{\mathrm{Op}}\nolimits_\hbar (a) \mathop{\mathrm{Op}}\nolimits_\hbar (b)=\mathop{\mathrm{Op}}\nolimits_\hbar \left(ab+\frac{i\hbar}{2}\{a, b\}+\hbar^2 c\right)+O_{L^2\to L^2}(\hbar^\infty).
\end{equation}
The symbol $c(\hbar; x, \xi)$ satisfies $\rmop{supp}c(\hbar; x, \xi)\subset \rmop{supp}(ab)$ mod $O(\hbar^\infty)$.
\item In particular, the commutator $[\mathop{\mathrm{Op}}\nolimits_\hbar (a), \mathop{\mathrm{Op}}\nolimits_\hbar (b)]$ is represented by some $c\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ as
\begin{equation}\label{eq_commutator}
[\mathop{\mathrm{Op}}\nolimits_\hbar (a), \mathop{\mathrm{Op}}\nolimits_\hbar (b)]=\mathop{\mathrm{Op}}\nolimits_\hbar (i\hbar\{a, b\}+\hbar^2 c)+O_{L^2\to L^2}(\hbar^\infty).
\end{equation}
The symbol $c=c (\hbar; x, \xi)$ satisfies $\rmop{supp}c(\hbar; x, \xi)\subset \rmop{supp}(ab)$ mod $O(\hbar^\infty)$.
\end{enumerate} \end{theo}
\begin{rema*}
One can prove that the $O_{L^2\to L^2}(\hbar^\infty)$ term in \eqref{eq_composition} and \eqref{eq_commutator} has smooth integral kernels. However, we do not use it in this paper. \end{rema*}
\begin{theo}
\label{theo_L2_bdd_cyl}
For any symbol $a\in S^0_\mathrm{cyl}(T^*M)$, the operator $\mathop{\mathrm{Op}}\nolimits_\hbar (a)$ is bounded on $L^2(M; \Omega^{1/2})$.
Furthermore, if the symbol $a$ also depends on some parameter $\tau\in \Omega$ and is uniformly bounded in $S^0_\mathrm{cyl}(T^*M)$, then the operator norm $\|\mathop{\mathrm{Op}}\nolimits_\hbar (a)\|_{L^2\to L^2}$ is uniformly bounded with respect to $\tau\in\Omega$. \end{theo}
\begin{proof}
Each term $\chi_\iota \varphi_\iota^*(\tilde\varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)\varphi_{\iota*}\chi_\iota$ in the definition \eqref{eq_psido_cyl} of pseudodifferential operators is bounded on $L^2(M; \Omega^{1/2})$ by the Calder\'{o}n-Vaillancourt theorem for $\tilde\varphi_{\iota*}(\kappa_\iota a)\in S^0 (T^*\mathbb{R}^n)$ and the unitarity of the pull back $\varphi^*$ and the push forward $\varphi_*$. Thus the finite sum \eqref{eq_psido_cyl} over $\iota\in I$ is also a bounded operator on $L^2(M; \Omega^{1/2})$. \end{proof}
\begin{theo}[Sharp G\aa rding inequality]\label{theo_sharp_garding}
For every $a\in S^0_\mathrm{cyl}(T^*M)$ with $\rmop{Re} a\geq 0$, there exists a real symbol $b(\hbar; x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ such that the inequality
\begin{equation}\label{eq_sharp_garding}
\rmop{Re} \mathop{\mathrm{Op}}\nolimits_\hbar (a)\geq -\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (b)+O_{L^2\to L^2}(\hbar^\infty)
\end{equation}
holds and $\rmop{supp}b \subset \rmop{supp} a$ mod $O(\hbar^\infty)$.
Furthermore, if the symbol $a$ also depends on some parameter $\tau\in \Omega$ and is uniformly bounded in $S^0_\mathrm{cyl}(T^*M)$, then the $O_{L^2\to L^2}(\hbar^\infty)$ term in \eqref{eq_sharp_garding} and the symbol $b\in S^0_\mathrm{cyl}(T^*M)$ itself are uniformly bounded with respect to $\tau\in\Omega$. \end{theo} We prove Theorem \ref{theo_psido_composition} and Theorem \ref{theo_sharp_garding} in Subsection \ref{subs_proof_psido}.
In order to treat $[A_\hbar (t), H]$ in \eqref{eq_positive_heisenberg_derivative}, we represent the semiclassical Laplacian \[
-\hbar^2\triangle_g (u|\mathrm{vol}_g|^{1/2}):=-\hbar^2\rmop{div}(\rmop{grad} u)|\mathrm{vol}_g|^{1/2} \] as a pseudodifferential operator:
\begin{theo}\label{theo_laplacian_psido}
We have
\[
-\hbar^2\triangle_g=\mathop{\mathrm{Op}}\nolimits_\hbar (|\xi|_{g^*}^2+\hbar^2 V_g),
\]
where $V_g\in C^\infty (M)$ is defined as
\[
V_g(x):=\sum_{\iota\in I}\sum_{j, k=1}^n\left(\frac{1}{4}\partial_{x_j}\partial_{x_k}(\kappa_\iota g_\iota^{jk})+g_\iota^{-1/4}\partial_{x_j}(\kappa_\iota g_\iota^{jk}\partial_{x_k}g_\iota^{1/4})\right),
\]
$g_\iota^{jk}$, $g_\iota$ are defined on $U_\iota$ as
\[
(g_\iota^{jk}):=(g^\iota_{jk})^{-1}, \quad
g_\iota:=\det (g^\iota_{jk}), \quad
\text{where } g=\sum_{j, k=1}^n g^\iota_{jk}\mathrm{d} x_j \mathrm{d} x_k.
\]
Furthermore, $V_g$ belongs to the symbol class $S^0_\mathrm{cyl}(T^*M)$. \end{theo}
\begin{proof}
We decompose $-\hbar^2\triangle_g$ into
\begin{equation}\label{eq_laplacian_decomposition}
-\hbar^2 \triangle_g u=-\hbar^2\sum_{\iota\in I} \chi_\iota \rmop{div} (\kappa_\iota \rmop{grad} (\chi_\iota u)).
\end{equation}
Direct calculation of $(\tilde \varphi_{\iota*}(\kappa_\iota |\xi|_{g^*}^2))^\mathrm{w}(x, \hbar D)$ shows that
\[
\varphi_\iota^*(\tilde \varphi_{\iota*}(\kappa_\iota |\xi|_{g^*}^2))^\mathrm{w}(x, \hbar D)\varphi_{\iota *}u=-\hbar^2\rmop{div} (\kappa_\iota \rmop{grad}u)-\hbar^2 V_\iota u
\]
and
\begin{equation}\label{eq_geometric_potential_local}
V_\iota(x):=\frac{1}{4}\partial_{x_j}\partial_{x_k}(\kappa_\iota g_\iota^{jk})
+g_\iota^{-1/4}\partial_{x_j}(\kappa_\iota g_\iota^{jk}\partial_{x_k}g_\iota^{1/4}).
\end{equation}
Thus, by \eqref{eq_laplacian_decomposition}, we have
\[
-\hbar^2 \triangle_g u=\sum_{\iota \in I} \chi_\iota \varphi_\iota^*(\tilde \varphi_{\iota*}(\kappa_\iota |\xi|_{g^*}^2))^\mathrm{w}(x, \hbar D)\varphi_{\iota *}(\chi_\iota u)
+\hbar^2\sum_{\iota \in I} V_\iota \chi_\iota^2 u.
\]
Since $\chi_\iota =1$ on $\rmop{supp}V_\iota$, we have $V_\iota \chi_\iota^2=V_\iota$; together with $V_g=\sum_{\iota\in I}V_\iota$ this implies that
\[
\hbar^2\sum_{\iota \in I} V_\iota \chi_\iota^2 u=\hbar^2 V_g u=\sum_{\iota \in I}\chi_\iota \varphi_\iota^* (\tilde\varphi_{\iota*}(\kappa_\iota \hbar^2 V_g))^\mathrm{w}\varphi_{\iota*}(\chi_\iota u)=\mathop{\mathrm{Op}}\nolimits_\hbar (\hbar^2 V_g)u.
\]
We have to show that $V_g\in S^0_\mathrm{cyl}(T^*M)$. The problem is the behavior of derivatives of $g^{jk}_\iota$ and $g_\iota$ for $\iota\in I_\infty$. By \eqref{eq_metric_polar}, we have
\begin{align*}
g^{jk}_\iota&=
\begin{cases}
c(r, \theta)^{-2} & \text{if } j=k=1, \\
h^{j-1, k-1}_\iota (r, \theta) & \text{if } j, k\geq 2, \\
0 & \text{otherwise},
\end{cases} \\
g_\iota &=c(r, \theta)^2 h_\iota (r, \theta),
\end{align*}
where
\[
(h^{jk}_\iota):=(h^\iota_{jk})^{-1}, \quad h_\iota:=\det (h^\iota_{jk}), \quad h(r, \theta, \mathrm{d} \theta)=\sum_{j, k=1}^{n-1} h^\iota_{jk}(r, \theta)\mathrm{d} \theta_j \mathrm{d} \theta_k.
\]
Assumption \ref{assu_classical} \ref{assu_sub_short_range} (in particular $c\to 1$ as $r\to \infty$) and Assumption \ref{assu_higher_derivative} imply the boundedness of
\[
|\partial_{r, \theta}^\alpha \partial_{x_j}\partial_{x_k}(\kappa_\iota g_\iota^{jk})|
\]
and
\[
|\partial_{r, \theta}^\alpha (g_\iota^{-1/4}\partial_{x_j}(\kappa_\iota g_\iota^{jk}\partial_{x_k}g_\iota^{1/4}))|
\]
in \eqref{eq_geometric_potential_local}. This shows that $V_\iota\in S^0_\mathrm{cyl}(T^*M)$ for $\iota\in I_\infty$. \end{proof}
\begin{rema*}
$V_g(x)$ depends on choices of atlas on $M$. \end{rema*}
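We also note that, in the polar coordinates of the end $E$, the principal symbol in Theorem \ref{theo_laplacian_psido} is
\[
|\xi|_{g^*}^2=c(r, \theta)^{-2}\rho^2+h^*(r, \theta, \eta)=2h_0(r, \theta, \rho, \eta),
\]
so Theorem \ref{theo_laplacian_psido} expresses $-\hbar^2\triangle_g$ there as a quantization of twice the free Hamiltonian \eqref{eq_free_hamiltonian}, up to the $O(\hbar^2)$ potential term $\hbar^2 V_g$.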
\subsection{Proof of Theorem \ref{theo_psido_composition} and Theorem \ref{theo_sharp_garding}}\label{subs_proof_psido}
It is useful to introduce a notation of pseudodifferential operators associated with locally defined symbols.
\begin{defi}\label{defi_local_psido}
For $a_\iota\in S^m(T^*\mathbb{R}^n)$ and $u\in C_c^\infty (M; \Omega^{1/2})$, we define
\[
\Ophloc (a_\iota)u:=\chi_\iota \varphi_\iota^* a_\iota^\mathrm{w}(x, \hbar D)\varphi_{\iota*}(\chi_\iota u).
\] \end{defi}
The operators in Definition \ref{defi_local_psido} are represented by a quantization of globally defined symbols.
\begin{lemm}\label{lemm_psido_local_global}
Assume that symbols $a_\iota\in S^m(T^*\mathbb{R}^n)$ satisfy $\rmop{supp}a_\iota \subset \pi^{-1}(\rmop{supp} \varphi_{\iota*} \kappa_\iota)$ for all $\iota\in I$, where $\pi: T^*\mathbb{R}^n \to \mathbb{R}^n$ is the natural projection. Then there exists $a(\hbar; x, \xi)\in S^m_\mathrm{cyl}(T^*M)$ such that
\begin{equation}\label{eq_psido_local_global}
\sum_{\iota\in I}\Ophloc (a_\iota)=\mathop{\mathrm{Op}}\nolimits_\hbar (a)+O_{L^2\to L^2}(\hbar^\infty).
\end{equation}
This symbol $a\in S^m_\mathrm{cyl}(T^*M)$ satisfies $\rmop{supp}a\subset \rmop{supp}a_0$ modulo $O(\hbar^\infty)$ where
\[
a_0(x, \xi)=\sum_{\iota\in I} \tilde \varphi_\iota^* a_\iota (x, \xi).
\]
Furthermore, if the symbols $a_\iota$ also depend on some parameter $\tau\in \Omega$ and are uniformly bounded in $S^m(T^*\mathbb{R}^n)$, then the $O_{L^2\to L^2}(\hbar^\infty)$ in \eqref{eq_psido_local_global} are uniformly bounded with respect to $\tau\in\Omega$. \end{lemm}
\begin{proof}
The explicit form of $\mathop{\mathrm{Op}}\nolimits_\hbar (a_0)$ is
\begin{align}
\mathop{\mathrm{Op}}\nolimits_\hbar (a_0)
&=\sum_{U_\iota \cap U_{\iota^\prime}\neq \varnothing}
\chi_\iota \varphi_\iota^* (\tilde\varphi_{\iota*}(\kappa_\iota \tilde\varphi_{\iota^\prime}^*a_{\iota^\prime}))^\mathrm{w}(x, \hbar D)(\varphi_{\iota*}\chi_\iota)\varphi_{\iota*} \nonumber\\
&=\sum_{U_\iota \cap U_{\iota^\prime}\neq \varnothing}
\chi_\iota \varphi_{\iota^\prime}^* (\kappa_\iota a_{\iota^\prime}+O_{S^{m-2}}(\hbar^2))^\mathrm{w}(x, \hbar D)(\varphi_{\iota^\prime *}\chi_\iota)\varphi_{\iota^\prime *} \nonumber\\
&=\sum_{\iota^\prime \in I}
\varphi_{\iota^\prime}^* (a_{\iota^\prime}+O_{S^{m-2}}(\hbar^2))^\mathrm{w}(x, \hbar D)\varphi_{\iota^\prime *}+O_{L^2\to L^2}(\hbar^\infty) \nonumber\\
&=\sum_{\iota\in I} \Ophloc \left( a_\iota-\hbar b_{1, \iota}\right)+O_{L^2\to L^2}(\hbar^\infty) \label{eq_a0_local_global_II}
\end{align}
by changing variables of pseudodifferential operators and the assumption $\rmop{supp}a_\iota \subset \rmop{supp}\varphi_{\iota*}\kappa_\iota$. Here $b_{1, \iota}=b_{1, \iota}(\hbar; x, \xi)\in S^{m-1}(T^*\mathbb{R}^n)$ has an asymptotic expansion
\[
b_{1, \iota}(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{1j, \iota}(x, \xi), \quad b_{1j, \iota}\in S^{m-j-1}(T^*\mathbb{R}^n)
\]
with $\rmop{supp} b_{1j, \iota}\subset \rmop{supp}\varphi_{\iota*}(\kappa_\iota a_0)$.
We repeat the same argument for $b_{10, \iota}(x, \xi)$. If we set
\[
a_1(x, \xi):=-\sum_{\iota\in I} \tilde\varphi_\iota^* b_{10, \iota}(x, \xi),
\]
then we have
\begin{equation}\label{eq_a1_local_global_II}
\mathop{\mathrm{Op}}\nolimits_\hbar (a_1)=\sum_{\iota\in I} \Ophloc \left( b_{10, \iota}-\hbar c_{2, \iota}\right)+O_{L^2\to L^2}(\hbar^\infty).
\end{equation}
Here $c_{2, \iota}(\hbar; x, \xi)\in S^{m-2}(T^*\mathbb{R}^n)$ has an asymptotic expansion
\[
c_{2, \iota}(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j c_{2j, \iota}(x, \xi), \quad c_{2j, \iota}\in S^{m-j-2}(T^*\mathbb{R}^n)
\]
with $\rmop{supp} c_{2j, \iota}\subset \rmop{supp}\varphi_{\iota*}(\kappa_\iota a_0)$.
Summing up \eqref{eq_a0_local_global_II} and \eqref{eq_a1_local_global_II}$\times \hbar$, we obtain
\[
\mathop{\mathrm{Op}}\nolimits_\hbar (a_0+\hbar a_1)=\sum_{\iota\in I} \Ophloc (a_\iota-\hbar^2 b_{2, \iota})+O_{L^2\to L^2}(\hbar^\infty),
\]
where
\[
b_{2, \iota}(\hbar; x, \xi):=\hbar^{-1}(b_{1, \iota}-b_{10, \iota})+c_{2, \iota} \in S^{m-2}(T^*\mathbb{R}^n).
\]
$b_{2, \iota}(\hbar; x, \xi)$ has an asymptotic expansion
\[
b_{2, \iota}(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{2j, \iota}(x, \xi), \quad b_{2j, \iota}\in S^{m-j-2}(T^*\mathbb{R}^n)
\]
with $\rmop{supp} b_{2j, \iota}\subset \rmop{supp}\varphi_{\iota*}(\kappa_\iota a_0)$.
We repeat this argument and construct $a_j\in S^{m-j}_\mathrm{cyl}(T^*M)$ such that
\[
\mathop{\mathrm{Op}}\nolimits_\hbar \left( \sum_{j=0}^N \hbar^j a_j\right)
=\sum_{\iota\in I} \Ophloc (a_\iota-\hbar^{N+1} b_{N+1, \iota})+O_{L^2\to L^2}(\hbar^\infty)
\]
for all $N\in \mathbb{Z}_{\geq 0}$, where $b_{N+1, \iota}(\hbar; x, \xi)\in S^{m-N-1}(T^*\mathbb{R}^n)$ has an asymptotic expansion
\[
b_{N+1, \iota}(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{N+1, j, \iota}(x, \xi), \quad b_{N+1, j, \iota}\in S^{m-j-N-1}(T^*\mathbb{R}^n)
\]
with $\rmop{supp} b_{N+1, j, \iota}\subset \rmop{supp}\varphi_{\iota*}(\kappa_\iota a_0)$.
The desired symbol $a(\hbar; x, \xi)$ is defined as an asymptotic expansion
\[
a(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j a_j(x, \xi)
\]
by Borel's theorem. \end{proof}
\begin{proof}[Proof of Theorem \ref{theo_psido_composition}]
For $u\in C_c^\infty (M; \Omega^{1/2})$, we decompose $\mathop{\mathrm{Op}}\nolimits_\hbar (a)\mathop{\mathrm{Op}}\nolimits_\hbar (b)$ into
\begin{equation}\label{eq_ophab}
\mathop{\mathrm{Op}}\nolimits_\hbar (a)\mathop{\mathrm{Op}}\nolimits_\hbar (b)
=\sum_{U_\iota\cap U_{\iota^\prime}\neq \varnothing}
A_\iota(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime},
\end{equation}
where
\begin{equation}\label{eq_defi_aii}
A_\iota:=\chi_\iota \varphi_\iota^* (\tilde \varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)
\end{equation}
and
\begin{equation}\label{eq_defi_bii}
B_{\iota\iota^\prime}u:=\varphi_{\iota*}\varphi_{\iota^\prime}^* (\tilde \varphi_{\iota^\prime *}(\kappa_{\iota^\prime}b))^\mathrm{w}(x, \hbar D)(\varphi_{\iota^\prime *}(\chi_{\iota^\prime}u)).
\end{equation}
Take cylindrical functions $\chi_\iota^\prime\in C^\infty (M)$ such that $\rmop{supp}\chi_\iota^\prime\subset U_\iota$ and $\chi_\iota^\prime=1$ near $\rmop{supp}\chi_\iota$.
We treat $A_\iota$. In local coordinates,
\begin{align*}
(\varphi_{\iota*}\kappa_\iota)(\tilde \varphi_{\iota*}(\chi_\iota^\prime a))^\mathrm{w}
&=((\varphi_{\iota*}\kappa_\iota)\# (\tilde\varphi_{\iota*}(\chi_\iota^\prime a)))^\mathrm{w} \\
&=\left( \tilde\varphi_{\iota*}\left(\kappa_\iota a+\frac{i\hbar}{2}\{ \kappa_\iota, a\}\right)+O_{S^{m_1-2}}(\hbar^2)\right)^\mathrm{w}.
\end{align*}
Thus
\begin{equation}\label{eq_aii_decomposition}
A_\iota
=\underbrace{\kappa_\iota \varphi_\iota^* (\tilde\varphi_{\iota*}(\chi_\iota^\prime a))^\mathrm{w}}_{=:A_\iota^\prime}
-\frac{i\hbar}{2}\underbrace{\chi_\iota \varphi_\iota^* (\tilde\varphi_{\iota*}\{ \kappa_\iota, a\}+\hbar c_\iota^\prime)^\mathrm{w}}_{=:A_\iota^{\prime\prime}}.
\end{equation}
Here $c_\iota^\prime (\hbar; x, \xi)\in S^{m_1-2}(T^*\mathbb{R}^n)$ satisfies $\rmop{supp}c_\iota^\prime \subset \rmop{supp} \tilde\varphi_{\iota*}(\kappa_\iota a)$ modulo $O(\hbar^\infty)$.
We calculate $A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}$ and $A_\iota^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}$ respectively.
\fstep{Calculation of $\bm{A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}}$}We apply the changing variables for Weyl quantization acting on half densities to $B_{\iota\iota^\prime}$ and obtain
\begin{equation}\label{eq_aipb_wip}
\begin{split}
(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}
&=(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))\varphi_{\iota*}\varphi_{\iota^\prime}^* (\tilde \varphi_{\iota^\prime *}(\chi_\iota^\prime\kappa_{\iota^\prime}b))^\mathrm{w}(x, \hbar D)(\varphi_{\iota^\prime *}(\chi_\iota^\prime\chi_{\iota^\prime}))\varphi_{\iota^\prime *} \\
&\quad+O_{L^2\to L^2}(\hbar^\infty) \\
&=(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))b_{\iota\iota^\prime}^\mathrm{w}(x, \hbar D)(\varphi_{\iota*}(\chi_\iota^\prime \chi_{\iota^\prime}))\varphi_{\iota*}+O_{L^2\to L^2}(\hbar^\infty),
\end{split}
\end{equation}
where
\[
b_{\iota\iota^\prime}(x, \xi)
=(\tilde\varphi_{\iota^\prime}\circ \tilde\varphi_\iota^{-1})^*(\tilde\varphi_{\iota^\prime *}( \chi_\iota^\prime\kappa_{\iota^\prime}b))+\hbar^2 q^\prime_{\iota\iota^\prime}
=\tilde\varphi_{\iota *}( \chi_\iota^\prime \kappa_{\iota^\prime}b)+\hbar^2 q^\prime_{\iota\iota^\prime}
\]
and $q^\prime_{\iota\iota^\prime}(\hbar; x, \xi)\in S^{m_2-2}(T^*\mathbb{R}^n)$ satisfies $\rmop{supp} q^\prime_{ \iota\iota^\prime}\subset \rmop{supp}\tilde\varphi_{\iota*}(\chi_\iota \kappa_{\iota^\prime}b)$ modulo $O(\hbar^\infty)$.
Hence by \eqref{eq_aii_decomposition}, we have \begin{equation}\label{eq_aii1_bii}
\begin{split}
&A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\kappa_\iota \varphi_\iota^* (\tilde\varphi_{\iota*}(\chi_\iota^\prime a))^\mathrm{w}
(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))
(\tilde\varphi_{\iota *}( \chi_\iota^\prime \kappa_{\iota^\prime}b)+\hbar^2 q^\prime_{\iota\iota^\prime})^\mathrm{w}\varphi_{\iota*}(\chi_\iota^\prime \chi_{\iota^\prime})\varphi_{\iota*} \\
&\quad+O_{L^2\to L^2}(\hbar^\infty) \\
&=\kappa_\iota \varphi_\iota^* ((\tilde\varphi_{\iota*}(\chi_\iota^\prime a))\# (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))\# (\tilde\varphi_{\iota *}( \chi_\iota^\prime \kappa_{\iota^\prime}b)+\hbar^2 q^\prime_{\iota\iota^\prime}))^\mathrm{w}\varphi_{\iota*}(\chi_\iota^\prime \chi_{\iota^\prime})\varphi_{\iota*} \\
&\quad+O_{L^2\to L^2}(\hbar^\infty) \\
&=\kappa_\iota \varphi_\iota^* \biggl(\tilde\varphi_{\iota*}\biggl(\chi_\iota \kappa_{\iota^\prime}ab
+\frac{i\hbar}{2}(\{a, b\} \chi_\iota \kappa_{\iota^\prime}+\{a, \chi_\iota \kappa_{\iota^\prime}\} b+\{\chi_\iota, b\}\kappa_{\iota^\prime}a)\biggr)
+\hbar^2 \tilde q^\prime_{\iota\iota^\prime}\biggr)^\mathrm{w}
\\
&\quad (\varphi_{\iota*}(\chi_\iota^\prime \chi_{\iota^\prime}))\varphi_{\iota*}+O_{L^2\to L^2}(\hbar^\infty) \\
&=\kappa_\iota \varphi_\iota^* \biggl(\tilde\varphi_{\iota*}\biggl(\chi_\iota \kappa_{\iota^\prime}ab
+\frac{i\hbar}{2}(\{a, b\} \chi_\iota \kappa_{\iota^\prime}+\{a, \kappa_{\iota^\prime}\} \chi_\iota b)\biggr)
+\hbar^2 \tilde q^\prime_{\iota\iota^\prime}\biggr)^\mathrm{w}
(\varphi_{\iota*}\chi_\iota^\prime )\varphi_{\iota*}
\\
&\quad +O_{L^2\to L^2}(\hbar^\infty).
\end{split} \end{equation} Here $\tilde q^\prime_{\iota\iota^\prime}(\hbar; x, \xi)\in S^{m_2-2}(T^*\mathbb{R}^n)$ satisfies $\rmop{supp} \tilde q^\prime_{ \iota\iota^\prime}\subset \rmop{supp}\tilde\varphi_{\iota*}(\chi_\iota \kappa_{\iota^\prime}ab)$ modulo $O(\hbar^\infty)$. For fixed $\iota\in I$, we sum \eqref{eq_aii1_bii} over $\iota^\prime \in I$ such that $U_\iota\cap U_{\iota^\prime}\neq \varnothing$ and obtain
\begin{equation}\label{eq_aii1_bii_result}
\begin{split}
&\sum_{\iota^\prime: U_\iota \cap U_{\iota^\prime}\neq \varnothing}A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\kappa_\iota \varphi_\iota^* \biggl(\tilde\varphi_{\iota*}\biggl(\chi_\iota ab+\frac{i\hbar}{2}\{a, b\} \chi_\iota\biggr)+\hbar^2 \tilde q^\prime_\iota\biggr)^\mathrm{w}(\varphi_{\iota*}\chi_\iota^\prime)\varphi_{\iota*} \\
&\quad+O_{L^2\to L^2}(\hbar^\infty).
\end{split}
\end{equation}
Here $\tilde q^\prime_\iota:=\sum_{\iota^\prime: U_\iota \cap U_{\iota^\prime}\neq \varnothing}\tilde q^\prime_{\iota\iota^\prime}$.
Since $\tilde q^\prime_\iota=\chi_\iota^2 \tilde q^\prime_\iota+O_{S^0}(\hbar^\infty)$, we can find a symbol $c^\prime_\iota (\hbar; x, \xi)\in S^{m_1+m_2-2}(T^*\mathbb{R}^n)$ which satisfies \begin{align*}
&(\varphi_{\iota*}\kappa_\iota)\left(\tilde\varphi_{\iota*}\left(\chi_\iota ab+\frac{i\hbar}{2}\{a, b\} \chi_\iota\right)+\hbar^2 \tilde q^\prime_\iota \right)^\mathrm{w} \\
&=(\varphi_{\iota*}\chi_\iota)\left(\tilde\varphi_{\iota*}\left(\kappa_\iota ab+\frac{i\hbar}{2}\{a, b\} \kappa_\iota+\frac{i\hbar}{2}\{\kappa_\iota, ab\} \right)+\hbar^2 c^\prime_\iota\right)^\mathrm{w}(\varphi_{\iota*}\chi_\iota) \\
&\quad+O_{L^2\to L^2}(\hbar^\infty) \end{align*} and $\rmop{supp}c^\prime_{\iota}\subset \rmop{supp}\tilde\varphi_{\iota*}(\kappa_\iota ab)$ modulo $O(\hbar^\infty)$. Hence \eqref{eq_aii1_bii_result} becomes \begin{align*}
&\sum_{\iota^\prime: U_\iota \cap U_{\iota^\prime}\neq \varnothing}A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\chi_\iota\varphi_\iota^*\left(\tilde\varphi_{\iota*}\left(\kappa_\iota ab+\frac{i\hbar}{2}\{a, b\} \kappa_\iota+\frac{i\hbar}{2}\{\kappa_\iota, ab\} \right) +\hbar^2 c^\prime_\iota \right)^\mathrm{w}(\varphi_{\iota*}\chi_\iota)\varphi_{\iota*} \\
&\quad+O_{L^2\to L^2}(\hbar^\infty). \end{align*} Summing this up over $\iota\in I$, we obtain \begin{equation}
\label{eq_aii1_bii_final}
\begin{split}
&\sum_{U_\iota \cap U_{\iota^\prime}\neq \varnothing}A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar \left( ab+\frac{i\hbar}{2}\{a, b\} \right)
+\sum_{\iota\in I} \Ophloc \left( \frac{i\hbar}{2}\tilde\varphi_{\iota*}\{\kappa_\iota, ab\}+\hbar^2 c^\prime_\iota\right) \\
&\quad+O_{L^2\to L^2}(\hbar^\infty).
\end{split} \end{equation} Since the support of $\tilde\varphi_{\iota*}\{\kappa_\iota, ab\}$ and $c^\prime_\iota$ is included in $\rmop{supp}\tilde\varphi_{\iota*}(\kappa_\iota ab)$, we can apply Lemma \ref{lemm_psido_local_global} for \eqref{eq_aii1_bii_final} and find a symbol $c^\prime(\hbar; x, \xi)\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ which satisfies \[
\sum_{\iota\in I} \Ophloc \left( \frac{i\hbar}{2}\tilde\varphi_{\iota*}\{\kappa_\iota, ab\}+\hbar^2 c^\prime_\iota\right)=\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (c^\prime)+O_{L^2\to L^2}(\hbar^\infty), \] $\rmop{supp}c^\prime_j\subset \rmop{supp}(ab)$ modulo $O(\hbar^\infty)$ and \[
c^\prime_0(x, \xi)=\sum_{\iota\in I} \frac{i}{2}\{\kappa_\iota , ab\}=0. \]
Thus \eqref{eq_aii1_bii_final} becomes \begin{equation}
\label{eq_aii1_bii_final_2}
\begin{split}
&\sum_{U_\iota \cap U_{\iota^\prime}\neq \varnothing}A_\iota^\prime (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar \left( ab+\frac{i\hbar}{2}\{a, b\} +\hbar^2 (\hbar^{-1}c^\prime)\right)
+O_{L^2\to L^2}(\hbar^\infty).
\end{split} \end{equation}
\fstep{Calculation of $\bm{A_\iota^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}}$} It is enough to calculate the principal term of $A_{\iota\iota^\prime}^{\prime\prime} B_{\iota\iota^\prime}$ in \eqref{eq_aii_decomposition} since $A_{\iota\iota^\prime}^{\prime\prime}$ has a coefficient $\hbar$. By changing variables of the Weyl quantization acting on half-densities, we have \[
A_\iota^{\prime\prime}=\varphi_{\iota^\prime}^* \left( \tilde\varphi_{\iota^\prime *} (\chi_{\iota^\prime}^\prime\{ \kappa_\iota, a\})+O_{S^{m_1-2}}(\hbar)\right)^\mathrm{w}(\varphi_{\iota^\prime}\circ \varphi_\iota^{-1})_*+O_{L^2\to L^2}(\hbar^\infty). \] Hence \begin{align*}
&A_{\iota\iota^\prime}^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime} \\
&=\varphi_{\iota^\prime}^* \left( \tilde\varphi_{\iota^\prime *} (\chi_{\iota^\prime}^\prime\{ \kappa_\iota, a\})+O_{S^{m_1-2}}(\hbar)\right)^\mathrm{w}
(\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))
\left(\tilde \varphi_{\iota^\prime *}(\kappa_{\iota^\prime}b)\right)^\mathrm{w} (\varphi_{\iota^\prime *}\chi_{\iota^\prime})\varphi_{\iota^\prime*} \\
&\quad +O_{L^2\to L^2}(\hbar^\infty) \\
&=\chi_{\iota^\prime}\varphi_{\iota^\prime}^*(\tilde\varphi_{\iota^\prime*} (\kappa_{\iota^\prime}b\{ \kappa_\iota, a\})+\hbar c^{\prime\prime}_{\iota\iota^\prime})^\mathrm{w}(\varphi_{\iota^\prime*}\chi_{\iota^\prime})\varphi_{\iota^\prime*}+O_{L^2\to L^2}(\hbar^\infty). \end{align*} Here $c^{\prime\prime}_{\iota\iota^\prime}(\hbar; x, \xi)\in S^{m_1+m_2-2}(T^*\mathbb{R}^n)$ satisfies $\rmop{supp} c^{\prime\prime}_{\iota\iota^\prime}\subset \rmop{supp}\tilde\varphi_{\iota*}(\kappa_\iota \kappa_{\iota^\prime} ab)$ modulo $O(\hbar^\infty)$. We sum these up over $\iota\in I$ such that $U_\iota\cap U_{\iota^\prime}\neq \varnothing$. Then the terms containing $\{\kappa_\iota, a\}$ vanish and we obtain
\begin{equation}\label{eq_aii2_bii_final}
\sum_{\iota: U_\iota\cap U_{\iota^\prime}\neq \varnothing}A_{\iota\iota^\prime}^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}
=\hbar\chi_{\iota^\prime}\varphi_{\iota^\prime}^*(c^{\prime\prime}_{\iota^\prime})^\mathrm{w}(\varphi_{\iota^\prime*}\chi_{\iota^\prime})\varphi_{\iota^\prime*}+O_{L^2\to L^2}(\hbar^\infty), \end{equation} where $c^{\prime\prime}_{\iota^\prime}:=\sum_{\iota: U_\iota\cap U_{\iota^\prime}\neq \varnothing}c^{\prime\prime}_{\iota\iota^\prime}$. The sum of \eqref{eq_aii2_bii_final} over $\iota^\prime \in I$ is \begin{equation}\label{eq_aii2_bii_final_2}
\sum_{U_\iota\cap U_{\iota^\prime}\neq \varnothing}A_{\iota\iota^\prime}^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}
=\hbar\sum_{\iota^\prime \in I}\Ophloc[\iota^\prime] (c^{\prime\prime}_{\iota^\prime})+O_{L^2\to L^2}(\hbar^\infty). \end{equation} Since $\rmop{supp}c^{\prime\prime}_{\iota^\prime}\subset \rmop{supp} \tilde\varphi_{\iota^\prime *}(\kappa_{\iota^\prime}ab)$, we can apply Lemma \ref{lemm_psido_local_global} and find a symbol $c^{\prime\prime} (\hbar; x, \xi)\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ which satisfies \[
\sum_{\iota^\prime \in I}\Ophloc[\iota^\prime] (c^{\prime\prime}_{\iota^\prime})
=\mathop{\mathrm{Op}}\nolimits_\hbar (c^{\prime\prime})+O_{L^2\to L^2}(\hbar^\infty) \] and $\rmop{supp}c^{\prime\prime}_j \subset \rmop{supp} (ab)$ modulo $O(\hbar^\infty)$. Hence \eqref{eq_aii2_bii_final_2} becomes \begin{equation}
\label{eq_aii2_bii_final_3}
\sum_{U_\iota\cap U_{\iota^\prime}\neq \varnothing}A_{\iota\iota^\prime}^{\prime\prime} (\varphi_{\iota*}(\chi_\iota \chi_{\iota^\prime}))B_{\iota\iota^\prime}
=\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (c^{\prime\prime})+O_{L^2\to L^2}(\hbar^\infty). \end{equation}
\fstep{Conclusion}\eqref{eq_aii_decomposition}, \eqref{eq_aii1_bii_final_2} and \eqref{eq_aii2_bii_final_3} imply \begin{equation}\label{eq_psido_composition_wip}
\begin{split}
&\mathop{\mathrm{Op}}\nolimits_\hbar (a)\mathop{\mathrm{Op}}\nolimits_\hbar (b) \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar \left(ab+\frac{i\hbar}{2}\{a, b\}+\hbar^2 (\hbar^{-1}c^\prime)-\frac{i\hbar}{2}(\hbar c^{\prime\prime})\right)+O_{L^2\to L^2}(\hbar^\infty) \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar \left(ab+\frac{i\hbar}{2}\{a, b\}+\hbar^2 c\right)+O_{L^2\to L^2}(\hbar^\infty)
\end{split} \end{equation} where \[
c(\hbar; x, \xi):=\hbar^{-1}c^\prime(\hbar; x, \xi)-\frac{i}{2}c^{\prime\prime}(\hbar; x, \xi). \] The symbol $c\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ has the desired properties. \end{proof}
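\begin{rema*}
The following commutator expansion, which follows by applying Theorem \ref{theo_psido_composition} to both orderings and using the antisymmetry $\{b, a\}=-\{a, b\}$, is often convenient: for $a$ and $b$ as in Theorem \ref{theo_psido_composition},
\[
[\mathop{\mathrm{Op}}\nolimits_\hbar (a), \mathop{\mathrm{Op}}\nolimits_\hbar (b)]
=\mathop{\mathrm{Op}}\nolimits_\hbar \left(i\hbar \{a, b\}+\hbar^2 \tilde c\right)+O_{L^2\to L^2}(\hbar^\infty)
\]
for some $\tilde c=\tilde c(\hbar; x, \xi)\in S^{m_1+m_2-2}_\mathrm{cyl}(T^*M)$ with $\rmop{supp}\tilde c\subset \rmop{supp}(ab)$ modulo $O(\hbar^\infty)$.
\end{rema*}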
Next we prove the sharp G\aa rding inequality (Theorem \ref{theo_sharp_garding}). We begin with the case of Euclidean spaces.
\begin{theo}[Sharp G\aa rding inequality on Euclidean spaces]\label{theo_sharp_garding_euclid}
For all $a\in S^0(T^*\mathbb{R}^n)$ with $\rmop{Re}a\geq 0$, there exists a symbol $b=b(\hbar)\in S^0(T^*\mathbb{R}^n)$ such that the following statements hold:
\begin{itemize}
\item The inequality
\begin{equation}\label{eq_sharp_garding_euclid}
\rmop{Re}a^\mathrm{w}(x, \hbar D)\geq -\hbar \rmop{Re}b^\mathrm{w}(x, \hbar D)
\end{equation}
holds.
\item $\rmop{supp}b\subset \rmop{supp}a$ modulo $O(\hbar^\infty)$.
\end{itemize}
Furthermore, if the symbol $a$ also depends on some parameter $\tau\in \Omega$ and is uniformly bounded in $S^0(T^*\mathbb{R}^n)$, then the symbol $b\in S^0(T^*\mathbb{R}^n)$ is itself uniformly bounded with respect to $\tau\in\Omega$. \end{theo}
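\begin{rema*}
Since the symbol $b(\hbar)$ constructed in the proof below is bounded in $S^0(T^*\mathbb{R}^n)$ uniformly in $\hbar\in (0, 1]$, the operator $\rmop{Re}b^\mathrm{w}(x, \hbar D)$ is bounded on $L^2(\mathbb{R}^n)$ uniformly in $\hbar$ by the Calder\'on--Vaillancourt theorem. Hence \eqref{eq_sharp_garding_euclid} yields the familiar form of the sharp G\aa rding inequality
\[
\rmop{Re}\jbracket{a^\mathrm{w}(x, \hbar D)u, u}_{L^2(\mathbb{R}^n)}\geq -C\hbar \| u\|_{L^2(\mathbb{R}^n)}^2, \quad u\in L^2(\mathbb{R}^n),
\]
with a constant $C>0$ independent of $\hbar\in (0, 1]$.
\end{rema*}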
To investigate the support of $b(\hbar; x, \xi)$ in Theorem \ref{theo_sharp_garding_euclid}, we recall the FBI transform and its fundamental properties.
\begin{prop}\label{prop_fbi_fundamental}
We define an FBI transform $Fu$ of $u\in \mathscr{S}(\mathbb{R}^n)$ as \[
F u(x, \xi):=\frac{2^{n/4}}{(2\pi \hbar)^{3n/4}}\int_{\mathbb{R}^n}e^{-|x-y|^2/2\hbar+i\xi\cdot(x-y)/\hbar}u(y)\, \mathrm{d} y. \] Then the following statements hold. \begin{enumerate}
\renewcommand{(\alph{enumi})}{(\roman{enumi})}
\item $F$ is continuously extended to a linear isometry from $L^2(\mathbb{R}^n)$ to $L^2(\mathbb{R}^{2n})$.
\item For $b\in S^0(T^*\mathbb{R}^n)$, we define
\[
p_b(x, \xi):=\left(\frac{1}{\pi\hbar}\right)^n\int_{\mathbb{R}^{2n}} e^{-|x-y|^2/\hbar - |\xi-\eta|^2/\hbar}b(y, \eta)\, \mathrm{d} y\mathrm{d} \eta.
\]
Then $p_b\in S^0(T^*\mathbb{R}^n)$ and
\begin{equation}\label{eq_weyl_antiwick}
F^* M_b F =p_b^\mathrm{w}(x, \hbar D).
\end{equation}
Here $M_b: u\mapsto b u$ is the multiplication operator by $b$. \end{enumerate} \end{prop}
\begin{rema*}
$F^*M_bF$ is the so-called anti-Wick quantization of the symbol $b$. \end{rema*}
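\begin{rema*}
The point of the anti-Wick quantization here is its positivity: if $b\geq 0$, then
\[
\jbracket{F^*M_bFu, u}_{L^2(\mathbb{R}^n)}=\jbracket{M_bFu, Fu}_{L^2(\mathbb{R}^{2n})}=\int_{\mathbb{R}^{2n}}b\, |Fu|^2\, \mathrm{d} x\mathrm{d} \xi\geq 0
\]
for all $u\in L^2(\mathbb{R}^n)$. This positivity is the mechanism behind the proof of Theorem \ref{theo_sharp_garding_euclid} below.
\end{rema*}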
\begin{proof}
A direct calculation shows (i) and the relation \eqref{eq_weyl_antiwick} (see \cite{Martinez02} or Chapter 13 in \cite{Zworski12} for details). We have to prove $p_b\in S^0(T^*\mathbb{R}^n)$ if $b\in S^0(T^*\mathbb{R}^n)$. The facts $\partial_x e^{-|x-y|^2/\hbar}=-\partial_y e^{-|x-y|^2/\hbar}$, $\partial_\xi e^{-|\xi-\eta|^2/\hbar}=-\partial_\eta e^{-|\xi-\eta|^2/\hbar}$ and integration by parts show that
\[
\partial_x^\alpha\partial_\xi^\beta p_b(x, \xi)=p_{\partial_x^\alpha\partial_\xi^\beta b}(x, \xi).
\]
Thus, using Peetre's inequality $\jbracket{\eta}^{-|\beta|}\leq 2^{|\beta|/2}\jbracket{\xi-\eta}^{|\beta|}\jbracket{\xi}^{-|\beta|}$ in the second inequality below, we obtain the estimate
\begin{align*}
|\partial_x^\alpha\partial_\xi^\beta p_b(x, \xi)|
&\leq \frac{|b|_{0, \alpha, \beta}}{(\pi\hbar)^n} \int_{\mathbb{R}^{2n}} e^{-|x-y|^2/\hbar - |\xi-\eta|^2/\hbar}\jbracket{\eta}^{-|\beta|}\, \mathrm{d} y\mathrm{d} \eta \\
&\leq C|b|_{0, \alpha, \beta}\jbracket{\xi}^{-|\beta|},
\end{align*}
where
\[
|b|_{0, \alpha, \beta}:=\sup_{(x, \xi)\in T^*\mathbb{R}^n} \jbracket{\xi}^{|\beta|}|\partial_x^\alpha \partial_\xi^\beta b(x, \xi)|.
\]
This shows that $p_b\in S^0(T^*\mathbb{R}^n)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{theo_sharp_garding_euclid}]
We define a symbol $b(\hbar; x, \xi)\in S^0(T^*\mathbb{R}^n)$ as
\[
b(\hbar; x, \xi)=\hbar^{-1}(p_a(\hbar; x, \xi)-a(x, \xi)).
\]
Then by $\rmop{Re}a\geq 0$, we have
\begin{align*}
&\rmop{Re}\jbracket{a^\mathrm{w}(x, \hbar D)u, u}_{L^2(\mathbb{R}^n)} \\
&=\jbracket{M_{\rmop{Re}a} F u, F u}_{L^2(\mathbb{R}^{2n})}-\hbar \rmop{Re}\jbracket{b^\mathrm{w}(\hbar; x, \hbar D) u, u}_{L^2(\mathbb{R}^n)} \\
&\geq -\hbar \rmop{Re}\jbracket{b^\mathrm{w}(\hbar; x, \hbar D) u, u}_{L^2(\mathbb{R}^n)}. \end{align*}
By a calculation based on Taylor's theorem, we obtain
\begin{align*}
p_a(x, \xi)&=\sum_{j=0}^N\frac{1}{j!}\left(\frac{\hbar}{4}\right)^j \triangle_{x, \xi}^j a(x, \xi)+\hbar^{N+1}q_{N+1}(\hbar; x, \xi), \\
q_{N+1}(\hbar; x, \xi)&:=\sum_{|\alpha|+|\beta|=2N+2}\pi^{-n}\int_{\mathbb{R}^{2n}} \mathrm{d} y\mathrm{d} \eta \, e^{-|y|^2-|\eta|^2} y^\alpha \eta^\beta \\
&\quad\times \int_0^1 \mathrm{d} \tau \, \partial_x^\alpha \partial_\xi^\beta a(x+\hbar^{1/2}\tau y, \xi+\hbar^{1/2}\tau \eta)\end{align*}
Thus
\[
b(\hbar; x, \xi)=\sum_{j=0}^{N-1}\frac{1}{(j+1)!}\frac{\hbar^j}{4^{j+1}} \triangle_{x, \xi}^{j+1} a(x, \xi)+\hbar^N q_{N+1}(\hbar; x, \xi).
\]
This implies $\rmop{supp}b\subset \rmop{supp}a$ modulo $O(\hbar^\infty)$. \end{proof}
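\begin{rema*}
The coefficient $\hbar/4$ in the expansion of $p_a$ can be checked directly from the one-dimensional Gaussian moments
\[
\frac{1}{(\pi\hbar)^{1/2}}\int_{\mathbb{R}} e^{-s^2/\hbar}\, \mathrm{d} s=1, \quad
\frac{1}{(\pi\hbar)^{1/2}}\int_{\mathbb{R}} e^{-s^2/\hbar}s^2\, \mathrm{d} s=\frac{\hbar}{2}.
\]
Indeed, the odd-order terms in the Taylor expansion of $a(y, \eta)$ around $(x, \xi)$ integrate to zero, and the quadratic term contributes $\frac{1}{2}\cdot\frac{\hbar}{2}\triangle_{x, \xi}a(x, \xi)=\frac{\hbar}{4}\triangle_{x, \xi}a(x, \xi)$, in agreement with the $j=1$ term above.
\end{rema*}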
\begin{proof}[Proof of Theorem \ref{theo_sharp_garding}]
Let $u\in C_c^\infty (M; \Omega^{1/2})$. Since
\begin{equation}\label{eq_expectation_decomposition}
\rmop{Re}\jbracket{ \mathop{\mathrm{Op}}\nolimits_\hbar (a)u, u}_{L^2}=\sum_{\iota\in I}\rmop{Re}\jbracket{ (\tilde\varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)(\varphi_{\iota*}(\chi_\iota u)), \varphi_{\iota*}(\chi_\iota u)}_{L^2},
\end{equation}
it is enough to investigate $(\tilde\varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)$ for each $\iota\in I$.
By Theorem \ref{theo_sharp_garding_euclid}, there exists $b_\iota=b_\iota(\hbar)\in S^0(T^*\mathbb{R}^n)$ such that
\begin{align*}
&\rmop{Re}\jbracket{ (\tilde\varphi_{\iota*}(\kappa_\iota a))^\mathrm{w}(x, \hbar D)\varphi_{\iota*}(\chi_\iota u), \varphi_{\iota*}(\chi_\iota u)}_{L^2} \\
&\geq -\hbar \jbracket{b_\iota^\mathrm{w}(x, \hbar D)\varphi_{\iota*}(\chi_\iota u), \varphi_{\iota*}(\chi_\iota u)}_{L^2}
\end{align*}
and $\rmop{supp}b_\iota \subset \rmop{supp}(\tilde\varphi_{\iota*}(\kappa_\iota a))$ modulo $O(\hbar^\infty)$; moreover, the proof of Theorem \ref{theo_sharp_garding_euclid} yields the asymptotic expansion
\[
b_\iota (\hbar; x, \xi)\sim \sum_{j=0}^\infty\frac{1}{(j+1)!}\frac{\hbar^j}{4^{j+1}} \triangle_{x, \xi}^{j+1} (\tilde\varphi_{\iota*}(\kappa_\iota a))(x, \xi) \quad \text{in } S^0(T^*\mathbb{R}^n).
\]
Thus by \eqref{eq_expectation_decomposition}, we obtain
\begin{align*}
\rmop{Re}\jbracket{ \mathop{\mathrm{Op}}\nolimits_\hbar (a)u, u}_{L^2}
&\geq -\hbar \sum_{\iota\in I}\rmop{Re}\jbracket{ b_\iota^\mathrm{w}(x, \hbar D)(\varphi_{\iota*}(\chi_\iota u)), \varphi_{\iota*}(\chi_\iota u)}_{L^2} \\
&=-\hbar \rmop{Re}\jbracket{\sum_{\iota\in I} \Ophloc (b_\iota) u, u}_{L^2}.
\end{align*}
Since $\rmop{supp}b_\iota \subset \rmop{supp}(\tilde\varphi_{\iota*}(\kappa_\iota a))$ modulo $O(\hbar^\infty)$, we can apply Lemma \ref{lemm_psido_local_global} and obtain a symbol $b(\hbar; x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ which satisfies
\[
\sum_{\iota\in I} \Ophloc (b_\iota)=\mathop{\mathrm{Op}}\nolimits_\hbar (b)+O_{L^2\to L^2}(\hbar^\infty)
\]
and $\rmop{supp}b\subset \rmop{supp}a$ modulo $O(\hbar^\infty)$. \end{proof}
\subsection{Non-canonical quantization and (radially homogeneous) wavefront sets}\label{subs_quantization_wf_hwf}
In this section we prove Theorem \ref{theo_hwf_quantization} and Proposition \ref{prop_wf_quantization}. As a preparation, we prove a lemma on the relation between a quantization of locally defined symbols and the quantization procedure $\mathop{\mathrm{Op}}\nolimits_\hbar$.
\begin{lemm}\label{lemm_local_quantization}
Let $\varphi: U\to V$ be polar coordinates on $M$ and $\chi\in C^\infty (M)$ be a cylindrical function supported in $U$. Then, for a symbol $b\in S^m(T^*\mathbb{R}^n)$, there exists $a(\hbar; x, \xi)\in S^m_\mathrm{cyl}(T^*M)$ which satisfies
\[
\chi\varphi^* b^\mathrm{w}(x, \hbar D)(\varphi_* \chi)\varphi_*
=\mathop{\mathrm{Op}}\nolimits_\hbar (a)+O_{L^2\to L^2}(\hbar^\infty)
\]
and has an asymptotic expansion
\[
a(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j a_j(x, \xi), \quad a_j\in S^{m-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}a_j \subset \rmop{supp}(\chi^2\tilde\varphi^* b)$ and $a_0(x, \xi)=\chi(x)^2\tilde\varphi^*b(x, \xi)$. \end{lemm}
\begin{proof}
We calculate the composition
\begin{align*}
\chi\varphi^* b^\mathrm{w}(x, \hbar D)(\varphi_* \chi)\varphi_*
= \varphi^* b_\chi^\mathrm{w}(x, \hbar D)\varphi_*.
\end{align*}
Here $b_\chi (\hbar; x, \xi)\in S^m(T^*\mathbb{R}^n)$ has an asymptotic expansion
\[
b_\chi (\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{\chi, j}(x, \xi), \quad b_{\chi, j}\in S^{m-j}(T^*\mathbb{R}^n)
\]
with $\rmop{supp} b_{\chi, j}\subset \rmop{supp}(b(\varphi_*\chi))$ and $b_{\chi, 0}(x, \xi)=\chi(x)^2b(x, \xi)$.
We decompose $\varphi^* b_\chi^\mathrm{w}\varphi_*$ into
\begin{equation}\label{eq_psido_local_decomposition}
\varphi^* b_\chi^\mathrm{w}(\hbar; x, \hbar D)\varphi_*=\sum_{\iota: U_\iota \cap U\neq \varnothing} \varphi^* ((\tilde\varphi_*\kappa_\iota)b_\chi)^\mathrm{w}(\hbar; x, \hbar D)\varphi_*.
\end{equation}
By the changing variables of pseudodifferential operators and $b_{\chi, 0}=\chi^2 \tilde\varphi^* b$, we have
\begin{equation}
\label{eq_psido_local_changing}
((\tilde\varphi_*\kappa_\iota)b_\chi)^\mathrm{w}(\hbar; x, \hbar D)
=\varphi_*\varphi_\iota^* (\tilde \varphi_{\iota*}(\kappa_\iota\chi^2 \tilde\varphi^*b)+\hbar c_\iota )^\mathrm{w}(\hbar; x, \hbar D) \varphi_{\iota*}\varphi^*,
\end{equation}
where $c_\iota (\hbar; x, \xi)\in S^{m-1}(T^*\mathbb{R}^n)$ and has an asymptotic expansion
\[
c_\iota (\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j c_{j, \iota}(x, \xi), \quad c_{j, \iota}\in S^{m-1-j}(T^*\mathbb{R}^n)
\]
with $\rmop{supp}c_{j, \iota} \subset \rmop{supp}\tilde\varphi_{\iota*} (\kappa_\iota \chi^2\tilde\varphi^*b)$.
Substituting \eqref{eq_psido_local_changing} to \eqref{eq_psido_local_decomposition}, we obtain
\begin{equation}\label{eq_psido_local_iota}
\begin{split}
&\varphi^* b_\chi^\mathrm{w}(\hbar; x, \hbar D)\varphi_* \\
&=\sum_{\iota: U_\iota \cap U\neq \varnothing} \varphi_\iota^* (\tilde \varphi_{\iota*}(\kappa_\iota \varphi^*b_\chi)+\hbar c_\iota )^\mathrm{w}(\hbar; x, \hbar D) \varphi_{\iota*}\\
&=\sum_{\iota\in I} \Ophloc (\tilde \varphi_{\iota*}(\kappa_\iota \varphi^*b_\chi)+\hbar c_\iota )+O_{L^2\to L^2}(\hbar^\infty). \\
\end{split} \end{equation}
Since the support of $\tilde \varphi_{\iota*}(\kappa_\iota \varphi^*b_\chi)+\hbar c_\iota$ is included in $\rmop{supp}\varphi_{\iota*}\kappa_\iota$, we can apply Lemma \ref{lemm_psido_local_global} for \eqref{eq_psido_local_iota} and obtain a symbol $a(\hbar; x, \xi)\in S^m_\mathrm{cyl}(T^*M)$ which satisfies
\[
\sum_{\iota\in I} \Ophloc (\tilde \varphi_{\iota*}(\kappa_\iota \varphi^*b_\chi)+\hbar c_\iota )
=\mathop{\mathrm{Op}}\nolimits_\hbar (a)+O_{L^2\to L^2}(\hbar^\infty)
\]
and has an asymptotic expansion
\[
a(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j a_j(x, \xi), \quad a_j\in S^{m-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}a_j \subset \rmop{supp}(\chi^2\tilde\varphi^* b)$ and $a_0(x, \xi)=\chi(x)^2\tilde\varphi^*b(x, \xi)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{theo_hwf_quantization}]
Take a cylindrical function $\chi^\prime\in C^\infty (M)$ such that $\rmop{supp}\chi^\prime \subset U$ and $\chi^\prime=1$ near $\rmop{supp}\chi$. By Lemma \ref{lemm_local_quantization}, there exists a symbol $c(\hbar; x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ which satisfies \begin{equation}\label{eq_hwf_time_difference}
\chi\varphi^*(\varphi_*\chi^\prime-a)^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)\varphi_*(\chi u)=\mathop{\mathrm{Op}}\nolimits_\hbar (c)u \end{equation} and has an asymptotic expansion \[
c(\hbar; x, \xi)\sim \sum_{j=0}^\infty \hbar^j c_j(\hbar; x, \xi), \quad c_j(\hbar; x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M) \] with $\rmop{supp}c_j(\hbar) \subset \rmop{supp}\chi^2(1-\tilde\varphi^*a (\hbar r, \theta, \rho, \eta))$. We compose $A_\hbar (t_0)$ with the left-hand side of \eqref{eq_hwf_time_difference} and obtain \begin{equation}\label{eq_hwf_time_difference_2}
A_\hbar (t_0)(\chi^2 u)-A_\hbar (t_0)\chi\varphi^*a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)\varphi_*(\chi u)=A_\hbar (t_0)\mathop{\mathrm{Op}}\nolimits_\hbar (c)u \end{equation}
Taking $\delta_0, \delta_1, \ldots$ sufficiently small, we can assume that
\[
\rmop{supp} a(\hbar^{-1}t_0)\cap \rmop{supp}(\chi^\prime-\tilde\varphi^*a(\hbar r, \theta, \rho, \eta))=\varnothing
\]
and
\[
\rmop{supp}a(\hbar^{-1}t_0)\cap \rmop{supp}(1-\chi^2)=\varnothing.
\]
Then, by \eqref{eq_hwf_time_difference_2}, we obtain
\begin{align*}
&A_\hbar (t_0)-A_\hbar (t_0)\chi\varphi^* a^\mathrm{w}(x, \hbar D)(\varphi_* \chi)\varphi_* \\
&=A_\hbar (t_0)\mathop{\mathrm{Op}}\nolimits_\hbar (c)+A_\hbar (t_0)(1-\chi^2)=O_{L^2\to L^2}(\hbar^\infty). \qedhere
\end{align*} \end{proof}
\begin{proof}[Proof of Proposition \ref{prop_wf_quantization}]
Let $a\in C_c^\infty (T^*M)$ be a symbol such that $a=1$ near $(x_0, \xi_0)$ and $\mathop{\mathrm{Op}}\nolimits_\hbar (a)u=O_{L^2}(\hbar^\infty)$. Take a coordinate function $\varphi: U\to V$ near $x_0$. We take a cutoff function $\chi\in C_c^\infty (U)$ and a symbol $b\in C_c^\infty (T^*\mathbb{R}^n)$ such that $\chi=1$ near $x_0$, $b=1$ near $\tilde\varphi (x_0, \xi_0)$ and $a=1$ near $\rmop{supp}\tilde\varphi^*b$. By Lemma \ref{lemm_psido_local_global}, there exists a symbol $c(\hbar; x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ such that
\[
\chi \varphi^* b^\mathrm{w}(x, \hbar D)(\varphi_*\chi)\varphi_*=\mathop{\mathrm{Op}}\nolimits_\hbar (c)+O_{L^2\to L^2}(\hbar^\infty)
\]
and $\rmop{supp}c\subset \rmop{supp}\chi^2\tilde\varphi^*b$ modulo $O(\hbar^\infty)$. Since $\rmop{supp} \chi^2\tilde\varphi^*b \cap \rmop{supp}(1-a)=\varnothing$, Theorem \ref{theo_psido_composition} shows that $\mathop{\mathrm{Op}}\nolimits_\hbar (c)\mathop{\mathrm{Op}}\nolimits_\hbar (1-a)=O_{L^2\to L^2}(\hbar^\infty)$. Thus
\begin{align*}
\chi \varphi^* b^\mathrm{w}(x, \hbar D)\varphi_*(\chi u)
&=\mathop{\mathrm{Op}}\nolimits_\hbar (c)u+O_{L^2}(\hbar^\infty) \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar (c)\underbrace{\mathop{\mathrm{Op}}\nolimits_\hbar (a)u}_{=O_{L^2}(\hbar^\infty)}+\underbrace{\mathop{\mathrm{Op}}\nolimits_\hbar (c)\mathop{\mathrm{Op}}\nolimits_\hbar (1-a)}_{=O_{L^2\to L^2}(\hbar^\infty)}u+O_{L^2}(\hbar^\infty) \\
&=O_{L^2}(\hbar^\infty).
\end{align*}
This shows that $(x_0, \xi_0)\not\in \rmop{WF}(u)$. \end{proof}
\subsection{Radially homogeneous wavefront sets and homogeneous wavefront sets}\label{subs_cylindrical_homogeneous}
In this section, we prove Proposition \ref{prop_hwf_polar} and Corollary \ref{coro_cylindrical_homogeneous}.
\begin{proof}[Proof of Proposition \ref{prop_hwf_polar}]
\fstep{(i) $\bm{\Rightarrow}$ (ii)}Assume that $(x_0, \xi_0)\not\in \rmop{HWF}(u)$. By the definition of the homogeneous wavefront set, there exists a symbol $a\in C_c^\infty (T^*\mathbb{R}^n)$ such that $a=1$ near $(x_0, \xi_0)$ and $\| a^\mathrm{w}(\hbar x, \hbar D)u\|_{L^2}=O(\hbar^\infty)$. We can assume that $\rmop{supp}a \subset \Gamma \times \mathbb{R}^n$ for a small conic neighborhood $\Gamma$ of $x_0$. Let $\varphi: \Gamma \to \mathbb{R}_+\times V^\prime$ be polar coordinates. Take a cylindrical function $\chi\in C^\infty (\mathbb{R}^n)$ such that $\rmop{supp}\chi \subset \Gamma$ and $\chi=1$ near $\rmop{supp}a$. Then, by the change of variables formula for pseudodifferential operators (see Section \ref{subs_cylindrical_class}), we have
\begin{equation}\label{eq_hwf_polar_change}
(\chi-a)^\mathrm{w}(\hbar x, \hbar D)=\varphi^* b^\mathrm{w}(\hbar; r, \theta, \hbar D_r, \hbar D_\theta)\varphi_*,
\end{equation}
where $b(\hbar; r, \theta, \rho, \eta)\in C_c^\infty (T^*\mathbb{R}^n)$ satisfies
\begin{align}
\rmop{supp}b(\hbar)
&\subset \{ \tilde\varphi (x, \xi)\in T^*\mathbb{R}^n \mid (\hbar x, \xi)\in \rmop{supp}(\chi-a)\} \nonumber\\
&=\{ (r, \theta, \rho, \eta)\in T^*(\mathbb{R}_+\times V^\prime) \mid (\hbar r, \theta, \rho, \hbar \eta)\in \rmop{supp}\tilde\varphi_*(\chi-a)\} \label{eq_hwf_polar_wip}
\end{align}
modulo $O(\hbar^\infty)$. Here we employed the explicit form of $\tilde\varphi^{-1}$:
\begin{equation}\label{eq_lift_polar_explicit}
\tilde\varphi^{-1}(r, \theta, \rho, \eta)=\left(r\omega (\theta), \rho \omega (\theta)+\frac{1}{r}\sum_{j, k=1}^{n-1}h^{jk}(\theta) \eta_j \frac{\partial \omega}{\partial \theta_k}(\theta)\right),
\end{equation}
where $\varphi^{-1}(r, \theta)=r\omega(\theta)$, $\omega: V^\prime \to S^{n-1}$ is an embedding into the $(n-1)$-dimensional sphere $S^{n-1}$ and $(h^{jk}(\theta))_{j, k=1}^{n-1}$ is the inverse matrix of the positive definite symmetric matrix $(\partial_{\theta_j}\omega (\theta)\cdot \partial_{\theta_k}\omega(\theta))_{j, k=1}^{n-1}$ (equal to the metric tensor on the sphere).
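For instance, in dimension $n=2$ with $\omega (\theta)=(\cos\theta, \sin\theta)$, we have $h^{11}(\theta)=1$ and \eqref{eq_lift_polar_explicit} reduces to the familiar formula
\[
\tilde\varphi^{-1}(r, \theta, \rho, \eta)=\left(r\cos\theta, r\sin\theta, \rho\cos\theta-\frac{\eta}{r}\sin\theta, \rho\sin\theta+\frac{\eta}{r}\cos\theta\right).
\]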
We set $\tilde\varphi (x_0, \xi_0)=(r_0, \theta_0, \rho_0, \eta_0)$. Since $\chi-a=0$ near $(x_0, (\xi_0\cdot\hat x_0)\hat x_0)$, we can take a symbol $c(r, \theta, \rho, \eta)\in C_c^\infty (T^*(\mathbb{R}_+\times V^\prime))$ such that $\tilde\varphi_*(\chi-a)=0$ near the set
\[
\{ (r, \theta, \rho, \eta) \mid (r, \theta, \rho, \eta+\eta_0)\in \rmop{supp}c \}.
\]
Then
\[
\rmop{supp} c(\hbar r, \theta, \rho, \hbar \eta)\cap \rmop{supp}\tilde\varphi_*(\chi-a)(\hbar r, \theta, \rho, \hbar \eta)=\varnothing.
\]
Thus \eqref{eq_hwf_polar_change} implies
\begin{equation}\label{eq_hwf_polar_change_2}
c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)b^\mathrm{w}(\hbar; r, \theta, \hbar D_r, \hbar D_\theta)=O_{L^2\to L^2}(\hbar^\infty). \end{equation} Since \[
\rmop{supp}c (\hbar r, \theta, \rho, \hbar\eta)\cap \rmop{supp}((\varphi_*\chi)(\hbar r, \theta, \rho, \eta)-\varphi_*\chi)=\varnothing, \] \eqref{eq_hwf_polar_change_2} becomes \[
c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)(\varphi_*\chi)\varphi_*-\varphi_*a^\mathrm{w}(\hbar x, \hbar D)=O_{L^2\to L^2}(\hbar^\infty). \] Hence, since $a^\mathrm{w}(\hbar x, \hbar D)u=O_{L^2}(\hbar^\infty)$, we have \[
c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)(\varphi_*(\chi u))=O_{L^2}(\hbar^\infty). \]
\fstep{(ii) $\bm{\Rightarrow}$ (i)}We take polar coordinates $\varphi: U\to V$, a cylindrical function $\chi\in C^\infty (\mathbb{R}^n)$ and a symbol $a\in C_c^\infty (T^*V)$ as in statement (ii). We may assume that $\rmop{supp}\chi\subset U$ and that $\chi=1$ near $\rmop{supp} \tilde\varphi^*a$. By the change of variables formula for pseudodifferential operators, we have \begin{equation}\label{eq_hwf_polar_change_inv}
(\varphi_*\chi-a)^\mathrm{w}(\hbar r ,\theta, \hbar D_r, \hbar^2 D_\theta)=\varphi_* b^\mathrm{w}(\hbar; x, \hbar D)\varphi^*+O_{L^2\to L^2}(\hbar ^\infty), \end{equation} where $b(\hbar; x, \xi)\in S^0(T^*\mathbb{R}^n)$ satisfies \begin{equation}\label{eq_hwf_polar_change_inv_wip}
\begin{split}
\rmop{supp}b(\hbar)
&\subset \{ \tilde\varphi^{-1}(r, \theta, \rho, \eta) \mid (\hbar r, \theta, \rho, \hbar \eta)\in \rmop{supp}(\varphi_*\chi-a)\} \\
&=\{ (x, \xi) \mid (\hbar x, \xi)\in \rmop{supp}(\chi-\tilde\varphi^* a)\}
\end{split} \end{equation} modulo $O(\hbar^\infty)$ by \eqref{eq_lift_polar_explicit}. Thus we can take a symbol $c(x, \xi)\in C_c^\infty (T^*\mathbb{R}^n)$ such that $\chi-\tilde\varphi^*a=0$ near $\rmop{supp}c$. Then \[
\rmop{supp}c(\hbar x, \xi)\cap \rmop{supp}b(\hbar; x, \xi)=\varnothing. \] Thus \eqref{eq_hwf_polar_change_inv_wip} implies \begin{equation}\label{eq_hwf_polar_change_inv_2}
c^\mathrm{w}(\hbar x, \hbar D)b^\mathrm{w}(\hbar; x, \hbar D)=O_{L^2\to L^2}(\hbar^\infty). \end{equation} Since \[
\rmop{supp}c (\hbar x, \xi)\cap \rmop{supp}(\chi(\hbar x))=\varnothing, \] \eqref{eq_hwf_polar_change_inv_2} becomes \[
c^\mathrm{w}(\hbar x, \hbar D)\varphi^*-\varphi^*a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)=O_{L^2\to L^2}(\hbar^\infty). \] Hence, since $a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)\varphi_*(\chi u)=O_{L^2}(\hbar^\infty)$, we have \[
c^\mathrm{w}(\hbar x, \hbar D)u=O_{L^2}(\hbar^\infty). \qedhere \] \end{proof}
\begin{proof}[Proof of Corollary \ref{coro_cylindrical_homogeneous}]
Assume that $x_0\neq 0$ and $(x_0, (\xi_0\cdot \hat x_0)\hat x_0)\not\in \rmop{HWF}(u)$, where $\hat x_0:=x_0/|x_0|$. By Proposition \ref{prop_hwf_polar}, there exist polar coordinates $\varphi: U\to V$, a cylindrical function $\chi\in C^\infty (\mathbb{R}^n)$ with $\rmop{supp}\chi \subset U$ and $\chi=1$ near the set $\{ \lambda x_0 \mid \lambda\geq 1\}$, and $a\in C_c^\infty (T^*V)$ with $a=1$ near $\Psi (x_0, \xi_0)$ such that \begin{equation}\label{eq_hwf_polar_appl}
\|a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)\varphi_*(\chi u)\|_{L^2(\mathbb{R}^n; \Omega^{1/2})}=O(\hbar^\infty) \end{equation} holds.
Since $\tilde\varphi(x_0, (\xi_0\cdot\hat x_0)\hat x_0)=(|x_0|, \hat x_0, \xi_0\cdot \hat x_0, 0)$, the symbol $a$ is identically equal to $1$ near $(|x_0|, \hat x_0, \xi_0\cdot \hat x_0, 0)$. Thus we can take a symbol $c(r, \theta, \rho, \eta)\in C_c^\infty (T^*(\mathbb{R}_+\times V^\prime))$ such that $a=1$ near the set \[
\{ (r, \theta, \rho, \eta) \mid (r, \theta, \rho, \eta+\eta_0)\in \rmop{supp}c \}. \] Then \[
c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)a^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar^2 D_\theta)
=c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)+O_{L^2\to L^2}(\hbar^\infty) \] for sufficiently small $\hbar>0$. Thus \eqref{eq_hwf_polar_appl} implies \[
c^\mathrm{w}(\hbar r, \theta, \hbar D_r, \hbar D_\theta)\varphi_*(\chi u)=O_{L^2}(\hbar^\infty). \qedhere \] \end{proof}
\section{Estimates for Heisenberg derivatives}\label{sect_heisenberg_derivative}
\subsection{Estimates for symbols}
We begin with the estimate of $\psi_j=\varphi^*\tilde\psi_j$. Recall the definition \eqref{eq_defi_chi4} and \eqref{eq_defi_psi-1}.
\begin{lemm}\label{lemm_estimate_tildepsi}
For all multiindices $\alpha=(\alpha_0, \alpha^\prime), \beta=(\beta_0, \beta^\prime)\in \mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0}^{n-1}$, the estimates
\[ \| \partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime} \tilde\psi_j(t)\|_{L^\infty} \leq C_{j\alpha\beta} t^{-\alpha_0}, \quad
\| \partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime}\partial_t \tilde\psi_j(t)\|_{L^\infty} \leq C_{j\alpha\beta} t^{-\alpha_0}\]
hold. \end{lemm}
\begin{rema*}
We will only use the boundedness of derivatives of $\tilde\psi_j$, and the decay $t^{-\alpha_0}$ is not necessary for a proof of our main theorem. However, since $r\sim t$ on the support of $\tilde\psi_j$, Lemma \ref{lemm_estimate_tildepsi} states that $\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime} \tilde\psi_j(t)=O(r^{-\alpha_0})$ and $\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime}\partial_t \tilde\psi_j(t)=O(r^{-\alpha_0})$. \end{rema*}
\begin{proof}
By the Leibniz rule, it is enough to estimate each of $\chi_{1j}, \chi_{2j}, \chi_{3j}, \chi_{4j}$ and their derivatives separately.
\fstep{Estimate of $\bm{\chi_{1j}}$}A direct calculation shows that
\[
|\partial_r^{\alpha_0} \chi_{1j}(t, r)|=|\chi^{(\alpha_0)}|(4\delta_j t)^{-\alpha_0}\leq \| \chi^{(\alpha_0)}\|_{L^\infty} (4\delta_j t)^{-\alpha_0}.
\]
The time derivative of $\chi_{1j}$ is
\[ \partial_t \chi_{1j}=\frac{1}{4\delta_j t} \chi^\prime \left(\frac{|r-r(t)|}{4\delta_j t}\right)\left(-\frac{\mathrm{d} r}{\mathrm{d} t}(t)\rmop{sgn} (r-r(t))-\frac{|r-r(t)|}{t}\right). \]
The $r$ derivatives of the first term are estimated as
\begin{equation}\label{eq_r_derivative_tpsi} \left| \partial_r^{\alpha_0} \left(\chi^\prime \left(\frac{|r-r(t)|}{4\delta_j t}\right)\frac{\mathrm{d} r}{\mathrm{d} t}(t)\rmop{sgn} (r-r(t))\right)\right|\leq C\| \chi^{(\alpha_0+1)}\|_{L^\infty} (4\delta_j t)^{-\alpha_0}
\end{equation}
by the Hamilton equation \eqref{eq_hamilton_equation_radial} and the boundedness of $|\rho(t)|$ ensured by Theorem \ref{theo_classical_estimate}. The second term in the expression for $\partial_t \chi_{1j}$ is written as
\[
\chi^\prime \left(\frac{|r-r(t)|}{4\delta_j t}\right)\frac{|r-r(t)|}{t}=4\delta_j \tilde \chi \left(\frac{r-r(t)}{4\delta_j t}\right),
\]
where $\tilde \chi (x):=|x|\chi^\prime (|x|)\in C_c^\infty(\mathbb{R})$.
Thus a similar estimate to \eqref{eq_r_derivative_tpsi} shows that
\[
\left|\partial_r^{\alpha_0} \left(\chi^\prime \left(\frac{|r-r(t)|}{4\delta_j t}\right)\frac{|r-r(t)|}{t}\right)\right|\leq \| \tilde \chi \|_{L^\infty} (4\delta_j)^{-\alpha_0+1} t^{-\alpha_0}.
\]
Hence if $0<\delta_j\leq 1/4$, then $|\partial_r^{\alpha_0}\partial_t \chi_{1j}|\leq C_{j\alpha_0} t^{-\alpha_0-1}$.
\fstep{Estimate of $\bm{\partial^\alpha\chi_{2j}}$}A similar estimate to $\chi_{1j}$ shows
\[ |\partial_\theta^{\alpha^\prime} \chi_{2j}(t, \theta)|\leq \| \partial_\theta^{\alpha^\prime} \chi\|_{L^\infty} (\delta_j-t^{-\lambda})^{-|\alpha^\prime|}. \]
The time derivative of $\chi_{2j}$ is
\begin{equation}\label{eq_t_derivative_chi2}
\begin{split}
&\partial_t \chi_{2j}(t, \theta) \\
&=\frac{1}{\delta_j-t^{-\lambda}} \chi^\prime \left(\frac{|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right)\left(-\frac{\mathrm{d} \theta}{\mathrm{d} t}(t)\cdot\frac{\theta-\theta(t)}{|\theta-\theta(t)|}-\frac{\lambda t^{-\lambda-1}|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right).
\end{split}
\end{equation}
We set $F_k(x):=x_k\chi^\prime (|x|)/|x|\in C_c^\infty(\mathbb{R}^{n-1})$. Then we have
\[\chi^\prime \left(\frac{|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right)\frac{\mathrm{d} \theta}{\mathrm{d} t}(t)\cdot\frac{\theta-\theta(t)}{|\theta-\theta(t)|}=\sum_{k, l=1}^{n-1}h^{kl}(r(t), \theta (t))F_k\left(\frac{\theta-\theta(t)}{\delta_j-t^{-\lambda}}\right)\eta_l(t)
\]
by the Hamilton equation \eqref{eq_hamilton_equation_angle}. We apply the boundedness of $|\eta(t)|$ with respect to the fiber metric $h^*(1, \theta, \partial_\theta)$ by Theorem \ref{theo_classical_estimate} and $h^*(r, \theta, \eta)\leq Ch^*(1, \theta, \eta)$ by \eqref{eq_ineq_f_logbdd} and \eqref{eq_ineq_model_bdd}. Then we obtain
\[
\left|\partial_\theta^{\alpha^\prime}\left(\sum_{k, l=1}^{n-1}h^{kl}(r(t), \theta (t))F_k\left(\frac{\theta-\theta(t)}{\delta_j-t^{-\lambda}}\right)\eta_l(t)\right)\right|
\leq C_{j\alpha^\prime}.
\]
For the second term of \eqref{eq_t_derivative_chi2}, if we set $\tilde \chi (x)=|x|\chi^\prime (|x|)\in C_c^\infty(\mathbb{R}^{n-1})$, then
\begin{align*}
\left|\partial_\theta^{\alpha^\prime} \left(\chi^\prime \left(\frac{|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right)\frac{|\theta-\theta(t)|}{\delta_j-t^{-\lambda}}\right)\right|
=\left|\partial_\theta^{\alpha^\prime} \left( \tilde\chi \left(\frac{\theta-\theta(t)}{\delta_j-t^{-\lambda}}\right)\right)\right|
\leq C_{j\alpha^\prime}.
\end{align*}
Hence the $\theta$ derivative of \eqref{eq_t_derivative_chi2} is estimated as $|\partial_\theta^{\alpha^\prime}\partial_t \chi_{2j}|\leq C_{j\alpha^\prime}$.
\fstep{Estimate of $\bm{\partial^\alpha\chi_{3j}}$}By the same procedure as the estimate of $\chi_{2j}$, we have $|\partial_\rho^{\beta_0} \partial_t^a \chi_{3j}|\leq C_{j\beta_0}$ ($\beta_0\geq 0$, $a=0, 1$).
\fstep{Estimate of $\bm{\partial^\alpha\chi_{4j}}$}We have $|\partial_\eta^{\beta^\prime}\chi_{4j}|\leq C_{j\beta^\prime}$ by the same procedure as the estimate of $\partial_\theta^{\alpha^\prime}\chi_{2j}$. The $t$ derivative is
\begin{equation}\label{eq_t_derivative_chi4}
\partial_t \chi_{4j}=\frac{1}{\delta_j-t^{-\lambda}} \chi^\prime \left(\frac{|\eta-\eta(t)|}{\delta_j-t^{-\lambda}}\right)\left(-\frac{\mathrm{d} \eta}{\mathrm{d} t}(t)\cdot\frac{\eta-\eta(t)}{|\eta-\eta(t)|}-\lambda t^{-\lambda-1}|\eta-\eta(t)|\right).
\end{equation}
The $\eta$ derivative of the first term is estimated as
\begin{align*}
&\left|\partial_\eta^{\beta^\prime}\left(\chi^\prime \left(\frac{|\eta-\eta(t)|}{\delta_j-t^{-\lambda}}\right)\frac{\mathrm{d} \eta}{\mathrm{d} t}(t)\cdot\frac{\eta-\eta(t)}{|\eta-\eta(t)|}\right)\right| \\
&=\left|\frac{\mathrm{d} \eta}{\mathrm{d} t} (t)\cdot\partial_\eta^{\beta^\prime}\left(F\left(\frac{\eta-\eta(t)}{\delta_j-t^{-\lambda}}\right)\right)\right|\leq C\left|\frac{\mathrm{d} \eta}{\mathrm{d} t} (t)\right|. \end{align*}
By the angular momentum component of Hamilton equations
\begin{equation}\label{eq_hamilton_equation_angular_momentum}
\frac{\mathrm{d} \eta_j}{\mathrm{d} t} (t)=-\frac{1}{2}\frac{\partial h^{kl}}{\partial \theta_j}(r(t), \theta(t))\eta_k(t)\eta_l(t)
\end{equation}
and $|\eta(t)|\leq C$ by Theorem \ref{theo_classical_estimate}, we obtain
\[ \left|\partial_\eta^{\beta^\prime}\left(\chi^\prime \left(\frac{|\eta-\eta(t)|}{\delta_j -t^{-\lambda} }\right)\frac{\mathrm{d} \eta}{\mathrm{d} t}(t)\cdot\frac{\eta-\eta(t)}{|\eta-\eta(t)|}\right)\right|\leq C_{j\beta^\prime}. \]
The $\eta$ derivatives of the second term in \eqref{eq_t_derivative_chi4} are estimated as
\begin{align*}
&\left|\partial_\eta^{\beta^\prime}\left(\chi^\prime \left(\frac{|\eta-\eta(t)|}{\delta_j-t^{-\lambda}}\right)|\eta-\eta(t)|\right)\right| \\
&=(\delta_j-t^{-\lambda}) \left|\partial_\eta^{\beta^\prime} \left(\tilde\chi \left(\frac{\eta-\eta(t)}{\delta_j-t^{-\lambda}}\right)\right)\right|
\leq C_{j\beta^\prime}.
\end{align*}
Hence the derivatives of \eqref{eq_t_derivative_chi4} are estimated as $|\partial_\eta^{\beta^\prime}\partial_t \chi_{4j}|\leq C_{j\beta^\prime}$. \end{proof}
Next we prove the positivity and an $O(\jbracket{t}^{-1})$ decay as $t\to\infty$ of the Lagrange derivative $\partial_t \tilde\psi_j(t)+\{ \tilde\psi_j(t), h_0\}$. Both of them play a crucial role in estimates of Heisenberg derivatives in the proof of Theorem \ref{theo_positive_hd}.
\begin{lemm}\label{lemm_positive_lagrange_derivative}
Take sufficiently small $0<\delta_0<\delta_1<\cdots <2\delta_0$ and $0<\lambda<\min\{2c_0-1, \mu\}$ in \eqref{eq_defi_chi4}. Then the inequalities
\[
\partial_t \tilde\psi_j(t)+\{ \tilde\psi_j(t), h_0\}\geq 0
\]
and
\[
|\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime}(\partial_t \tilde\psi_j(t)+\{ \tilde\psi_j(t), h_0\})|\leq C_{j\alpha\beta}\jbracket{t}^{-1}
\]
hold for sufficiently large $t>0$. \end{lemm}
\begin{proof}
As in the proof of Lemma \ref{lemm_estimate_tildepsi}, we estimate each term $\partial_t\chi_{kj}+\{ \chi_{kj}, h_0\}$ ($k=1, 2, 3, 4$) separately. We also borrow the following functions from the proof of Lemma \ref{lemm_estimate_tildepsi}:
\[
\tilde \chi (x):=|x|\chi^\prime (|x|)\in C_c^\infty(\mathbb{R}), \quad F_k(x):=x_k\chi^\prime (|x|)/|x|\in C_c^\infty(\mathbb{R}^{n-1}).
\]
\fstep{$\bm{\partial_t \chi_{1j}+\{ \chi_{1j}, h_0\}}$}We have
\begin{align}
&\partial_t \chi_{1j} +\{ \chi_{1j}, h_0\} \nonumber\\
&=\frac{|\chi^\prime|}{2\delta_j t} \left(\frac{|r-r(t)|}{t}+(c(r(t), \theta(t))^{-2}\rho(t)-c(r, \theta)^{-2}\rho)\rmop{sgn}(r-r(t))\right) \label{eq_lagrange_derivative_11}\\
&=-\frac{1}{t} \tilde\chi \left(\frac{r-r(t)}{2\delta_j t}\right)-\frac{c(r(t), \theta(t))^{-2}\rho(t)-c(r, \theta)^{-2}\rho}{2\delta_j t} F\left(\frac{r-r(t)}{2\delta_j t}\right). \label{eq_lagrange_derivative_12}
\end{align}
Here $F(x):=\chi^\prime (|x|)\rmop{sgn}x\in C_c^\infty (\mathbb{R})$. We recall the short range condition $c(r, \theta)=1+O(r^{-1-\mu})$ (Assumption \ref{assu_classical} \ref{assu_sub_short_range}). Since $|r-r(t)|\geq 4\delta_j t$ and $|\rho-\rho(t)|\leq 2(\delta_j-t^{-\lambda})$ on the support of $(\partial\chi_{1j})\chi_{2j} \chi_{3j} \chi_{4j}$, we have
\begin{align*}
&|c(r(t), \theta(t))^{-2}\rho(t)-c(r, \theta)^{-2}\rho| \\
&\leq c(r(t), \theta(t))^{-2}|\rho (t)-\rho|+|\rho||c(r(t), \theta(t))^{-2}-c(r, \theta)^{-2}| \\
&\leq 2(1+Cr(t)^{-1-\mu})(\delta_j-t^{-\lambda})+C(1+\delta_j-t^{-\lambda})(r^{-1-\mu}+r(t)^{-1-\mu}) \\
&\leq 2\delta_j+Ct^{-\lambda},
\end{align*}
and thus, by \eqref{eq_lagrange_derivative_11},
\begin{align*}
\partial_t \chi_{1j} +\{ \chi_{1j}, h_0\}
&\geq \frac{|\chi^\prime|}{2\delta_j t} \left(\frac{|r-r(t)|}{t}-|c(r(t), \theta(t))^{-2}\rho(t)-c(r, \theta)^{-2}\rho|\right) \\
&\geq \frac{|\chi^\prime|}{2\delta_j t}(4\delta_j-2\delta_j-Ct^{-\lambda})\geq 0
\end{align*}
for sufficiently large $t>0$.
The $r$ and $\rho$ derivatives are estimated as
\begin{align*}
&|\partial_r^{\alpha_0} \partial_\rho^{\beta_0} (\partial_t \chi_{1j}(t, r) +\{ \chi_{1j}, h_0\})(t, r, \rho)| \\
&\leq
\begin{cases}
C(1+|\delta_j-t^{-\lambda}|)\delta_j^{-\alpha_0} t^{-\alpha_0-1} & \text{if } \beta_0=0, \\
C(\delta_j t)^{-\alpha_0-1} & \text{if } \beta_0=1, \\
0 & \text{if } \beta_0=2
\end{cases} \\
&\leq C_{j\alpha_0\beta_0}t^{-\alpha_0-1} \end{align*} by \eqref{eq_lagrange_derivative_12}.
\fstep{$\bm{\partial_t \chi_{2j}+\{ \chi_{2j}, h_0\}}$}By the Hamilton equation \eqref{eq_hamilton_equation_angle}, we have
\begin{align}
&\partial_t \chi_{2j}+\{\chi_{2j}, h_0\} \nonumber\\
&\begin{aligned}
&=\frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \biggl(\frac{\lambda t^{-\lambda-1}|\theta-\theta (t)|}{\delta_j-t^{-\lambda}} \\
&\quad+\sum_{k, l=1}^{n-1}(h^{kl}(r(t), \theta (t))\eta_k (t)-h^{kl}(r, \theta)\eta_k)\frac{\theta_l -\theta_l (t)}{|\theta-\theta (t)|}\biggr)
\end{aligned} \label{eq_lagrange_derivative_21}\\
&
\begin{aligned}
&=-\frac{\lambda t^{-\lambda-1}}{\delta_j-t^{-\lambda}} \tilde\chi \left(\frac{\theta-\theta(t)}{\delta_j-t^{-\lambda}}\right) \\
&\quad -\frac{1}{\delta_j-t^{-\lambda}}\sum_{k, l=1}^{n-1}(h^{kl}(r(t), \theta (t))\eta_k (t)-h^{kl}(r, \theta)\eta_k) F_l\left(\frac{\theta-\theta(t)}{\delta_j-t^{-\lambda}}\right).
\end{aligned}\label{eq_lagrange_derivative_22}
\end{align}
Since $|\theta-\theta (t)|\geq \delta_j -t^{-\lambda}$, $|r-r(t)|\leq 8\delta_jt$ and $|\eta-\eta(t)|\leq 2(\delta_j-t^{-\lambda})\leq 2\delta_j$ on the support of $\chi_{1j}(\partial \chi_{2j})\chi_{3j} \chi_{4j}$, we obtain the following inequality:
\begin{align*}
&|h^*(r(t), \theta (t))\eta (t)-h^*(r, \theta)\eta| \\
&\leq |h^*(r(t), \theta (t))(\eta (t)-\eta)|+|(h^*(r(t), \theta(t))-h^*(r, \theta))\eta| \\
&\leq Cf(r(t))^{-2}+C(f(r(t))^{-2}+f(r)^{-2}) \\
&\leq Cf(r(t))^{-2}
\end{align*}
and thus, by \eqref{eq_lagrange_derivative_21},
\begin{align*}
&\partial_t \chi_{2j}+\{\chi_{2j}, h_0\} \\
&\geq \frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \left(\lambda t^{-\lambda-1}-|h^*(r(t), \theta (t))\eta (t)-h^*(r, \theta)\eta|\right) \\
&\geq \frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} (\lambda t^{-\lambda-1}-Cf(r(t))^{-2}).
\end{align*}
Here we introduced a shorthand notation $\left(\sum_{l=1}^{n-1}h^{kl}(r, \theta)\eta_l\right)_{k=1}^{n-1}=h^*(r, \theta)\eta$.
We employ the estimates of classical orbits $r(t)\geq \rho_\infty t-C$, $|\eta(t)|\leq C$ by Theorem \ref{theo_classical_estimate}, $f(r)\geq C^{-1}r^{2c_0}$ by \eqref{eq_ineq_f_logbdd} and the assumption $0<\lambda<2c_0-1$, and we obtain
\[
\partial_t \chi_{2j}+\{\chi_{2j}, h_0\}\geq \frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} (\lambda t^{-\lambda-1}-Ct^{-2c_0})\geq 0
\]
for sufficiently large $t>0$ since $0<\lambda<2c_0-1$.
Furthermore we have
\[ |\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\eta^{\beta^\prime}(\partial_t \chi_{2j}+\{\chi_{2j}, h_0\})|
\leq C_{j\alpha\beta^\prime}t^{-1-\lambda} \] by differentiating \eqref{eq_lagrange_derivative_22}.
\fstep{$\bm{\partial_t \chi_{3j}+\{ \chi_{3j}, h_0\}}$}We have \begin{align*}
&\partial_t \chi_{3j} +\{ \chi_{3j}, h_0\} \\
&=\frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \biggl(\frac{\lambda t^{-\lambda-1}|\rho-\rho (t)|}{\delta_j-t^{-\lambda}} \\
&\quad +(\partial_r h_0(r(t), \theta (t), \rho(t), \eta(t))-\partial_r h_0(r, \theta, \rho, \eta)) \frac{\rho -\rho (t)}{|\rho-\rho (t)|}\biggr) \\
&=-\frac{\lambda t^{-\lambda-1}}{\delta_j-t^{-\lambda}} \tilde\chi \left(\frac{\rho-\rho (t)}{\delta_j-t^{-\lambda}}\right) \\
&\quad -\frac{\partial_r h_0(r(t), \theta (t), \rho(t), \eta(t))-\partial_r h_0(r, \theta, \rho, \eta)}{\delta_j-t^{-\lambda}} F\left(\frac{\rho-\rho (t)}{\delta_j-t^{-\lambda}}\right). \end{align*} As in the estimate of $\partial_t \chi_{2j}+\{ \chi_{2j}, h_0\}$, on the support of $\chi_{1j} \chi_{2j} (\partial \chi_{3j})\chi_{4j}$, we have the estimate \begin{align*}
&|\partial_r h_0(r(t), \theta (t), \rho(t), \eta(t))-\partial_r h_0(r, \theta, \rho, \eta)| \\
&\leq |\rho^2\partial_r c^{-2}/2|+|\partial_r h^*(r, \theta, \eta)| \\
&\leq C(r^{-1-\mu}\rho^2+r^{-2c_0}|\eta|^2) \end{align*} by Assumption \ref{assu_higher_derivative} and thus \[
\partial_t \chi_{3j}+\{ \chi_{3j}, h_0\}
\geq \frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \left(\lambda t^{-\lambda-1} -C(r^{-1-\mu}+r(t)^{-1-\mu})\right)\geq 0
\]
and
|\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime} (\partial_t \chi_{3j}+\{ \chi_{3j}, h_0\})|\leq C_{j\alpha\beta}(\lambda t^{-\lambda-1}+t^{-1-\mu})\leq C_{j\alpha\beta}t^{-1-\lambda}.
\]
\fstep{$\bm{\partial_t \chi_{4j}+\{ \chi_{4j}, h_0\}}$}By the Hamilton equation \eqref{eq_hamilton_equation_angular_momentum}, we have \begin{align*}
&\partial_t \chi_{4j}+\{ \chi_{4j}, h_0\} \\
&=\frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \biggl(\frac{\lambda t^{-\lambda-1}|\eta-\eta (t)|}{\delta_j-t^{-\lambda}} \\
&\quad -\sum_{k=1}^{n-1} (\partial_{\theta_k}h_0(r(t), \theta (t), \rho(t), \eta(t))-\partial_{\theta_k}h_0(r, \theta, \rho, \eta))\frac{\eta_k -\eta_k (t)}{|\eta-\eta (t)|}\biggr) \\
&=-\frac{\lambda t^{-\lambda-1}}{\delta_j-t^{-\lambda}}\tilde\chi \left(\frac{\eta-\eta (t)}{\delta_j- t^{-\lambda}}\right) \\
&\quad-\sum_{k=1}^{n-1}\frac{\partial_{\theta_k}h_0(r(t), \theta (t), \rho(t), \eta(t))-\partial_{\theta_k}h_0(r, \theta, \rho, \eta)}{\delta_j-t^{-\lambda}} F_k\left(\frac{\eta-\eta (t)}{\delta_j-t^{-\lambda}}\right). \end{align*} On the support of $\chi_{1j} \chi_{2j} \chi_{3j} (\partial \chi_{4j})$, we have \[
|\partial_{\theta_k}h_0(r, \theta, \rho, \eta)|
\leq |\rho^2\partial_{\theta_k}c^{-2}|+|\partial_{\theta_k}h^*(r, \theta, \eta)|\leq C(r^{-1-\mu}\rho^2+r^{-2c_0}|\eta|^2) \] and thus \[
\partial_t \chi_{4j}+\{ \chi_{4j}, h_0\}
\geq \frac{|\chi^\prime|}{\delta_j-t^{-\lambda}} \left(\lambda t^{-\lambda-1} -C(t^{-1-\mu}+t^{-2c_0})\right)\geq 0
\]
and
\[ |\partial_r^{\alpha_0}\partial_\theta^{\alpha^\prime} \partial_\rho^{\beta_0}\partial_\eta^{\beta^\prime} (\partial_t \chi_{4j}+\{ \chi_{4j}, h_0\})|\leq C_{j\alpha\beta}t^{-\lambda-1}. \qedhere
\] \end{proof}
\subsection{Proof of Theorem \ref{theo_symbol_aim}}
We prove Theorem \ref{theo_symbol_aim} in this section.
\begin{theo}\label{theo_positive_hd}
There exist constants $c_1, c_2, c_3\ldots>0$ such that, if we set
\[
F_k(t):=\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t))+t\sum_{j=1}^k c_j \hbar^j \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_j(t)),
\]
for $k\geq 1$, then the inequality
\begin{equation}\label{eq_a0_from_below}
\partial_t F_k(t)-i\hbar [F_k(t), H]
\geq O_{L^2\to L^2}(\hbar^{k+1})
\end{equation}
holds for all $\hbar\in (0, 1]$ uniformly in $t\geq 0$. \end{theo}
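\begin{rema*}
The inequality \eqref{eq_a0_from_below} is understood in the sense of quadratic forms: there exists a constant $C_k>0$, independent of $t\geq 0$ and $\hbar\in (0, 1]$, such that $\partial_t F_k(t)-i\hbar [F_k(t), H]\geq -C_k\hbar^{k+1}$. This follows from the uniform bound on $\|\mathop{\mathrm{Op}}\nolimits_\hbar (b_k(t))\|_{L^2\to L^2}$ obtained at the end of the proof.
\end{rema*}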
\begin{proof}
\initstep\step We first prove the existence of a real symbol $b_0(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ which satisfies
\begin{equation}\label{eq_hd_step1}
\begin{split}
&\partial_t (\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t)))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t)), H] \\
&\geq -\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (b_0(t))+O_{L^2\to L^2}(\hbar^\infty)
\end{split}
\end{equation}
and has an asymptotic expansion
\begin{equation}\label{eq_asymptotic_hd_step1}
b_0(\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{0j}(t, x, \xi), \quad b_{0j}(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\end{equation}
with $\rmop{supp}b_{0j}(t)\subset \rmop{supp}\psi_0(t)$.
By the Leibniz rule, we have
\begin{align*}
&\partial_t (\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t)))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t)), H] \\
&=2\rmop{Re} \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*(\partial_t \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t)), H]).
\end{align*}
We employ Theorem \ref{theo_psido_composition} and Theorem \ref{theo_laplacian_psido}, and we take a symbol $b(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ which satisfies
\begin{align*}
&2\rmop{Re} \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*(\partial_t \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t)), H]) \\
&=\mathop{\mathrm{Op}}\nolimits_\hbar \left(\partial_t |\psi_0(t)|^2+\{|\psi_0(t)|^2, h_0\}+\hbar b(t)\right)+O_{L^2\to L^2}(\hbar^\infty)
\end{align*}
and has an asymptotic expansion
\[
b(\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_j(t, x, \xi), \quad b_j(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}b_j(t)\subset \rmop{supp}\psi_0(t)$. Since $\partial_t |\psi_0(t)|^2+\{ |\psi_0(t)|^2, h_0\}\in S^0_\mathrm{cyl}(T^*M)$ and $\partial_t |\psi_0(t)|^2+\{ |\psi_0(t)|^2, h_0\}\geq 0$, we apply Theorem \ref{theo_sharp_garding} and obtain
\[
\mathop{\mathrm{Op}}\nolimits_\hbar \left(\partial_t |\psi_0(t)|^2+\{|\psi_0(t)|^2, h_0\}\right)
\geq -\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (c)+O_{L^2\to L^2}(\hbar^\infty),
\]
where $c=c(\hbar; t, x, \xi)$ has an asymptotic expansion
\[
c(\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j c_j(t, x, \xi), \quad c_j(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}c_j (t)\subset \rmop{supp}\psi_0(t)$. We obtain \eqref{eq_hd_step1} and \eqref{eq_asymptotic_hd_step1} by setting $b_0=b+c$ and $b_{0j}=b_j+c_j$.
\step We secondly prove that, if we take a sufficiently large constant $c_1>0$ and set $F_1(t):=\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))+c_1\hbar t \mathop{\mathrm{Op}}\nolimits_\hbar (a_1(t))$, then we have
\begin{equation}
\label{eq_hd_step2}
\partial_t F_1(t)-i\hbar [F_1(t), H]\geq -\hbar^2 \mathop{\mathrm{Op}}\nolimits_\hbar (b_1(t))+O_{L^2\to L^2}(\hbar^\infty),
\end{equation}
where $b_1=b_1(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ has an asymptotic expansion
\begin{equation}\label{eq_asymptotic_hd_step2}
b_1(\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{1j}(t, x, \xi), \quad b_{1j}(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\end{equation}
with $\rmop{supp}b_{1j} (t)\subset \rmop{supp}a_1(t)$.
The left hand side of \eqref{eq_hd_step2} is equal to
\begin{align*}
&\partial_t (\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t)))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t)), H] \\
&+c_1\hbar t (\partial_t \mathop{\mathrm{Op}}\nolimits_\hbar (a_1(t))-i\hbar [\mathop{\mathrm{Op}}\nolimits_\hbar(a_1(t)), H]) \\
&+c_1\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (a_1(t)).
\end{align*}
The first term is estimated by \eqref{eq_hd_step1}. Since $\partial_t a_1(t)+\{ a_1(t), h_0\}=O_{S^0_\mathrm{cyl}(T^*M)}(\jbracket{t}^{-1})$ and $\partial_t a_1(t)+\{ a_1(t), h_0\}\geq 0$ by Lemma \ref{lemm_positive_lagrange_derivative}, we apply Theorem \ref{theo_sharp_garding} to the second term and obtain a symbol $b^\prime(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ which satisfies
\[
\mathop{\mathrm{Op}}\nolimits_\hbar \left(\partial_t a_1(t)+\{a_1(t), h_0\}\right)
\geq -\hbar \jbracket{t}^{-1}\mathop{\mathrm{Op}}\nolimits_\hbar (b^\prime (t))+O_{L^2\to L^2}(\hbar^\infty)
\]
and has an asymptotic expansion
\[
b^\prime (\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b^\prime_j(t, x, \xi), \quad b^\prime_j(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}b^\prime_j (t)\subset \rmop{supp}a_1(t)$. Hence we have
\begin{equation}
\label{eq_hd_step2_wip}
\begin{split}
&\partial_t F_1(t)-i\hbar [F_1(t), H] \\
&\geq
-\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (b_0(t))-c_1\hbar^2 t\jbracket{t}^{-1}\mathop{\mathrm{Op}}\nolimits_\hbar (b^\prime (t))+c_1\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (a_1(t))
+O_{L^2\to L^2}(\hbar^\infty).
\end{split}
\end{equation}
Since $\rmop{supp} b_0(t)\subset \rmop{supp}\psi_0(t)$ mod $O(\hbar^\infty)$ by \eqref{eq_asymptotic_hd_step1} and $a_1(t)=1$ near $\rmop{supp}\psi_0(t)$, we can take a constant $c_1>0$ such that
\begin{equation}\label{eq_hd_step2_minor}
- \mathop{\mathrm{Op}}\nolimits_\hbar (b_0(t))+c_1 \mathop{\mathrm{Op}}\nolimits_\hbar (a_1(t))
\geq -\hbar \mathop{\mathrm{Op}}\nolimits_\hbar (b^{\prime\prime} (t))+O_{L^2\to L^2}(\hbar^\infty)
\end{equation}
where $b^{\prime\prime} (\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ has an asymptotic expansion
\[
b^{\prime\prime} (\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b^{\prime\prime}_j(t, x, \xi), \quad b^{\prime\prime}_j(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}b^{\prime\prime}_j (t)\subset \rmop{supp}a_1(t)$. We set $b_1(t):=b^{\prime\prime} (t)+c_1t\jbracket{t}^{-1}b^\prime (t)$. Then \eqref{eq_hd_step2_wip} and \eqref{eq_hd_step2_minor} imply \eqref{eq_hd_step2} and \eqref{eq_asymptotic_hd_step2} with $b_{1j}(t):=b^{\prime\prime}_j (t)+c_1t\jbracket{t}^{-1}b^\prime_j (t)$.
\step We repeat the procedure in Step 2 and obtain positive constants $c_2, c_3, \ldots >0$ and $b_k(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ such that, if we set
\[
F_k(t):=\mathop{\mathrm{Op}}\nolimits_\hbar (\psi_0(t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(\psi_0(t))+t\sum_{j=1}^k c_j \hbar^j \mathop{\mathrm{Op}}\nolimits_\hbar (\psi_j(t)),
\]
then
\[
\partial_t F_k(t)-i\hbar [F_k(t), H]\geq -\hbar^{k+1} \mathop{\mathrm{Op}}\nolimits_\hbar (b_k(t))+O_{L^2\to L^2}(\hbar^\infty)
\]
and $b_k=b_k(\hbar; t, x, \xi)\in S^0_\mathrm{cyl}(T^*M)$ has an asymptotic expansion
\[
b_k(\hbar; t, x, \xi)\sim \sum_{j=0}^\infty \hbar^j b_{kj}(t, x, \xi), \quad b_{kj}(t, x, \xi)\in S^{-j}_\mathrm{cyl}(T^*M)
\]
with $\rmop{supp}b_{kj} (t)\subset \rmop{supp}a_k(t)$. In particular, since $\|\mathop{\mathrm{Op}}\nolimits_\hbar (b_k(t))\|_{L^2\to L^2}$ is uniformly bounded in $t\geq 0$ and $0<\hbar \leq 1$, we obtain the desired inequality \eqref{eq_a0_from_below}. \end{proof}
\begin{proof}[Proof of Theorem \ref{theo_symbol_aim}]
(i) is an immediate consequence of Lemma \ref{lemm_estimate_tildepsi} and the definition \eqref{eq_transport} of $\psi_j(t, x, \xi)$.
(ii) Take symbols $\psi_j(t)$ in Theorem \ref{theo_positive_hd} and define $\tilde a(\hbar; t, x, \xi)\in S^{-2}_\mathrm{cyl}(T^*M)$ by an asymptotic expansion
\[
\tilde a(\hbar; t, x, \xi)\sim \sum_{j=1}^\infty c_{j-1} \hbar^j \psi_j(\hbar^{-1}t).
\]
We set
\[
A_\hbar (t):=\mathop{\mathrm{Op}}\nolimits_\hbar (a_0(\hbar^{-1}t))^*\mathop{\mathrm{Op}}\nolimits_\hbar(a_0(\hbar^{-1}t))+t\mathop{\mathrm{Op}}\nolimits_\hbar (\tilde a(\hbar; t))
\]
Then, by Theorem \ref{theo_positive_hd}, we have
\begin{align*}
\partial_t A_\hbar (t)-i[A_\hbar (t), H]
&=\hbar^{-1}(\partial_t F_k(\hbar^{-1}t)-i\hbar [F_k(\hbar^{-1}t), H])+O_{L^2\to L^2}(\hbar^k) \\
&\geq O_{L^2\to L^2}(\hbar^k)
\end{align*}
for all $k\geq 0$ uniformly in $t\in [0, t_0]$. \end{proof}
\appendix \section{Escape functions}\label{subs_escape_function}
In this appendix, we construct a diffeomorphism $\Psi: E \to \mathbb{R}_+ \times S$ in Assumption \ref{assu_manifold_with_end_0} by employing an escape function. In this paper, we employ the terminology ``escape function'' in the following sense.
\begin{defi}\label{defi_escape_function}
A continuous function $r\in C (M; [0, \infty))$ on $M$ is an \textit{escape function} if
\begin{enumerate}
\renewcommand{(\alph{enumi})}{(\roman{enumi})}
\item $r(M)=[0, \infty)$;
\item the preimage $r^{-1}([0, R])$ is compact for all $R\geq 0$;
\item $r(x)$ is $C^\infty$ in $r^{-1}(\mathbb{R}_+)$ and $\mathrm{d} r(x)\neq 0$ for all $x\in r^{-1}(\mathbb{R}_+)$. Here $\mathbb{R}_+:=(0, \infty)$.
\end{enumerate}
We set $E:=r^{-1}(\mathbb{R}_+)$ and $S:=r^{-1}(1)$. \end{defi}
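For example, if $M=\mathbb{R}^n$, then $r(x)=|x|$ is an escape function: $r$ is continuous with $r(\mathbb{R}^n)=[0, \infty)$, the preimage $r^{-1}([0, R])$ is the closed ball of radius $R$ and hence compact, and $r$ is smooth with $\mathrm{d} r\neq 0$ on $r^{-1}(\mathbb{R}_+)=\mathbb{R}^n\setminus \{0\}$. In this case $E=\mathbb{R}^n\setminus \{0\}$ and $S=S^{n-1}$ is the unit sphere.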
Let $g$ be a Riemannian metric on $M$ and $r\in C(M; [0, \infty))$ be an escape function on $M$. Then $M$ has a natural orthogonal decomposition into radial and angular variables:
\begin{prop}\label{prop_radial_angular_decomposition}
Let $r\in C(M; [0, \infty))$ be an escape function and $g$ be a Riemannian metric on $M$. We set $S=r^{-1}(1)$ and $E=r^{-1}((0, \infty))$ as in Definition \ref{defi_escape_function}. Then the vector field $\rmop{grad} r/|\rmop{grad} r|_g^2$ generates the flow $\{ \psi_t: E\to E\}_{t\geq 0}$ on $E$ with the following properties.
\begin{enumerate}
\renewcommand{(\alph{enumi})}{(\roman{enumi})}
\item $r(\psi_t(x))=r(x)+t$ for $x\in E$.
\item The mapping
\begin{equation}\label{eq_diffeo_end}
\Psi: E \longrightarrow \mathbb{R}_+ \times S, \quad
\Psi (x):=(r(x), \psi_{r(x)-1}^{-1}(x))
\end{equation}
is a diffeomorphism with the inverse function
\[
\Psi^{-1}(r, \theta)=\psi_{r-1}(\theta).
\]
\item The decomposition $TE \simeq T\mathbb{R}_+ \oplus TS$ induced by \eqref{eq_diffeo_end} is orthogonal.
\end{enumerate} \end{prop}
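Before giving the proof, we illustrate the construction in the simplest case: if $M=\mathbb{R}^n$ with the Euclidean metric and $r(x)=|x|$ as above, then $\rmop{grad} r(x)=x/|x|$ and $|\rmop{grad} r|_g=1$ on $E=\mathbb{R}^n\setminus \{0\}$, the flow is $\psi_t(x)=x+tx/|x|$, and \eqref{eq_diffeo_end} becomes the usual polar decomposition $\Psi (x)=(|x|, x/|x|)$.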
\begin{proof} If we prove that the vector field generates the flow $\{ \psi_t: E\to E\}_{t\geq 0}$, then the properties from (i) to (iii) are proved easily.
(i) The definition of $\psi_t$ implies \[
\frac{\mathrm{d}}{\mathrm{d} t}(r(\psi_t(x)))=\jbracket{\mathrm{d} r(\psi_t), \frac{\mathrm{d} \psi_t}{\mathrm{d} t}}=\frac{1}{|\rmop{grad} r(\psi_t)|_g^2}\times \underbrace{\jbracket{\mathrm{d} r(\psi_t), \rmop{grad} r(\psi_t)}}_{=|\rmop{grad} r(\psi_t)|_g^2}=1. \] Thus \[
r(\psi_t(x))=r(x)+\int_0^t \frac{\mathrm{d}}{\mathrm{d} s}(r(\psi_s(x)))\, \mathrm{d} s=r(x)+t. \]
(ii) The smoothness and the form of inverse mapping are obvious from the definition of $\Psi: E\to \mathbb{R}_+\times S$.
(iii) If $\gamma_\mathrm{rad}(t):=\Psi^{-1}(r+t, \theta)$ and $v_\mathrm{ang}\in T_\theta S$, then \[
g\left( \frac{\mathrm{d} \gamma_\mathrm{rad}}{\mathrm{d} t}(0), v_\mathrm{ang}\right)
=g\left( \frac{\rmop{grad} r}{|\rmop{grad} r|_g^2}, v_\mathrm{ang}\right)=0 \] since the gradient $\rmop{grad} r$ is orthogonal to the level sets of $r$.
It therefore remains to prove that the integral curve $t\mapsto \psi_t(x)$ is defined for all $t\geq 0$.
Fix $x\in E$ and consider the set
\[ B:=\left\{\, b\in (0, \infty) \,\middle|\,
\begin{aligned}
&\exists \gamma \in C^\infty ([0, b]; E) \text{ s.t. } \gamma \text{ is an integral curve of } \\
&\rmop{grad} r/|\rmop{grad} r|_g^2 \text{ with initial point } x
\end{aligned}
\,\right\}. \]
If one proves
\begin{enumerate}
\renewcommand{(\alph{enumi})}{(\alph{enumi})}
\item $B\neq \varnothing$,
\item that $B$ is an open subset of $(0, \infty)$ and
\item that $B$ is a closed subset of $(0, \infty)$,
\end{enumerate}
then $B=(0, \infty)$ by the connectedness of $(0, \infty)$.
(a) By the local existence theorem for ordinary differential equations and since $\mathrm{d} r\neq 0$ near $x$, there exists an integral curve $\gamma: [-\varepsilon, \varepsilon]\to E$ of $\rmop{grad} r/|\rmop{grad} r|_g^2$ with initial point $x$. Thus $B\neq \varnothing$.
(b) Let $b\in B$. Then there exists an integral curve $\gamma: [0, b]\to E$ of $\rmop{grad} r/|\rmop{grad} r|_g^2$ with the initial point $x$. Since $\gamma(b)\in E$, the vector field $\rmop{grad} r/|\rmop{grad} r|_g^2$ can be defined near $\gamma(b)$. Thus there exists an integral curve $\beta: [-\varepsilon, \varepsilon]\to M$ ($0<\varepsilon \ll 1$) of $\rmop{grad} r/|\rmop{grad} r|_g^2$ with the initial point $\gamma(b)$. Since $\gamma (t)=\beta(t-b)$ ($b-\varepsilon<t\leq b$) by the uniqueness of solutions to ordinary differential equations, we can extend $\gamma$ to \[ \Gamma (t):= \begin{cases}
\gamma(t) & \text{if } 0\leq t \leq b, \\
\beta(t-b) & \text{if } b<t<b+\varepsilon. \end{cases}\]
This $\Gamma$ is an integral curve of $\rmop{grad} r/|\rmop{grad} r|_g^2$ defined for $t\in [0, b+\varepsilon)$ with the initial point $x$. Restricting $\Gamma$ to $[0, b']$ for any $b'\in (b-\varepsilon, b+\varepsilon)$, we obtain $(b-\varepsilon, b+\varepsilon)\subset B$, so $B$ is open.
(c) It is enough to prove that if $\gamma: [0, b)\to E$ is an integral curve of $X:=\rmop{grad} r/|\rmop{grad} r|_g^2$, then the limit $\lim_{t\to b-0}\gamma (t)\in E$ exists. We denote by $d(x, y)$ the distance associated with the Riemannian metric $g$. Since
\[ d(\gamma(s), \gamma(t))\leq \int_s^t \left| \frac{\mathrm{d} \gamma}{\mathrm{d} \tau}(\tau)\right|_g\, \mathrm{d} \tau \leq |t-s| \underbrace{\max_{y\in r^{-1}([0, 1+b])} |X(y)|_g}_{\substack{\text{exists by compactness of} \\ r^{-1}([0, b+1])}}
\] for $0\leq s\leq t<b$ by the definition of the distance, the compactness of $r^{-1}([0, 1+b])$ implies the existence of the limit $\gamma(b):=\lim_{t\to b-0}\gamma (t)$. We have $\gamma (b)\in E$ since \[
r(\gamma (t))=r(x)+\int_0^t \jbracket{\mathrm{d} r(\gamma(\tau)), X(\gamma(\tau))}\, \mathrm{d} \tau=r(x)+t\geq r(x) \] for $0\leq t<b$ and thus \[
r(\gamma (b))=\lim_{t\to b-0} r(\gamma (t))\geq r(x)>0. \] Hence $b\in B$. \end{proof}
\end{document}
Equidimensionality
In mathematics, especially in topology, equidimensionality is a property of a space that the local dimension is the same everywhere.[1]
Definition (topology)
A topological space X is said to be equidimensional if for all points p in X, the dimension at p, that is dim_p(X), is constant. Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimension is an example of a non-equidimensional space.
Definition (algebraic geometry)
Main article: Equidimensional scheme
A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional.
Cohen–Macaulay ring
An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional.[2]
References
1. Wirthmüller, Klaus. A Topology Primer: Lecture Notes 2001/2002 (PDF). p. 90. Archived (PDF) from the original on 29 June 2020.
2. Sawant, Anand P. Hartshorne's Connectedness Theorem (PDF). p. 3. Archived from the original (PDF) on 24 June 2015.
\begin{document}
\begin{center} \LARGE{\textbf{A Combination of Downward Continuation and Local Approximation for Harmonic Potentials}} \\[3ex]\normalsize C. Gerhards\footnote{Geomathematics Group, University of Kaiserslautern, PO Box 3049, 67663 Kaiserslautern \\e-mail: [email protected]} \\[5ex] \end{center}
\textbf{Abstract.} This paper presents a method for the approximation of harmonic potentials that combines downward continuation of globally available data on a sphere $\Omega_R$ of radius $R$ (e.g., a satellite's orbit) with locally available data in a subregion $\Gamma_r$ of the sphere $\Omega_r$ of radius $r<R$ (e.g., the spherical Earth's surface). The approximation is based on a two-step algorithm motivated by spherical multiscale expansions: First, a convolution with a scaling kernel $\Phi_N$ deals with the downward continuation from $\Omega_R$ to $\Omega_r$, while in a second step, the result is locally refined by a convolution on $\Omega_r$ with a wavelet kernel $\tilde{\Psi}_N$. The kernels $\Phi_N$ and $\tilde{\Psi}_N$ are optimized in such a way that the former behaves well for the downward continuation while the latter shows a good localization in $\Gamma_r$.
The concept is indicated for scalar as well as vector potentials. \\
\textbf{Key Words.} Harmonic potentials, downward continuation, spatial localization, spherical basis functions. \\
\textbf{AMS Subject Classification.} 31B20, 41A35, 42C15, 65D15, 86-08, 86A22
\section{Introduction}
Recent satellite missions monitoring the Earth's gravity and magnetic field supply a large amount of data with a fairly good global coverage. They are complemented by local/regional measurements at or near the Earth's surface. While satellite data is well-suited for the reconstruction of large-scale structures, it fails for spatially localized features (due to the involved downward continuation). The opposite is true for locally/regionally available ground data. It is well-suited to capture local phenomena but fails for global trends. Therefore, in order to obtain high-resolution gravitational models, such as EGM2008 (cf. \cite{pavlis}), or geomagnetic models, such as NGDC-720\footnote{http://geomag.org/models/ngdc720.html}, it becomes necessary to combine both types of data. The upcoming Swarm satellite mission, e.g., aims at reducing the (spectral) gap between satellite data and local/regional data at or near the Earth's surface (cf. \cite{swarm}) by supplying improved data from a constellation of three satellites. Making use of this improved situation requires methods that address the different properties of satellite and ground data.
In order to deal with local/regional data sets, various types of localizing spherical basis functions have been developed during the last years and decades. Among them are spherical splines (e.g., \cite{freeden81}, \cite{shure82}), spherical cap harmonics (e.g., \cite{haines85}, \cite{thebault}), and Slepian functions (e.g., \cite{plattner13}, \cite{plattner13b}, \cite{simons06}, \cite{simons10}). Spherical multiscale methods go a bit further and allow a scale-dependent adaptation of scaling and wavelet kernels (see, e.g., \cite{dahlke}, \cite{freewind}, \cite{hol}, and \cite{sweldens} for the early development). They are particularly well-suited to combine global and local/regional data sets of different resolution and have been applied intensively to problems in geomagnetism and gravity field modeling, e.g., in \cite{bayer01}, \cite{chambodut}, \cite{freeger}, \cite{freeschrei06}, \cite{freewind}, \cite{ger12}, \cite{ger13}, \cite{hol03}, \cite{klees07}, \cite{mai05}, \cite{mayer06}, and \cite{michel01}. Matching pursuits as described, e.g., in \cite{mallat} have been adapted more recently to meet the requirements of geoscientific problems (cf. \cite{michel12}, \cite{fischer13}). Their dictionary structure allows the inclusion of a variety of global and spatially localizing basis functions, of which adequate functions are selected automatically dependent on the given data.
However, when combining satellite data on a sphere $\Omega_R$ and local/regional data on a sphere $\Omega_r$ of radius $r<R$, not only methods that are able to deal with the local/regional aspect become necessary but also those that deal with the ill-posedness of downward continuation of data on $\Omega_R$. Typically, those two problems are treated separately. (Spherical) downward continuation itself has been studied intensively, e.g., in \cite{bauer13}, \cite{freeden99}, \cite{freeden01}, \cite{schneider98}, \cite{pereverzev10}, \cite{naumov}, \cite{perev99} (for the more mathematical aspects) and \cite{cooper}, \cite{maimay03}, \cite{trompat}, \cite{tziavos} (for a stronger focus on the geophysical application). A particular approach to regularize downward continuation is given by multiscale methods (see, e.g., \cite{freeden99}, \cite{freeden01}, \cite{schneider98}, \cite{maimay03}, \cite{perev99} for the particular case of spherical geometries).
Yet, it seems that no approach intrinsically combines the two problems, especially in view of the fact that downward continuation is required for the data on $\Omega_R$ but not for the local/regional data on $\Omega_r$.
It is the goal of this paper, motivated by some of the previous multiscale methods, to introduce a two-step approximation reflecting such an intrinsic combination. More precisely, in the first step only data on $\Omega_R$ is used and downward continued by convolution with a scaling kernel $\Phi_N$. In the second step, the approximation is refined by convolving the local/regional data on $\Omega_r$ with a spatially localizing wavelet kernel $\tilde{\Psi}_N$. The connection of the two steps is given by the construction of the kernels $\Phi_N$, $\tilde{\Psi}_N$: Both kernels are designed in such a way that they simultaneously minimize a functional that contains a penalty term for the downward continuation and a penalty term for spatial localization. Thus, it is not the goal to first get a best possible approximation from satellite data only and then refine this approximation with local/regional data. It is rather to find a balance between the data on $\Omega_R$ and the data on $\Omega_r$ that in some sense leads to a best overall approximation.
\begin{figure}
\caption{The given data situation.}
\label{fig:datasit}
\end{figure}
\subsection{Brief Description of the Approach}
In the exterior of the Earth, the gravitational field and the crustal magnetic field can each be described by a harmonic potential $U$. From satellite measurements we obtain data $F_1$ on a spherical orbit $\Omega_R=\{x\in\mathbb{R}^3:|x|=R\}$ and from ground or near-ground measurements data $F_2$ in a subregion $\Gamma_r$ of the spherical Earth surface $\Omega_r$ of radius $r<R$ (cf. Figure \ref{fig:datasit}). The problem to solve is \begin{align} \Delta U=0,\quad&\textnormal{ in }\Omega_r^{ext},\label{eqn:11} \\U=F_1,\quad&\textnormal{ on }\Omega_R,\label{eqn:12} \\U=F_2,\quad&\textnormal{ on }\Gamma_r,\label{eqn:13} \end{align}
with $\Omega_r^{ext}=\{x\in\mathbb{R}^3:|x|>r\}$ denoting the space exterior to the sphere $\Omega_r$. Of interest to us is the restriction $U^+=U|_{\Gamma_r}$, i.e., the potential in the subregion $\Gamma_r$ of the Earth's surface. The knowledge of $F_1$ on $\Omega_R$ already supplies all information necessary to obtain $U^+$. However, since $F_1$ is only available from measurements at discrete points on $\Omega_R$, possible noise and the involved downward continuation render it suitable only for approximating the coarser structures of $U^+$. Additional measurements of $F_2$ in $\Gamma_r$ improve the situation.
Throughout this paper, we use an approximation $U_N$ of $U^+$ of the form \begin{align}\label{eqn:firstapprox} {U_N={T}_N[F_1]+{\tilde{W}}_N[F_2].} \end{align} It is motivated by spherical multiscale representations as introduced in \cite{schneider98} and \cite{freewind}: $T_N$ reflects a regularized version of the downward continuation operator, acting as a scaling transform on $\Omega_R$ with the convolution kernel \begin{align}\label{eqn:phin} {\Phi_N(x,y)=\sum_{n=0}^N\sum_{k=1}^{2n+1}\Phi_N^\wedge(n)\frac{1}{r}Y_{n,k}\left(\xi\right)\frac{1}{R}Y_{n,k}\left(\eta\right).} \end{align}
We frequently use $\xi$ and $\eta$ to abbreviate the unit vectors $\frac{x}{|x|}$ and $\frac{y}{|y|}$, respectively, and write $r=|x|$, $R=|y|$. Furthermore, $\{Y_{n,k}\}_{n=0,1,\ldots;k=1,\ldots,2n+1}$ denotes a set of orthonormal spherical harmonics of degree $n$ and order $k$. In order to refine the approximation with local data we use the operator $\tilde{W}_N$, which acts as a wavelet transform on $\Gamma_r$ with the convolution kernel \begin{align}\label{eqn:psin} {\tilde{\Psi}_N(x,y)=\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\tilde{\Psi}_N^\wedge(n)\frac{1}{r}Y_{n,k}\left(\xi\right)\frac{1}{r}Y_{n,k}\left(\eta\right), } \end{align} where $\kappa>1$ is a fixed constant (reflecting the higher resolution desired for the refinement). The coefficients ${\Phi}_N^\wedge(n)$ and $\tilde{\Psi}_N^\wedge(n)$ are typically called 'symbols' of the corresponding kernels. They are coupled by the relation $\tilde{\Psi}_N^\wedge(n)=\tilde{\Phi}_N^\wedge(n)-{\Phi}_N^\wedge(n)\big(\frac{r}{R}\big)^n$ (see Section \ref{subsec:comb} for details), where $\tilde{\Phi}_N^\wedge(n)$ has been introduced as an auxiliary symbol. This coupling guarantees a smooth transition from the use of global satellite data on $\Omega_R$ to local data in $\Gamma_r$. The optimization of the kernels is done by simultaneously choosing symbols $\Phi_N^\wedge(n)$, $\tilde{\Phi}_N^\wedge(n)$ that minimize a functional $\mathcal{F}$ reflecting the desired properties.
The general setting and notation as well as the choice of the functional $\mathcal{F}$ are described in Sections \ref{sec:sett} and \ref{sec:min}. Convergence results for the approximation are supplied in Section \ref{sec:theo} and numerical tests in Section \ref{sec:num}. In Section \ref{sec:vect}, we transfer the concept to a vectorial setting, where the gradient $\nabla U$ is approximated from vectorial data on $\Omega_R$ and $\Gamma_r$. This is of interest, e.g., for the crustal magnetic field where the actual sought-after quantity is the vectorial magnetic field $b=\nabla U$.
\section{General Setting}\label{sec:sett}
As mentioned in the introduction, $\{Y_{n,k}\}_{n=0,1,\ldots;k=1,\ldots,2n+1}$ denotes a set of orthonormal spherical harmonics of degree $n$ and order $k$. Aside from the space $L^2(\Omega_r)$ of square-integrable functions on $\Omega_r$, we also need the Sobolev space $\mathcal{H}_s(\Omega_r)$, $s\geq 0$. It is defined by \begin{align}\label{eqn:sobspace}
\mathcal{H}_s(\Omega_r)=\left\{F\in L^2(\Omega_r):\|F\|_{\mathcal{H}_s(\Omega_r)}^2=\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\Big(n+\textnormal{\footnotesize $\frac{1}{2}$}\Big)^{2s} \big|F_r^\wedge(n,k)\big|^2<\infty\right\}, \end{align} where $F_r^\wedge(n,k)$ denotes the Fourier coefficient of degree $n$ and order $k$, i.e., \begin{align}
F_r^\wedge(n,k)=\int_{\Omega_r}F(y)\frac{1}{r}Y_{n,k}\left(\frac{y}{|y|}\right)d\omega(y). \end{align} A further notion that we need is \begin{align}\label{polnj1} \textnormal{Pol}_{N}=\left\{K(x,y)=\sum_{n=0}^{N}\sum_{k=1}^{2n+1}K^\wedge(n) Y_{n,k}\left(\xi\right)Y_{n,k}\left(\eta\right):K^\wedge(n)\in\mathbb{R}\right\}, \end{align}
the space of all band-limited zonal kernels with maximal degree $N$ (as always, $\xi$, $\eta$ denote the unit vectors $\frac{x}{|x|}$ and $\frac{y}{|y|}$, respectively). The kernels $\Phi_N$ and $\tilde{\Psi}_N$ from \eqref{eqn:phin} and \eqref{eqn:psin} are members of such spaces. Zonal means that $K$ only depends on the scalar product $\xi\cdot\eta$, more precisely, \begin{align}\label{eqn:Kzonal} K(x,y)=\sum_{n=0}^{N}\frac{2n+1}{4\pi}K^\wedge(n) P_n\left(\xi\cdot\eta\right), \end{align} with $P_n$ being the Legendre polynomial of degree $n$ (the expressions \eqref{polnj1} and \eqref{eqn:Kzonal} are connected by the spherical addition theorem). Thus, instead of $K(\cdot,\cdot)$ acting on $\Omega_r\times\Omega_R$ or $\Omega_r\times\Omega_r$, it can also be regarded as a function $K(\cdot)$ acting on the interval $[-1,1]$. In both cases we just write $K$.
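For later numerical purposes it is worth noting that, by \eqref{eqn:Kzonal}, the evaluation of a band-limited zonal kernel reduces to the evaluation of a one-dimensional Legendre expansion in $t=\xi\cdot\eta$. The following short sketch indicates this (Python/NumPy is used purely for illustration; the degree and the symbols in the example are arbitrary placeholder choices and not part of the method proposed in this paper):
\begin{verbatim}
# Evaluate K(t) = sum_n (2n+1)/(4 pi) K^(n) P_n(t) from given symbols K^(n);
# illustrative sketch only, the symbols below are arbitrary.
import numpy as np
from numpy.polynomial.legendre import legval

def zonal_profile(t, symbols):
    n = np.arange(len(symbols))
    coeffs = (2.0 * n + 1.0) / (4.0 * np.pi) * np.asarray(symbols, dtype=float)
    return legval(t, coeffs)        # Legendre expansion in t = xi . eta

# Example: Shannon-type symbols K^(n) = 1 for n = 0, ..., 10.
print(zonal_profile(np.linspace(-1.0, 1.0, 5), np.ones(11)))
\end{verbatim}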
\subsection{Downward Continuation}\label{subsec:dc}
We return to Equations \eqref{eqn:11}--\eqref{eqn:13} in order to derive the approximation \eqref{eqn:firstapprox}, recalling that we are interested in $U^+=U|_{\Omega_r}$ (or $U^+=U|_{\Gamma_r}$, respectively). We start by considering only the equations \eqref{eqn:11} and \eqref{eqn:12}, leading to the reconstruction of $U^+$ from knowledge of $F_1$ on $\Omega_R$, $r<R$. As opposed to this, the determination of $F_1$ from $U^+$ is known as upward continuation. The operator $T^{up}:L^2(\Omega_r)\to L^2(\Omega_R)$, given by \begin{align}\label{eqn:TrR} F_1(x)={T}^{up}[U^+](x)=\int_{\Omega_r}K^{up}(x,y) U^+(y)d\omega(y),\quad x\in\Omega_R, \end{align} with \begin{align}\label{eqn:KrR} K^{up}(x,y)&=\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\sigma_n\frac{1}{R}Y_{n,k}\left(\xi\right)\frac{1}{r}Y_{n,k}\left(\eta\right) \end{align} and $\sigma_n=\left(\frac{r}{R}\right)^n$, describes this process. The downward continuation operator $T^{down}$ acts in the following way: \begin{align}\label{eqn:TRr} U^+(x)={T}^{down}[F_1](x)=\int_{\Omega_R}K^{down}(x,y) F_1(y)d\omega(y),\quad x\in\Omega_r, \end{align} with \begin{align}\label{eqn:KRr} K^{down}(x,y)&=\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\frac{1}{\sigma_n}\frac{1}{r}Y_{n,k}\left(\xi\right)\frac{1}{R}Y_{n,k}\left(\eta\right). \end{align} Since the symbols $\frac{1}{\sigma_n}=\left(\frac{R}{r}\right)^n$ grow exponentially in $n$, the operator $T^{down}$ is unbounded, which reflects the ill-posedness of downward continuation. One way to deal with the unboundedness of $T^{down}$ is a multiscale representation where $T^{down}$ is approximated by a sequence of bounded operators. Following the course of \cite{freeden99} and \cite{schneider98}, we assume $\Phi_N$ to be a scaling kernel of the form \eqref{eqn:phin} with truncation index $N$ and symbols $\Phi_N^\wedge(n)$ that satisfy \begin{itemize} \item[(a)] $\lim_{N\to\infty}\Phi_N^\wedge(n)=\frac{1}{\sigma_n}$, uniformly with respect to $n=0,1,\ldots$,
\item[(b)] $\sum_{n=0}^\infty \frac{2n+1}{4\pi} \big|\Phi_N^\wedge(n)\big|< \infty$, for all $N=0,1,\ldots$. \end{itemize} The bounded scaling transform $T_N:L^2(\Omega_R)\to L^2(\Omega_r)$ is then defined via \begin{align}\label{eqn:TjRr} {{T}_N[F_1](x)=\int_{\Omega_R}\Phi_N(x,y) F_1(y)d\omega(y),\quad x\in\Omega_r,} \end{align} and represents an approximation of $T^{down}[F_1]$. This operator can be refined further by use of the wavelet transform \begin{align}\label{eqn:WjRr} {W}_N[F_1](x)=\int_{\Omega_R}\Psi_N(x,y) F_1(y)d\omega(y),\quad x\in\Omega_r, \end{align} where the kernel $\Psi_N$ is of the form \eqref{eqn:psin} with symbols $\Psi_N^\wedge(n)=\Phi_{\lfloor\kappa N\rfloor}^\wedge(n)-\Phi_N^\wedge(n)$. An approximation of $T^{down}[F_1]$ at the higher scale $\lfloor\kappa N\rfloor$, for some fixed $\kappa>1$, is then given by \begin{align}\label{eqn:multirep1} {T}_{\lfloor\kappa N\rfloor}[F_1](x)={T}_N[F_1](x)+W_N[F_1](x),\quad x\in\Omega_r. \end{align} It has to be noted that the kernel $\Psi_N$ and the wavelet transform $W_N$ lack a tilde (as opposed to representations \eqref{eqn:firstapprox} and \eqref{eqn:psin}). This indicates that we have not taken data $F_2$ in $\Gamma_r$ into account yet. Operators and kernels with a tilde mean that information is mapped from $\Gamma_r$ to $\Gamma_r$ while a lack of the tilde typically indicates the mapping of information from $\Omega_R$ to $\Omega_r$ (or $\Gamma_r$, respectively).
\subsection{Combination of Downward Continuation and Local Data}\label{subsec:comb}
In order to incorporate data $F_2$ in $\Gamma_r$ by use of a wavelet transform, it is necessary to rewrite \eqref{eqn:WjRr}. Observing that $F_1$ and $F_2$ are only specific expressions of $U$ on the spheres $\Omega_R$ and $\Omega_r$, respectively, we find \begin{align}\label{eqn:Wjrr} &\int_{\Omega_R}\Psi_N(x,y) F_1(y)d\omega(y)=\int_{\Omega_R}\Psi_N(x,y) U(y)d\omega(y) \\&=\int_{\Omega_r}\tilde{\Psi}_N(x,y) U(y)d\omega(y)=\int_{\Omega_r}\tilde{\Psi}_N(x,y) F_2(y)d\omega(y),\quad x\in\Omega_r,\nonumber \end{align} where $\tilde{\Psi}_N$ is of the form \eqref{eqn:psin} with $\tilde{\Psi}_N^\wedge(n)=\sigma_n{\Psi}_N^\wedge(n)=\Phi_{\lfloor\kappa N\rfloor}^\wedge(n)\sigma_n-\Phi_N^\wedge(n)\sigma_n$ (the factor $\sigma_n$ stems from the downward continuation that occurs in the second equality of \eqref{eqn:Wjrr}). We slightly modify the symbol $\tilde{\Psi}_N^\wedge(n)$ by use of the auxiliary symbol $\tilde{\Phi}_N^\wedge(n)$, so that it reads \begin{align} {\tilde{\Psi}_N^\wedge(n)=\tilde{\Phi}_N^\wedge(n)-\Phi_N^\wedge(n)\sigma_n.} \end{align} This has the effect that now two parameters are available, namely $\Phi_N^\wedge(n)$, which reflects the behaviour of the operator $T_N$ responsible for the downward continuation, and $\tilde{\Phi}_N^\wedge(n)$, which offers a chance to control the localization of $\tilde{\Psi}_N$ and the behaviour of $\tilde{W}_N$ to a certain amount. The auxiliary symbol needs to satisfy \begin{itemize} \item[(a')] $\lim_{N\to\infty}\tilde{\Phi}_N^\wedge(n)=1$, uniformly with respect to $n=0,1,\ldots$,
\item[(b')] $\sum_{n=0}^\infty \frac{2n+1}{4\pi} \big|\tilde{\Phi}_N^\wedge(n)\big|< \infty$, for all $N=0,1,\ldots$. \end{itemize} Remembering that $F_2$ is only available locally in $\Gamma_r$ and paying tribute to \eqref{eqn:Wjrr}, we define the wavelet transform \begin{align}\label{eqn:Wjloc} {\tilde{W}_N[F_2](x)=\int_{\mathcal{C}_r(x,\rho)}\tilde{\Psi}_N(x,y) F_2(y)d\omega(y),\quad x\in\tilde{\Gamma}_r.} \end{align}
$\mathcal{C}_r(x,\rho)$ denotes the spherical cap $\{y\in\Omega_r:1-\frac{x}{|x|}\cdot\frac{y}{|y|}<\rho\}$ with radius $\rho\in(0,2)$ and center $x\in\Omega_r$. The subset $\tilde{\Gamma}_r\subset\Gamma_r$ is chosen such that $\mathcal{C}_r(x,\rho)\subset\Gamma_r$ for every $x\in\tilde{\Gamma}_r$ and some $\rho\in(0,2)$ that is fixed in advance. The restriction to spherical caps is somewhat artificial and serves the sole purpose of simplifying the optimization of $\tilde{\Psi}_N$ (for the actual numerical evaluation of $U^+$ later on, we integrate over all of $\Gamma_r$ to make use of all available data). Summing up, the relations \eqref{eqn:TjRr}--\eqref{eqn:Wjloc} motivate \begin{align}\label{eqn:firstapprox2} {U_N={T}_N[F_1]+{\tilde{W}}_N[F_2]} \end{align} as an approximation of $U^+$ in $\tilde{\Gamma}_r$ (compare \eqref{eqn:firstapprox} in the introduction).
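For the actual evaluation of \eqref{eqn:firstapprox2}, both convolutions reduce, by the addition theorem, to weighted sums of the zonal kernel profiles of $\Phi_N$ and $\tilde{\Psi}_N$ evaluated at the scalar products between $\frac{x}{|x|}$ and the quadrature nodes (in Section \ref{sec:num}, the integration rules of \cite{drihea} on $\Omega_R$ and of \cite{womers12} in $\Gamma_r$ are used for this purpose). The following schematic sketch illustrates this two-step evaluation at a single point $x$; it is meant purely as an illustration, the point sets, quadrature weights and kernel profiles are assumed to be given, and all names are placeholders:
\begin{verbatim}
# Schematic evaluation of U_N(x) = T_N[F1](x) + W~_N[F2](x) at one point x.
# pts_* are unit vectors of the quadrature nodes, w_* the corresponding
# weights (including the surface elements), F1/F2 the data values.
import numpy as np

def two_step_eval(x_unit, r, R, pts_R, w_R, F1, pts_loc, w_loc, F2,
                  phi_profile, psi_profile):
    # phi_profile(t) = sum_n (2n+1)/(4 pi) Phi_N^(n)  P_n(t)
    # psi_profile(t) = sum_n (2n+1)/(4 pi) Psi~_N^(n) P_n(t)
    t_R = pts_R @ x_unit                 # xi . eta for the nodes on Omega_R
    t_loc = pts_loc @ x_unit             # xi . eta for the nodes in Gamma_r
    T_N = np.sum(w_R * F1 * phi_profile(t_R)) / (r * R)    # downward continuation
    W_N = np.sum(w_loc * F2 * psi_profile(t_loc)) / r**2   # local refinement
    return T_N + W_N
\end{verbatim}
The factors $\frac{1}{rR}$ and $\frac{1}{r^2}$ stem from the radial factors in \eqref{eqn:phin} and \eqref{eqn:psin}.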
\section{The Minimizing Functional}\label{sec:min}
From now on we assume contaminated input data $F_1^{\ensuremath{\varepsilon}_1}=F_1+\ensuremath{\varepsilon}_1E_1$ and $F_2^{\varepsilon_2}=F_2+\varepsilon_2E_2$ with deterministic noise $E_1\in L^2(\Omega_R)$, $E_2\in L^2(\Omega_r)$, and $\ensuremath{\varepsilon}_1,\ensuremath{\varepsilon}_2>0$. The approximation \eqref{eqn:firstapprox2} of $U^+$ in $\tilde{\Gamma}_r$ is then modified by \begin{align}\label{eqn:firstapprox3} {U_N^\ensuremath{\varepsilon}={T}_N[F_1^{\ensuremath{\varepsilon}_1}]+{\tilde{W}}_N[F_2^{\ensuremath{\varepsilon}_2}],} \end{align}
where $\ensuremath{\varepsilon}$ stands short for $(\ensuremath{\varepsilon}_1,\ensuremath{\varepsilon}_2)^T$. It is the aim of this paper to find kernels $\Phi_N$ and $\tilde{\Psi}_N$ (determined by the symbols $\Phi_N^\wedge(n)$ and $\tilde{\Psi}_N^\wedge(n)=\tilde{\Phi}_N^\wedge(n)-\Phi_N^\wedge(n)\sigma_n$, respectively) that keep the error $\|U^+-U_N^\ensuremath{\varepsilon}\|_{L^2(\tilde{\Gamma}_r)}$ small and allow some adaptations to possible a-priori knowledge on $F_1^{\ensuremath{\varepsilon}_1}$ and $F_2^{\ensuremath{\varepsilon}_2}$ (such as noise level of the measurements or data density). We start with the estimate \begin{align}
&\|U^+ - U_N^{\varepsilon}\|_{L^2(\tilde{\Gamma}_r)}\label{eqn:errorest2}
\\&\leq \|U^+ - {T}_{N}[F_1]-\tilde{W}_{N}[F_2]\|_{L^2(\tilde{\Gamma}_r)}+\|{T}_{N}[F_1-F_1^{\varepsilon_1}]+\tilde{W}_{N}[F_2-F_2^{\varepsilon_2}]\|_{L^2(\tilde{\Gamma}_r)}\nonumber
\\&\leq \|(1-{T}_{N}{T}^{up}-\tilde{W}_{N})[U^+]\|_{L^2(\tilde{\Gamma}_r)}+\varepsilon_1\|{T}_{N}[E_1]\|_{L^2(\tilde{\Gamma}_r)}+\varepsilon_2\|\tilde{W}_{N}[E_2]\|_{L^2(\tilde{\Gamma}_r)}.\nonumber \end{align} The first term on the right hand side can be split up further in the following way: \begin{align}
& \|(1-{T}_{N}{T}^{up}-\tilde{W}_{N})[U^+]\|_{L^2(\tilde{\Gamma}_r)}\label{eqn:errorest22}
\\&\leq \frac{1}{2}\|(1-{T}_{N}{T}^{up})[U^+]\|_{L^2(\tilde{\Gamma}_r)}+ \frac{1}{2}\left\|\int_{\Omega_r}\tilde{\Psi}_N(\cdot,y)U^+(y)d\omega(y)\right\|_{L^2(\tilde{\Gamma}_r)}\nonumber
\\&\quad\, +\frac{1}{2}\left\|U^+(x)-\int_{\Omega_r}\tilde{\Phi}_N(\cdot,y)U^+(y)d\omega(y)\right\|_{L^2(\tilde{\Gamma}_r)}+ \left\|\int_{\Omega_r\setminus\mathcal{C}_r(\cdot,\rho)}\tilde{\Psi}_N(\cdot,y)U^+(y)d\omega(y)\right\|_{L^2(\tilde{\Gamma}_r)}.\nonumber \end{align} The last term on the right hand side of \eqref{eqn:errorest22} simply compensates the extension of the integration region in the two preceding terms from $\mathcal{C}_r(x,\rho)$ to all of $\Omega_r$. We continue with \begin{equation}\label{eqn:errorest23} \begin{aligned}
& \|(1-{T}_{N}{T}^{up}-\tilde{W}_{N})[U^+]\|_{L^2(\tilde{\Gamma}_r)}
\\&\leq \frac{1}{2}\left\|\sum_{n=0}^\infty\sum_{k=1}^{2n+1}(1-{\Phi}_N^\wedge(n)\sigma_n)\big(U_r^+\big)^\wedge(n,k)\frac{1}{r}Y_{n,k}\right\|_{L^2(\Omega_r)}
\\&\quad+\frac{1}{2}\left\|\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\tilde{\Psi}_{N}^\wedge(n)\big(U_r^+\big)^\wedge(n,k)\frac{1}{r}Y_{n,k}\right\|_{L^2(\Omega_r)}
\\&\quad+\frac{1}{2}\left\|\sum_{n=0}^\infty\sum_{k=1}^{2n+1}(1-\tilde{\Phi}_N^\wedge(n))\big(U_r^+\big)^\wedge(n,k)\frac{1}{r}Y_{n,k}\right\|_{L^2(\Omega_r)}
\\&\quad+\left\|\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}_{N}(\cdot,y) U^+(y)d\omega(y)\right\|_{L^2(\Omega_r)}
\\&\leq\sup_{n=0,1,\ldots}\frac{\big|1-\tilde{\Phi}_N^\wedge(n)\big|}{2\big(n+\frac{1}{2}\big)^s}\|U^+\|_{\mathcal{H}_s(\Omega_r)}+\sup_{n=0,1,\ldots}\frac{\big|1-{\Phi}_{N}^\wedge(n)\sigma_n\big|}{2\big(n+\frac{1}{2}\big)^s}\|U^+\|_{\mathcal{H}_s(\Omega_r)}
\\&\quad+\frac{1}{2}\sup_{n=0,1,\ldots}\big|\tilde{\Psi}_{N}^\wedge(n)\big|\|U^+\|_{L^2(\Omega_r)}+2\sqrt{2}\pi r^2\big\|\tilde{\Psi}_{N}\big\|_{L^2([-1,1-\rho])}\|U^+\|_{L^2(\Omega_r)}. \end{aligned} \end{equation}
For the last estimate on the right hand side, we observe that, due to the zonality of the kernels, $\sup_{x\in\Omega_r}\|\tilde{\Psi}_{N}(x,\cdot)\|_{L^2(\Omega_r\setminus\mathcal{C}_r(x,\rho))}$ coincides with $\sqrt{2\pi} r\|\tilde{\Psi}_{N}\|_{L^2([-1,1-\rho])}$, where \begin{align}\label{eqn:1dint}
\big\|\tilde{\Psi}_{N}\big\|_{L^2([-1,1-\rho])}^2=\int_{-1}^{1-\rho}|\tilde{\Psi}_{N}(t)|^2dt. \end{align}
Similar estimates can be obtained for the terms $\varepsilon_1\|{T}_{N}[E_1]\|_{L^2(\tilde{\Gamma}_r)}$ and $\varepsilon_2\|\tilde{W}_{N}[E_2]\|_{L^2(\tilde{\Gamma}_r)}$ in \eqref{eqn:errorest2}, so that we end up with an overall estimate \begin{align}
&\|U^+ - U_N^{\varepsilon}\|_{L^2(\tilde{\Gamma}_r)}\label{eqn:errorestfinal}
\\&\leq\|U^+\|_{\mathcal{H}_s(\Omega_r)}\sup_{n=0,1,\ldots}\frac{\big|1-\tilde{\Phi}_N^\wedge(n)\big|}{2\big(n+\frac{1}{2}\big)^s}+\|U^+\|_{\mathcal{H}_s(\Omega_r)}\sup_{n=0,1,\ldots}\frac{\big|1-{\Phi}_{N}^\wedge(n)\sigma_n\big|}{2\big(n+\frac{1}{2}\big)^s}\nonumber
\\&\quad+\varepsilon_1\|E_1\|_{L^2(\Omega_R)}\sup_{n=0,1,\ldots}\big|\Phi_{N}^\wedge(n)\big|
+\Big(\ensuremath{\varepsilon}_2\|E_2\|_{L^2(\Omega_r)}+\frac{1}{2}\|U^+\|_{L^2(\Omega_r)}\Big)\sup_{n=0,1,\ldots}\big|\tilde{\Psi}_{N}^\wedge(n)\big|\nonumber
\\&\quad+2\sqrt{2}\pi r^2\left(\ensuremath{\varepsilon}_2\|E_2\|_{L^2(\Omega_r)}+\|U^+\|_{L^2(\Omega_r)}\right)\big\|\tilde{\Psi}_{N}\big\|_{L^2([-1,1-\rho])}.\nonumber \end{align} Eventually, finding 'good' kernels $\Phi_N$ and $\tilde{\Psi}_N$ reduces to finding symbols $\Phi_N^\wedge(n)$, $\tilde{\Phi}_N^\wedge(n)$ that keep the right hand side of \eqref{eqn:errorestfinal} small (note that $\tilde{\Psi}_{N}^\wedge(n)$ is given by $\tilde{\Phi}_N^\wedge(n)-\Phi_N^\wedge(n)\sigma_n$). We choose these symbols to be the minimizers of the functional \begin{equation}\label{eqn:mineq2} {\begin{aligned}
\mathcal{F}(\Phi_N,\tilde{\Psi}_N)=&\sum_{n=0}^{\lfloor\kappa N\rfloor}\tilde{\alpha}_{N,n}\big|1-\tilde{\Phi}_{N}^\wedge(n)\big|^2+\sum_{n=0}^{N}\alpha_{N,n}\big|1-{\Phi}_{N}^\wedge(n)\sigma_n\big|^2
\\&+\beta_N\sum_{n=0}^{N}\big|\Phi_N^\wedge(n)\big|^2+8\pi^2 r^4\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1-\rho])}, \end{aligned}} \end{equation} with $\Phi_N$ being a member of Pol$_{N}$ and $\tilde{\Psi}_N$ a member of Pol$_{\lfloor\kappa N\rfloor}$. The suprema in \eqref{eqn:errorestfinal} have been changed to square sums to simplify the determination of the minimizers. All pre-factors appearing in \eqref{eqn:errorestfinal} have been compensated into the parameters $\tilde{\alpha}_{N,n}$, ${\alpha}_{N,n}$, and $\beta_N$. They decide how much emphasis is set on the approximation property, how much on the behaviour of the downward continuation, and how much on the localization of the kernel $\tilde{\Psi}_N$. More precisely, the first term on the right hand side of \eqref{eqn:mineq2} reflects the overall approximation error (under the assumption that undisturbed global data is available on $\Omega_R$ as well as on $\Omega_r$), the second term only measures the error due to the downward continuation of undisturbed data on $\Omega_R$. The third and fourth term can be regarded as penalty terms reflecting the norm of the regularized downward continuation operator $T_N$ and the localization of the wavelet kernel (i.e., the error made by neglecting information outside the spherical cap $\mathcal{C}_r(x,\rho)$), respectively.
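To get a first impression of the role of the individual terms, it is instructive to consider the (purely hypothetical) situation in which the localization term, i.e., the last summand in \eqref{eqn:mineq2}, is dropped: the minimization then decouples with respect to $n$ and an elementary computation yields, for $n\leq N$,
\[
\Phi_N^\wedge(n)\sigma_n=\frac{\alpha_{N,n}}{\alpha_{N,n}+\beta_N\sigma_n^{-2}},\qquad \tilde{\Phi}_{N}^\wedge(n)=1,
\]
i.e., a Tikhonov-type damping of the downward continuation combined with an unfiltered use of the local data. It is precisely the localization term that couples the two families of symbols; this coupling becomes explicit in the Gram matrix appearing in Lemma \ref{prop:minsol} below.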
\section{Theoretical Results}\label{sec:theo}
Sections \ref{sec:sett} and \ref{sec:min} have motivated the choice of the functional $\mathcal{F}$ (cf. \eqref{eqn:mineq2}) and the approximation $U_N^\ensuremath{\varepsilon}$ of $U^+$ (cf. \eqref{eqn:firstapprox3}). In this section, we want to study the approximation more rigorously with respect to its convergence. The general idea for the proof of the convergence stems from \cite{michel11} where the optimization of approximate identity kernels has been treated. We start with a lemma indicating the solution of the minimization of the functional $\mathcal{F}$.
\begin{lem}\label{prop:minsol} Assume that all parameters $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$, and $\beta_N$ are positive. Then there exist unique minimizers $\Phi_N\in\textnormal{Pol}_{N}$ and $\tilde{\Psi}_N\in\textnormal{Pol}_{\lfloor\kappa N\rfloor}$ of the functional $\mathcal{F}$ in \eqref{eqn:mineq2} that are determined by the symbols $\phi=(\Phi_N^\wedge(0)\sigma_0,\ldots,\Phi_N^\wedge(N)\sigma_{N},\tilde{\Phi}_{N}^\wedge(0),\ldots,\tilde{\Phi}_{N}^\wedge(\lfloor\kappa N\rfloor))^T$ which solve the linear equations \begin{align}\label{eqn:lineq1} \mathbf{M}\phi=\alpha, \end{align} where \begin{align}
\mathbf{M}=\left(\begin{array}{c|c}\mathbf{D}_1+\mathbf{P}_1&-\mathbf{P}_2\\\hline-\mathbf{P}_3&\mathbf{D}_2+\mathbf{P}_4\end{array}\right), \end{align} and $\alpha=(\alpha_{N,0},\ldots,\alpha_{N,N},\tilde{\alpha}_{N,0},\ldots,\tilde{\alpha}_{N,\lfloor\kappa N\rfloor})^T$. The diagonal matrices $\mathbf{D}_1$, $\mathbf{D}_2$ are given by \begin{align} \mathbf{D}_1=\textnormal{diag}\left(\frac{\beta_N}{\sigma_n^2}+\alpha_{N,n}\right)_{n=0,\ldots,N}, \qquad\mathbf{D}_2=\textnormal{diag}\big(\tilde{\alpha}_{N,n}\big)_{n=0,\ldots,\lfloor\kappa N\rfloor}, \end{align} whereas $\mathbf{P}_1$,\ldots, $\mathbf{P}_4$ are submatrices of the Gram matrix $\big(P_{n,m}^\rho\big)_{n,m=0,\ldots, \lfloor\kappa N\rfloor}$. More precisely, \begin{align} \mathbf{P}_1=\big(P_{n,m}^\rho\big)_{n,m=0,\ldots,N},&\qquad \mathbf{P}_2=\big(P_{n,m}^\rho\big)_{n=0,\ldots,N\atop m=0,\ldots,\lfloor\kappa N\rfloor}, \\\mathbf{P}_3=\big(P_{n,m}^\rho\big)_{n=0,\ldots,\lfloor\kappa N\rfloor\atop m=0,\ldots,N},&\qquad\mathbf{P}_4=\big(P_{n,m}^\rho\big)_{n,m=0,\ldots,\lfloor\kappa N\rfloor}, \end{align} with \begin{align} P_{n,m}^\rho=\frac{(2n+1)(2m+1)}{2}\int_{-1}^{1-\rho}P_n(t)P_m(t)dt. \end{align} \end{lem}
\begin{proof} First we observe that the representation \eqref{eqn:psin} of the kernel $\tilde{\Psi}_N$ and its zonality together with the addition theorem for spherical harmonics imply \small \begin{align}\label{eqn:psinorm}
&8\pi^2 r^4\left\|\tilde{\Psi}_N\right\|^2_{L^2([-1,1-\rho])}
\\&=\frac{8\pi^2 r^4}{r^4}\int_{-1}^{1-\rho}\Bigg|\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi}\left(\tilde{\Phi}_{N}^\wedge(n)-\Phi_N^\wedge(n)\sigma_n\right)P_n(t)\Bigg|^2dt\nonumber \\&=\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{m=0}^{\lfloor\kappa N\rfloor}\frac{(2n+1)(2m+1)}{2}\left(\tilde{\Phi}_{N}^\wedge(m)-\Phi_N^\wedge(m)\sigma_m\right)\left(\tilde{\Phi}_{N}^\wedge(n)-\Phi_N^\wedge(n)\sigma_n\right)\int_{-1}^{1-\rho}\!\!\!\!\!\!\!P_n(t)P_m(t)dt\nonumber \\&=\sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{m=0}^{\lfloor\kappa N\rfloor}\left(\tilde{\Phi}_{N}^\wedge(m)-\Phi_N^\wedge(m)\sigma_m\right)\left(\tilde{\Phi}_{N}^\wedge(n)-\Phi_N^\wedge(n)\sigma_n\right)P_{n,m}^\rho.\nonumber \end{align} \normalsize Inserting \eqref{eqn:psinorm} into \eqref{eqn:mineq2} and then differentiating the whole expression with respect to $\Phi_N^\wedge(n)\sigma_n$ and $\tilde{\Phi}_{N}^\wedge(n)$ leads to \begin{align} -2\alpha_{N,n}(1-\Phi_N^\wedge(n)\sigma_n)+2\beta_N\frac{\Phi_N^\wedge(n)\sigma_n}{\sigma_n^2}-2\sum_{m=0}^{\lfloor\kappa N\rfloor}\left(\tilde{\Phi}_{N}^\wedge(m)-\Phi_N^\wedge(m)\sigma_m\right)P_{n,m}^\rho, \end{align} for $n=0,\ldots,N$, and \begin{align} -2\tilde{\alpha}_{N,n}(1-\tilde{\Phi}_{N}^\wedge(n))+2\sum_{m=0}^{\lfloor\kappa N\rfloor}\left(\tilde{\Phi}_{N}^\wedge(m)-\Phi_N^\wedge(m)\sigma_m\right)P_{n,m}^\rho, \end{align} for $n=0,\ldots,\lfloor\kappa N\rfloor$, respectively. Setting the two expressions above equal to zero, a proper reordering leads to the linear equation \eqref{eqn:lineq1}. At last, we observe that the matrix $(P_{n,m}^\rho)_{n,m=0,\ldots,\lfloor\kappa N\rfloor}$ is positive definite (since it represents a Gram matrix of linearly independent functions) and that all appearing diagonal matrices are positive definite due to positive matrix entries. Thus, the matrix $\mathbf{M}$ is positive definite and the linear system \eqref{eqn:lineq1} is uniquely solvable and leads to a minimum of \eqref{eqn:mineq2}. \end{proof}
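To indicate how Lemma \ref{prop:minsol} translates into an actual computation, the following sketch assembles the Gram matrix $P_{n,m}^\rho$ by Gauss--Legendre quadrature and solves the linear system \eqref{eqn:lineq1}. It is meant as an illustration only; the parameter values at the end are arbitrary placeholder choices and not those used in Section \ref{sec:num}.
\begin{verbatim}
# Assemble M and solve M phi = alpha for the symbols Phi_N^(n) sigma_n
# (first block) and tilde Phi_N^(n) (second block); illustrative sketch.
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

def gram_matrix(deg, rho, quad_pts=400):
    # P^rho_{n,m} = (2n+1)(2m+1)/2 * int_{-1}^{1-rho} P_n(t) P_m(t) dt
    nodes, weights = leggauss(quad_pts)
    t = 0.5 * (2.0 - rho) * (nodes - 1.0) + 1.0 - rho  # map [-1,1] -> [-1,1-rho]
    w = 0.5 * (2.0 - rho) * weights
    V = legvander(t, deg)                              # V[i, n] = P_n(t_i)
    G = V.T @ (w[:, None] * V)                         # int P_n P_m dt
    fac = 2.0 * np.arange(deg + 1) + 1.0
    return 0.5 * np.outer(fac, fac) * G

def optimized_symbols(N, kN, r, R, rho, alpha, alpha_t, beta):
    sigma = (r / R) ** np.arange(N + 1)
    P = gram_matrix(kN, rho)
    D1 = np.diag(beta / sigma**2 + alpha)              # n = 0, ..., N
    D2 = np.diag(alpha_t)                              # n = 0, ..., kN
    M = np.block([[D1 + P[:N + 1, :N + 1], -P[:N + 1, :]],
                  [-P[:, :N + 1], D2 + P]])
    phi = np.linalg.solve(M, np.concatenate([alpha, alpha_t]))
    phi_sym = phi[:N + 1] / sigma                      # Phi_N^(n)
    psi_sym = phi[N + 1:].copy()                       # tilde Phi_N^(n) ...
    psi_sym[:N + 1] -= phi[:N + 1]                     # ... minus Phi_N^(n) sigma_n
    return phi_sym, psi_sym

# Placeholder parameters, not those of the numerical tests below:
N, kN = 20, 26
phi_sym, psi_sym = optimized_symbols(N, kN, r=6371.2, R=7071.2, rho=0.5,
                                     alpha=np.full(N + 1, 1.0e3),
                                     alpha_t=np.full(kN + 1, 1.0e3), beta=10.0)
\end{verbatim}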
We continue with the statement of convergence for $U_N^\varepsilon$ (cf. Theorem \ref{thm:convres}). For the proof we need a localization result for Shannon-type kernels, more precisely, a (spherical) variation of the Riemann localization property. We borrow this result as a particular case from \cite{yuguang13}:
\begin{prop}\label{prop:loc1} If $F\in\mathcal{H}_s(\Omega_r)$, $s\geq 2$, it holds that \begin{align}\label{eqn:locprop}
\lim_{N\to\infty}\left\|\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}^{Sh}_N(\cdot,y)F(y)d\omega(y)\right\|_{L^2 (\Omega_r)}=0, \end{align} where $\tilde{\Psi}^{Sh}_N$ is the Shannon-type kernel with symbols $\big(\tilde{\Psi}_N^{Sh}\big)^\wedge(n)=1$, if $N +1\leq n\leq \lfloor\kappa N\rfloor$, and $\big(\tilde{\Psi}_N^{Sh}\big)^\wedge(n)=0$, else. \end{prop}
\begin{thm}\label{thm:convres} Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some fixed $\delta>0$ and $\kappa>1$, \begin{align} \inf_{n=0,\ldots, \lfloor\kappa N\rfloor}\frac{\beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^2}+( \lfloor\kappa N\rfloor+1)^2}{{\alpha}_{N,n}}&=\mathcal{O}\left(N^{-2(1+\delta)}\right),\quad \textit{ for }N\to\infty.\label{eqn:ass1} \end{align} The same relation shall hold true for $\tilde{\alpha}_{N,n}$. Additionally, let every $N$ be associated with an $\varepsilon_1=\varepsilon_1(N)>0$ and an $\varepsilon_2=\varepsilon_2(N)>0$ such that \begin{align} \lim_{N\to\infty}\varepsilon_1 \left(\frac{R}{r}\right)^{N}=\lim_{N\to\infty}\varepsilon_2 \,{N}=0.\label{eqn:propx} \end{align} The functions $F_1:\Omega_R\to\mathbb{R}$ and $F_2:\Gamma_r\to\mathbb{R}$, $r<R$, are supposed to be such that a unique solution $U$ of \eqref{eqn:11}--\eqref{eqn:13} exists and that the restriction $U^+$ is of class $\mathcal{H}_s(\Omega_r)$, for some fixed $s\geq 2$. The erroneous input data is given by $F_1^{\ensuremath{\varepsilon}_1}=F_1+\ensuremath{\varepsilon}_1 E_1$ and $F_2^{\ensuremath{\varepsilon}_2}=F_2+\ensuremath{\varepsilon}_2 E_2$, with $E_1\in L^2(\Omega_R)$ and $E_2\in L^2(\Omega_r)$. If the kernels $\Phi_N\in\textnormal{Pol}_{N}$ and kernel $\tilde{\Psi}_N\in\textnormal{Pol}_{ \lfloor\kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from \eqref{eqn:mineq2} and if $U_N^\ensuremath{\varepsilon}$ is given as in \eqref{eqn:firstapprox3}, then \begin{align}
\lim_{N\to\infty}\left\|U^+-U_ {N}^{\varepsilon}\right\|_{L^2(\tilde{\Gamma}_r)}=0.\label{eqn:convres} \end{align} \end{thm}
\begin{proof} As an auxiliary set of kernels, we define the Shannon-type kernels $\Phi_N^{Sh}$ and $\tilde{\Psi}_N^{Sh}$ via the symbols \begin{align} \big(\Phi_N^{Sh}\big)^\wedge(n)=\left\{\begin{array}{ll}\frac{1}{\sigma_n},&n\leq N,\\0,&\textnormal{else},\end{array}\right.\qquad \big(\tilde{\Phi}_{N}^{Sh}\big)^\wedge(n)=\left\{\begin{array}{ll}1,&n\leq \lfloor\kappa N\rfloor,\\0,&\textnormal{else},\end{array}\right. \end{align} and $\big(\tilde{\Psi}_N^{Sh}\big)^\wedge(n)=\big(\tilde{\Phi}_N^{Sh}\big)^\wedge(n)-\big(\Phi_N^{Sh}\big)^\wedge(n)\sigma_n$. $\Phi_N^{Sh}$ represents the kernel of the so-called truncated singular value decomposition for the downward continuation operator ${T}^{down}$. By using the addition theorem for spherical harmonics and properties of the Legendre polynomials, we obtain \begin{align}
\mathcal{F}(\Phi_N^{Sh},\tilde{\Psi}_N^{Sh})&=\beta_N\sum_{n=0}^{N}\frac{1}{\sigma_n^2}+8\pi^2 r^4\big\|\tilde{\Psi}_N^{Sh}\big\|^2_{L^2([-1,1-\rho])}\label{eqn:shannonest}
\\&\leq \beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^{2}}+8\pi^2\sum_{n=0}^{\lfloor\kappa N\rfloor}\left(\frac{2n+1}{4\pi}\right)^2\left\|P_n\right\|^2_{L^2([-1,1])}\nonumber \\&= \beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^{2}}+(\lfloor\kappa N\rfloor+1)^2.\nonumber \end{align} The kernels that minimize $\mathcal{F}$ are denoted by $\Phi_N$ and $\tilde{\Psi}_N$. In consequence, \begin{align} \tilde{\alpha}_{N,n}\big(1-\tilde{\Phi}_{N}^\wedge(n)\big)^2&\leq\mathcal{F}(\Phi_N,\tilde{\Psi}_N)\leq \mathcal{F}(\Phi_N^{Sh},\tilde{\Psi}_N^{Sh}) \\&\leq \beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^{2}}+(\lfloor\kappa N\rfloor+1)^2,\nonumber \end{align} for all $n\leq \lfloor\kappa N\rfloor$. In combination with \eqref{eqn:ass1}, this leads to \begin{align}\label{eqn:conphij}
\big|1-\tilde{\Phi}_{N}^\wedge(n)\big|= \mathcal{O}(N^{-1-\delta}), \end{align}
uniformly for all $n\leq \lfloor\kappa N\rfloor$ and $N\to\infty$. The estimate $|1-\Phi_N^\wedge(n)\sigma_n|=\mathcal{O}(N^{-1-\delta})$ follows in the same manner and holds uniformly for all $n\leq N$ and $N\to\infty$. Finally, since $\tilde{\Phi}_{N}^\wedge(n)=0$ for $n> \lfloor\kappa N\rfloor$, we end up with \begin{align}
\lim_{N\to\infty}\sup_{n=0,1,\ldots}\frac{|1-\tilde{\Phi}_{N}^\wedge(n)|}{(n+\frac{1}{2})^{s}}=0, \end{align} which shows that the first term on the right hand side of the error estimate \eqref{eqn:errorestfinal} vanishes. The second term can be treated analogously.
Due to the uniform boundedness of $|\Phi_N^\wedge(n)\sigma_n|$, there exists some constant $C>0$ such that \eqref{eqn:propx} implies \begin{align}
\lim_{N\to\infty}\ensuremath{\varepsilon}_1\sup_{n=0,1,\ldots}\left|\Phi_N^\wedge(n)\right|\leq C\lim_{N\to\infty}\ensuremath{\varepsilon}_1\left(\frac{R}{r}\right)^{N}=0. \end{align}
so that the third term on the right hand side of \eqref{eqn:errorestfinal} vanishes as well. The fourth term does not vanish. However, taking a closer look at the derivation of this term, we see that it suffices to show that $\big\|\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\tilde{\Psi}_{N}^\wedge(n)F_r^\wedge(n,k)\frac{1}{r}Y_{n,k}\big\|_{L^2(\Omega_r)}$ tends to zero (where $F$ stands for $U^+$ or $E_2$). Using the previous results and $|\tilde{\Psi}_N^\wedge(n)|\leq |1-{\Phi}_N^\wedge(n)\sigma_n|+|1-\tilde{\Phi}_N^\wedge(n)|$, it follows that \begin{align}
\left\|\sum_{n=0}^\infty\sum_{k=1}^{2n+1}\tilde{\Psi}_{N}^\wedge(n)F_r^\wedge(n,k)\frac{1}{r}Y_{n,k}\right\|_{L^2(\Omega_r)}^2\leq \mathcal{O}(N^{-2\delta})+C\sum_{n=N+1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\left|F_r^\wedge(n,k)\right|^2, \end{align}
for some constant $C>0$. The latter sum vanishes for $N\to\infty$ since $F$ (i.e., $U^+$ or $E_2$) is of class $L^2(\Omega_r)$. In order to handle the last term of the error estimate \eqref{eqn:errorestfinal}, it would have to be shown that $\|\tilde{\Psi}_N\|_{L^2([-1,1-\rho])}$ tends to zero. This, again, is generally not true. However, a closer look at the derivation of the estimate shows that it suffices to prove that \small\begin{align}\label{eqn:altterms}
\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}_{N}(\cdot,y) U^+(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)}, \qquad \ensuremath{\varepsilon}_2\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}_{N}(\cdot,y) E_2(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)} \end{align}\normalsize vanish as $N\to\infty$. For the left expression we obtain \begin{equation}\label{eqn:locest} \begin{aligned}
&\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}_{N}(\cdot,y) U^+(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)}
\\&\leq\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}^{Sh}_{N}(\cdot,y) U^+(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)}
\\&\quad\,+\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\left(\tilde{\Psi}^{Sh}_{N}(\cdot,y)-\tilde{\Psi}_{N}(\cdot,y)\right) U^+(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)}
\\&\leq\Bigg\|\,\,\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\Psi}^{Sh}_{N}(\cdot,y) U^+(y)d\omega(y)\Bigg\|_{L^2(\Omega_r)}
\\&\quad\,+2\pi r^2\|\tilde{\Psi}^{Sh}_{N}-\tilde{\Psi}_{N}\|_{L^2([-1,1-\rho])} \|U^+\|_{L^1(\Omega_r)}, \end{aligned} \end{equation} where Young's inequality has been used in the last row. Since $U^+\in \mathcal{H}_s(\Omega_r)$, $s\geq 2$, Proposition \ref{prop:loc1} implies that the first term on the right hand side of \eqref{eqn:locest} tends to zero as $N\to\infty$. The second term on the right hand side of \eqref{eqn:locest} can be treated as follows \begin{equation} \begin{aligned}
&\big\|\tilde{\Psi}^{Sh}_{N}-\tilde{\Psi}_{N}\big\|_{L^2([-1,1-\rho])}^2
\\&\leq\sum_{n=0}^{\lfloor\kappa N\rfloor}\left(\frac{2n+1}{4\pi}\right)^2\left|\tilde{\Psi}_N^\wedge(n)-\left(\tilde{\Psi}^{Sh}_N\right)^\wedge(n)\right|^2\|P_n\|_{L^2([-1,1])}^2
\\&\leq\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2}\left|\left|\tilde{\Phi}_{N}^\wedge(n)-1\right|+\left|{\Phi}_N^\wedge(n)\sigma_n-1\right|\chi_{[0,N]}(n)\right|^2 \\&=\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{8\pi^2}\mathcal{O}\left(N^{-2(1+\delta)}\right)=\mathcal{O}\left(N^{-2\delta}\right). \end{aligned} \end{equation} By $\chi_{[0,N]}(n)$ we mean the characteristic function, i.e., it is equal to 1 if $n=0,\ldots,N$, and equal to 0 otherwise. In conclusion, the left expression in \eqref{eqn:altterms} vanishes as $N\to\infty$. For the right expression in \eqref{eqn:altterms}, we obtain the desired result in a similar but easier manner. Finally, combining all steps of the proof, we have shown that the right hand side of the error estimate \eqref{eqn:errorestfinal} converges to zero, which yields \eqref{eqn:convres}. \end{proof}
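Let us add that condition \eqref{eqn:ass1} is satisfied, e.g., by the $n$-independent choice $\alpha_{N,n}=\tilde{\alpha}_{N,n}=N^{4+2\delta}$ together with $\beta_N=\left(\frac{r}{R}\right)^{2N}$: for this choice the numerator in \eqref{eqn:ass1} remains of the order $N^2$, so that the quotient behaves like $\mathcal{O}(N^{-2(1+\delta)})$. Condition \eqref{eqn:propx} then only restricts the admissible noise levels $\varepsilon_1$ and $\varepsilon_2$.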
\begin{rem}\label{rem:lockernels} The condition $U^+\in \mathcal{H}_s(\Omega_r)$, $s\geq 2$, in Theorem \ref{thm:convres} can be relaxed to $U^+\in \mathcal{H}_s(\Omega_r)$, $s>0$, if the minimizing functional $\mathcal{F}$ in \eqref{eqn:mineq2} is substituted by \begin{align}\label{eqn:mineq3}
\mathcal{F}_{Filter}(\Phi_N,\tilde{\Psi}_N)=&\sum_{n=0}^{\lfloor \kappa N\rfloor}\tilde{\alpha}_{N,n}\big|K_N^\wedge(n)-\tilde{\Phi}_{N}^\wedge(n)\big|^2+\sum_{n=0}^{N}\alpha_{N,n}\big|K_N^\wedge(n)-{\Phi}_{N}^\wedge(n)\sigma_n\big|^2
\\&+\beta_N\sum_{n=0}^{N}\big|\Phi_N^\wedge(n)\big|^2+8\pi^2 r^4\left\|\tilde{\Psi}_N\right\|^2_{L^2([-1,1-\rho])},\nonumber \end{align} where $K_N^\wedge(n)$ are the symbols of a filtered kernel (compare, e.g., \cite{yuguang132} and references therein for more information) \begin{align}
\tilde{\Phi}_{N}^{Filter}(x,y)=\sum_{n=0}^{N}\sum_{k=1}^{2n+1}K_N^\wedge(n)Y_{n,k}\left(\frac{x}{|x|}\right)Y_{n,k}\left(\frac{y}{|y|}\right). \end{align}
Then the proof of Theorem \ref{thm:convres} follows in a very similar manner as before, just that the minimizing kernels are not compared to the Shannon-type kernel but to the filtered kernel. Opposed to the Shannon-type kernel, an appropriately filtered kernel has the localization property $\lim_{N\to\infty}\|\tilde{\Phi}_{N}^{Filter}\|_{L^1([-1,1-\rho])}=0$ which makes the condition $s\geq 2$ on the smoothness of $U^+$ (required for Proposition \ref{prop:loc1}) obsolete. \end{rem}
To finish this section, we want to comment on the localization of the kernel $\tilde{\Psi}_N$. While we have used $\|\tilde{\Psi}_N\|_{L^2([-1,1-\rho])}$ as a measure for the localization inside a spherical cap $\mathcal{C}_r(\cdot,\rho)$ (values close to zero meaning a good localization, i.e., small leakage of information into $\Omega_r\setminus\mathcal{C}_r(\cdot,\rho)$), a more suitable quantity to consider would be \begin{align}\label{eqn:altloc}
\frac{\big\|\tilde{\Psi}_N\big\|_{L^2([-1,1-\rho])}}{\big\|\tilde{\Psi}_N\big\|_{L^2([-1,1])}}. \end{align} This is essentially the expression that is minimized for the construction of Slepian functions (see, e.g., \cite{plattner13}, \cite{plattner13b}, \cite{simons06}, \cite{simons10}). However, using \eqref{eqn:altloc} as a penalty term in the functional $\mathcal{F}$ from \eqref{eqn:mineq2} would make it significantly harder to find its minimizers. Furthermore, it turns out that the kernel $\tilde{\Psi}_N$ that minimizes the original functional actually keeps the quantity \eqref{eqn:altloc} small as well (at least asymptotically in the sense that \eqref{eqn:altloc} vanishes for $N\to\infty$). The latter essentially originates from the property that $\tilde{\Psi}_N$ converges to a Shannon-type kernel (which has the desired property).
\begin{lem}\label{lem:loc} Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some fixed $\delta>0$ and $\kappa>1$, \begin{align} \inf_{n=0,\ldots, \lfloor\kappa N\rfloor}\frac{\beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^2}+( \lfloor\kappa N\rfloor+1)^2}{{\alpha}_{N,n}}&=\mathcal{O}\left(N^{-2(1+\delta)}\right),\quad \textit{ for }N\to\infty.\label{eqn:ass11} \end{align} The same relation shall hold true for $\tilde{\alpha}_{N,n}$. If the scaling kernel $\Phi_N\in\textnormal{Pol}_{N}$ and the wavelet kernel $\tilde{\Psi}_N\in\textnormal{Pol}_{\lfloor\kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from \eqref{eqn:mineq2}, then \begin{align} \label{eqn:optloc2} \lim_{N\to\infty}
\frac{\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1-\rho])}}{\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1])}}=0. \end{align} \end{lem}
\begin{proof}
The function $F_N(t)={|\tilde{\Psi}_N(t)|^2}/{\|\tilde{\Psi}_N\|^2_{L^2([-1,1])}}$ can be regarded as the density function of a random variable $t\in[-1,1]$. Thus, we can write \begin{align}
\frac{\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1-\rho])}}{\big\|\tilde{\Psi}_N\big\|^2_{L^2([-1,1])}}&=\int_{-1}^{1-\rho}F_N(t)dt=P_N(t< 1-\rho)=1-P_N\big(t-E_N(t)\geq 1-\rho-E_N(t)\big)\nonumber \\&\leq \frac{V_N(t)}{V_N(t)+ (1-\rho-E_N(t))^2}, \end{align} where we have used Cantelli's inequality for the last estimate. By $P_N(t< a)$ we mean the probability (with respect to the density function $F_N$) that $t$ lies in the interval $[-1,a)$ while $P_N(t\geq a)$ means the probability of $t$ being in the interval $[a,1]$. Furthermore, $E_N(t)$ denotes the expected value of $t$ and $V_N(t)=E(t^2)-(E(t))^2$ the variance of $t$. In other words, we are done if we can show that $\lim_{N\to\infty}E_N(t)=1$ and $\lim_{N\to \infty}V_N(t)=0$.
First, we use the addition theorem for spherical harmonics and the recurrence relation $tP_n(t)=\frac{1}{2n+1}\left((n+1)P_{n+1}(t)+nP_{n-1}(t)\right)$ to obtain \begin{align}\label{eqn:et1}
\int_{-1}^1 t|\tilde{\Psi}_N(t)|^2 dt&= \frac{1}{r^2} \sum_{n=0}^{\lfloor\kappa N\rfloor}\sum_{m=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi}\frac{2m+1}{4\pi}\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(m)\int_{-1}^1tP_n(t)P_m(t)dt
\\&=\sum_{n=0}^{\lfloor\kappa N\rfloor}\frac{2n+1}{16\pi^2r^2}\tilde{\Psi}_N^\wedge(n)\left(n\tilde{\Psi}_N^\wedge(n-1)+(n+1)\tilde{\Psi}_N^\wedge(n+1)\right)\int_{-1}^1|P_n(t)|^2 dt\nonumber \\&=\frac{1}{8\pi^2r^2}\sum_{n=0}^{\lfloor\kappa N\rfloor}n\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(n-1)+(n+1)\tilde{\Psi}_N^\wedge(n)\tilde{\Psi}_N^\wedge(n+1).\nonumber \end{align}
From \eqref{eqn:conphij} and the corresponding estimate for $\Phi_N^\wedge(n)\sigma_n$ we find $|\tilde{\Psi}_N^\wedge(n)|^2=\mathcal{O}(N^{-2(1+\delta)})$ for $n\leq N$, and $|1-\tilde{\Psi}_N^\wedge(n)|^2=|1-\tilde{\Phi}_{N}^\wedge(n)|^2=\mathcal{O}(N^{-2(1+\delta)})$, for $N+1\leq n\leq \lfloor\kappa N\rfloor$. As a consequence, \eqref{eqn:et1} implies \begin{align}\label{eqn:et11}
&\int_{-1}^1 t|\tilde{\Psi}_N(t)|^2 dt
\\&=\mathcal{O}(N^{-2\delta})+\sum_{n=N+1}^{\lfloor\kappa N\rfloor} \frac{2n+1}{8\pi^2r^2}|\tilde{\Psi}_N^\wedge(n)|^2\left(\frac{n}{2n+1}\frac{\tilde{\Psi}_N^\wedge(n-1)}{\tilde{\Psi}_N^\wedge(n)}+\frac{n+1}{2n+1}\frac{\tilde{\Psi}_N^\wedge(n+1)}{\tilde{\Psi}_N^\wedge(n)}\right),\nonumber \end{align}
where $\sup_{N+1\leq n\leq\lfloor \kappa N\rfloor}\big|1-\big(\frac{n}{2n+1}{\tilde{\Psi}_N^\wedge(n-1)}/{\tilde{\Psi}_N^\wedge(n)}+\frac{n+1}{2n+1}{\tilde{\Psi}_N^\wedge(n+1)}/{\tilde{\Psi}_N^\wedge(n)}\big)\big|\leq\varepsilon_N$, for some $\varepsilon_N>0$ that satisfies $\varepsilon_N\to0$ as $N\to\infty$. The term $\|\tilde{\Psi}_N\|_{L^2([-1,1])}^2$ can be expressed similarly by \begin{align}\label{eqn:et2}
\|\tilde{\Psi}_N\|_{L^2([-1,1])}^2=\sum_{n=0}^{\lfloor\kappa N\rfloor} \frac{2n+1}{8\pi^2r^2}|\tilde{\Psi}_N^\wedge(n)|^2=\mathcal{O}(N^{-2\delta})+\sum_{n=N+1}^{\lfloor\kappa N\rfloor} \frac{2n+1}{8\pi^2r^2}|\tilde{\Psi}_N^\wedge(n)|^2. \end{align} Thus, combining \eqref{eqn:et1}--\eqref{eqn:et2}, we obtain \begin{align}
\lim_{N\to\infty}E_N(t)=\lim_{N\to\infty}\int_{-1}^1 t|F_N(t)| dt=\lim_{N\to\infty}\frac{\int_{-1}^1 t|\tilde{\Psi}_N(t)|^2 dt}{\|\tilde{\Psi}_N\|_{L^2([-1,1])}^2}=1. \end{align} In a similar fashion it can be shown that $\lim_{N\to\infty}E_N(t^2)=1$, implying $\lim_{N\to\infty}V_N(t)=0$, which concludes the proof. \end{proof}
\section{Numerical Test}\label{sec:num}
We generate a potential $U^+$ and data $F_1$, $F_2$ from uniformly distributed random Fourier coefficients up to spherical harmonic degree $100$. Analogously, the noise $E_1$, $E_2$ is produced by uniformly distributed random Fourier coefficients up to spherical harmonic degree $110$ and is then scaled such that $\|E_1\|_{L^2(\Omega_R)}=\|F_1\|_{L^2(\Omega_R)}$ and $\|E_2\|_{L^2(\Gamma_r)}=\|F_2\|_{L^2(\Gamma_r)}$. The mean Earth radius is given by $r=6371.2$km; for the satellite orbit we choose $R=7071.2$km (i.e., a satellite altitude of $700$km above the Earth's surface). We fix $N=100$ for the approximation $U_N^\varepsilon$ and choose $\kappa>0$ such that $\lfloor \kappa N\rfloor=130$. Then $U_N^\varepsilon$ is computed for the following different settings: \begin{itemize}
\item[(a)] data $F_1^{\varepsilon_1}$ is available on an equiangular grid of $40804$ points on $\Omega_R$; data $F_2^{\varepsilon_2}$ is given on a Gauss-Legendre grid of $49062$ points in a spherical cap $\Gamma_r=\mathcal{C}_r(x_0,2\rho)$ around the North pole $x_0$. We choose $\tilde{\Gamma}_r=\mathcal{C}_r(x_0,\rho)$ and vary the radius $\rho\in\{0.5,0.02\}$,
\item[(b)] the parameters of the functional $\mathcal{F}$ are chosen among the cases $\beta_N\in\{10^{-3},\ldots,10^{4}\}$, $\tilde{\alpha}_{N,n}\in\{10^{1},\ldots, 10^{8}\}$, as well as ${\alpha}_{N,n}\in\{\tilde{\alpha}_{N,n},0.2\tilde{\alpha}_{N,n}\}$,
\item[(c)] the noise level on $\Omega_R$ is varied among $\varepsilon_1\in\{0.001,0.01,0.05,0.1\}$; for the noise in $\Gamma_r$ we choose $\varepsilon_2\in\{\varepsilon_1,2\varepsilon_1,5\varepsilon_1\}$. \end{itemize} For the numerical integration required for the evaluation of $U_N^\varepsilon$ we use the scheme from \cite{drihea} on $\Omega_R$ and the scheme from \cite{womers12} in $\mathcal{C}_r(x,\rho)$. As a reference for the optimized kernels, an approximation $U_M^{\varepsilon,Sh}$ with Shannon-type kernels is computed as well: \begin{itemize}
\item[(a)] we choose $\Phi_{M}^{Sh}$ with symbols $\big(\Phi_{M}^{Sh}\big)^\wedge(n)=\frac{1}{\sigma_n}$, if $n\leq M$, and $\big(\Phi_{M}^{Sh}\big)^\wedge(n)=0$, else, as well as $\tilde{\Psi}_{M}^{Sh}$ with symbols $\big(\tilde{\Psi}_{M}^{Sh}\big)^\wedge(n)=1$, if $M+1\leq n\leq 130$, and $\big(\tilde{\Psi}_{M}^{Sh}\big)^\wedge(n)=0$, else,
\item[(b)] the cut-off degree $M$ is varied among $M\in\{0,2,4,\ldots,100\}$. \end{itemize}
The Shannon-type kernels form a reasonable reference since the optimization of $\Phi_N$, $\tilde{\Psi}_N$ via $\mathcal{F}$ is done with respect to a Shannon-type situation. The relative errors $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for the different situations are supplied in Tables \ref{tab:err1} and \ref{tab:err2}.
As a comparison, we also compute approximations of $U^+$ using solely data $F_2$ in $\Gamma_r$ or solely data $F_1$ on $\Omega_R$, respectively. More precisely: \begin{itemize}
\item[(a)] spherical splines (based on the iterated Green function for the Beltrami operator; see, e.g., \cite{freeden81} and \cite{freeschrei09}) are applied for the approximation from local/regional data in $\Gamma_r$; for spherical cap radius $\rho=0.5$, we interpolate; for spherical cap radius $\rho=0.02$, we use a least square approach with smoothing parameter $\alpha=10^{-5}$ in order to deal with the ill-conditioned spline matrix (instead of a Gauss-Legendre grid, we choose approximately 30000 points in $\Gamma_r$ with similar spacing in longitudinal and latitudinal direction, avoiding concentration towards the poles and leading to better results for the spline method),
\item[(b)] when using satellite data on $\Omega_R$ only, we apply the truncated singular value decomposition (TSVD) for different cut-off degrees $M\in\{0,2,4,\ldots,100\}$; in addition, the regularized collocation method (RCM) from \cite{naumov} with Tikhonov regularization is tested for different regularization parameters $\alpha\in\{10^{-7},\ldots,10^{-1}\}$ (the equiangular integration grid on $\Omega_R$ and the corresponding weights are the same as chosen before). \end{itemize}
Tables \ref{tab:err4a} and \ref{tab:err4} indicate the results for the set of parameters that performed best for each of the methods above. We can make the following observations: \begin{itemize} \item[(1)] Tables \ref{tab:err1} and \ref{tab:err2} show that the optimized kernels behave better than the Shannon-type kernels over the broad range of settings. \item[(2)] The results in Table \ref{tab:err1} indicate the expected behaviour for the Shannon-type kernels: in the case $\varepsilon_1=\varepsilon_2$ only ground data is used (i.e., cut-off degree $M=0$), while the cut-off degree increases if the noise level of the ground data in $\Gamma_r$ is larger than for the satellite data on $\Omega_R$ (i.e., $\varepsilon_2>\varepsilon_1$). The same behaviour can be observed for the optimized kernels (cf. Figure \ref{fig:kernelplot}(a)), however, with a smoother transition between ground and satellite data: e.g., for the choice $\beta_n=3\cdot 10^3$, $\tilde{\alpha}_{N,n}=10^3$, ${\alpha}_{N,n}=10^3$, mostly ground data is used (i.e., $\tilde{\Psi}_N^\wedge(n)\approx 1$), but to a small extent (up to around spherical harmonic degree $n=10$) also satellite data is included (i.e., ${\Phi}_N^\wedge(n)\not=0$). This leads to an improvement of the relative error, e.g., by more than a factor two for the case $\ensuremath{\varepsilon}_1=\ensuremath{\varepsilon}_2=0.001$ (cf. Table \ref{tab:err1}(a)). \item[(3)] If we reduce the spherical cap radius from $\rho=0.5$ to $\rho=0.02$, we find a similar behaviour as described in (2) (cf. Table \ref{tab:err2} and Figure \ref{fig:kernelplot}(b)). But we also find that for a small noise level $\varepsilon_1=0.001$ the relative error of the approximation is amplified significantly (by a factor of approximately $10$) while it behaves better for larger noise levels. Thus, the approximation error for small noise levels can be in first place accounted to an insufficient localization of the used kernels. Increasing the truncation degree $N$ of the scaling and wavelet kernels $\Phi_N$ and $\tilde{\Psi}_N$, respectively, according to the size of the region $\Gamma_r$ (i.e., according to the radius $\rho$) can compensate this problem. \item[(4)] Table \ref{tab:err4a} shows that the results for splines that use only ground data in $\Gamma_r$ are comparable to the results for the optimized kernels from Tables \ref{tab:err1} and \ref{tab:err2}, at least if $\ensuremath{\varepsilon}_1=\ensuremath{\varepsilon}_2$. In case of higher noise levels in $\Gamma_r$ than on $\Omega_R$ (i.e., $\ensuremath{\varepsilon}_2>\ensuremath{\varepsilon}_1$), it becomes clear that the combination of ground and satellite data by optimized kernels yields improved results. Only for a small spherical cap radius $\rho=0.02$ and a small $\ensuremath{\varepsilon}_1=0.001$ (cf. Table \ref{tab:err2}), spherical splines behave significantly better than the optimized kernels. However, as already mentioned in (3), this can be overcome by increasing $N$ for the optimized kernels (noting that the splines that we use are non-band-limited). \item[(5)] In Table \ref{tab:err4} we can see that for our test example pure downward continuation from satellite data on $\Omega_R$ yields significantly worse results than all other tested methods. This holds true especially for TSVD. 
The RCM shows a better behaviour but can compete with the other methods only for high noise levels with $\ensuremath{\varepsilon}_2$ significantly larger than $\ensuremath{\varepsilon}_1$ (e.g., $\ensuremath{\varepsilon}_1=0.1$ and $\ensuremath{\varepsilon}_2=5\ensuremath{\varepsilon}_1$). Recalling Remark \ref{rem:lockernels}, one might consider using the symbols of the Tikhonov kernel for the functional $\mathcal{F}$ instead of the symbols of Shannon-type kernels to obtain optimized kernels $\Phi_N$ and $\tilde{\Psi}_N$ that further improve the results for the latter situation (a schematic comparison of the two spectral filters is sketched below). \end{itemize}
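Purely as an illustration of the different spectral behaviour of the two downward continuation schemes compared in Table \ref{tab:err4}, the following sketch (in Python) indicates the corresponding filter factors acting on the singular values $\sigma_n=\left(\frac{r}{R}\right)^{n}$ of the scalar upward continuation operator. It is not part of the computations reported above: the radii are hypothetical placeholders, and the regularized collocation method of \cite{naumov} may employ a discretization that differs from the plain Tikhonov filter indicated here.
\begin{verbatim}
import numpy as np

# Hypothetical radii (placeholders only); sigma_n = (r/R)^n in the scalar case.
r, R = 1.0, 1.06
n = np.arange(0, 101)
sigma = (r / R)**n

M = 56        # TSVD cut-off degree (best value for eps_1 = 0.05 in the table)
alpha = 3e-3  # Tikhonov parameter (best value for eps_1 = 0.05 in the table)

tsvd_filter = np.where(n <= M, 1.0 / sigma, 0.0)  # abrupt truncation at degree M
tikhonov_filter = sigma / (sigma**2 + alpha)      # smooth damping of high degrees
\end{verbatim}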
\begin{rem} Condition \eqref{eqn:ass1} suggests that the parameter $\beta_N$ should be significantly smaller than $\alpha_{N,n}$ and $\tilde{\alpha}_{N,n}$. This does not seem to be reflected by the parameters in Tables \ref{tab:err1} and \ref{tab:err2}. But Condition \eqref{eqn:propx} implies a relation between $N$ and $\ensuremath{\varepsilon}_1$, $\ensuremath{\varepsilon}_2$ that, for the choice $N=100$, leads to significantly smaller $\ensuremath{\varepsilon}_1$, $\ensuremath{\varepsilon}_2$ than those chosen in Tables \ref{tab:err1} and \ref{tab:err2}. Indeed, if we choose, e.g., $\ensuremath{\varepsilon}_1=\ensuremath{\varepsilon}_2=10^{-10}$ and repeat the above examples for $\rho=0.5$, we obtain a set of good parameters $\beta_N=10^{-6}$, $\alpha_{N,n}=10^4$, $\tilde{\alpha}_{N,n}=10^4$ that reflects the expected behaviour. However, our goal was to test realistic noise levels for the intended applications and show that the proposed method via optimized kernels works well for these scenarios. \end{rem}
\begin{table}
(a) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001& 0.008 & 0 & 0.003 & $3\cdot 10^3, 10^3, 10^3$\\
0.01 & 0.012& 0 & 0.010 & $7\cdot 10^3, 10^3, 10^3$\\
0.05 & 0.050 & 0 & 0.048 & $7\cdot 10^3, 10^3, 2\cdot10^2$ \\
0.1 & 0.098 & 0 & 0.096& $7\cdot 10^3, 10^3,2\cdot10^2$\\
\hline \end{tabular}\\[2ex]
(b) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=2\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001 & 0.008 & 0 & 0.003& $5\cdot 10^2, 10^3, 2\cdot10^2$\\
0.01 & 0.021 & 0 & 0.019& $3\cdot 10^{-2}, 5\cdot 10^2, 5\cdot 10^2$\\
0.05 & 0.096 & 28 & 0.096 & $3\cdot 10^{-2}, 10^3, 2\cdot 10^2$ \\
0.1 & 0.188 & 28 & 0.188& $3\cdot 10^{-2}, 10^3, 2\cdot 10^2$\\
\hline \end{tabular}\\[2ex]
(c) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=5\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001 & 0.009 & 0 & 0.005& $7\cdot 10^{-2}, 10^3, 10^3$\\
0.01 & 0.046 & 36 & 0.041& $10^{-2}, 10^2, 10^2$\\
0.05 & 0.213 & 36 & 0.203 & $10^{-2}, 10^2, 10^2$ \\
0.1 & 0.424 & 38 & 0.406 & $10^1, 3\cdot 10^6, 6\cdot10^5$\\
\hline \end{tabular}
\caption{Relative errors $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius $\rho=0.5$. The noise level $\varepsilon_2$ in $\Gamma_r$ is varied among (a)--(c).}\label{tab:err1} \end{table}
\begin{table}
(a) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001& 0.014 & 0 & 0.011 & $3\cdot 10^3, 10^3, 10^3$\\
0.01 & 0.017& 0 & 0.014 & $7\cdot 10^3, 10^3, 2\cdot 10^2$\\
0.05 & 0.052 & 0 & 0.047 & $7\cdot 10^3, 10^3, 2\cdot 10^2$ \\
0.1 & 0.099 & 0 & 0.096& $7\cdot 10^3, 10^3, 2\cdot 10^2$\\
\hline \end{tabular}\\[2ex]
(b) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=2\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001 & 0.014 & 0 & 0.011& $3\cdot 10^3, 10^3, 10^3$\\
0.01 & 0.025 & 0 & 0.020& $7\cdot 10^3, 5\cdot 10^2, 5\cdot 10^2$\\
0.05 & 0.099 & 26 & 0.094 & $7\cdot 10^{-1}, 10^3, 2\cdot 10^2$ \\
0.1 & 0.194 & 26 & 0.191& $7\cdot 10^{-1}, 10^3, 2\cdot 10^2$\\
\hline \end{tabular}\\[2ex]
(c) \,\,\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{Shannon} &\multicolumn{2}{|c|}{Optimized}\\ \cline{2-5}
$(\varepsilon_2=5\varepsilon_1)$ & Rel. Error & Cut-Off $M$ & Rel. Error & Param. $\beta_N,\tilde{\alpha}_{N,n},\alpha_{N,n}$\\\hline
0.001 & 0.015 & 0 & 0.012& $3\cdot 10^3, 5\cdot10^3, 10^3$\\
0.01 & 0.051 & 42 & 0.045& $3\cdot 10^{-2}, 5\cdot 10^2, 10^2$\\
0.05 & 0.235 & 40 & 0.234 & $7\cdot 10^{-3}, 5\cdot 10^2, 10^2$ \\
0.1 & 0.469 & 38 & 0.470 & $7\cdot 10^{-3}, 5\cdot 10^2, 10^2$\\
\hline \end{tabular}
\caption{Relative error $\|U_N^\varepsilon-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U_M^{\varepsilon,Sh}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for spherical cap radius $\rho=0.02$. The noise level $\varepsilon_2$ in $\Gamma_r$ is varied among (a)--(c).}\label{tab:err2} \end{table}
\begin{table} \begin{center}
\begin{tabular}{ | c || c | c | }
\hline
$\varepsilon_2$ & \multicolumn{2}{|c|}{Spline} \\ \cline{2-3}
& $\rho=0.5$ & $\rho=0.02$ \\
\hline
0.001 & {0.004} & 0.003 \\
0.01 & {0.009}& 0.010 \\
0.05 & {0.046} & 0.049 \\
0.1 & 0.093 & 0.100 \\
0.25 & 0.230 & 0.251 \\
0.5 & 0.461 & 0.502 \\
\hline \end{tabular} \end{center}
\caption{Spline interpolation/approximation: relative error $\|U^{\varepsilon,Spline}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ for varying radius $\rho\in\{0.02,0.5\}$.}\label{tab:err4a} \end{table}
\begin{table} \begin{center}
\begin{tabular}{ | c || c | c || c | c | }
\hline
$\varepsilon_1$ & \multicolumn{2}{|c||}{TSVD} & \multicolumn{2}{|c|}{RCM} \\ \cline{2-5}
& Rel. Error & Cut-Off M & Rel. Error & Param. $\alpha$\\
\hline
0.001 & {0.229} & 90& 0.099 & $10^{-4}$ \\
0.01 & {0.629}& 72 & 0.178& $6\cdot 10^{-4}$ \\
0.05 & {0.770} & 56 & 0.228 & $3\cdot 10^{-3}$ \\
0.1 & 0.817 & 50 & 0.244 & $6\cdot 10^{-3}$ \\
\hline \end{tabular} \end{center}
\caption{Downward continuation: relative errors $\|U_M^{\varepsilon,TSVD}-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$ and $\|U^{\varepsilon,RCM}_\alpha-U^+\|_{L^2(\tilde{\Gamma}_r)}/\|U^+\|_{L^2(\tilde{\Gamma}_r)}$.}\label{tab:err4} \end{table}
\begin{figure}
\caption{Exemplary plots of the spectral behaviour of some of the optimized kernels used (a) in Table \ref{tab:err1} and (b) in Table \ref{tab:err2}.}
\label{fig:kernelplot}
\end{figure}
\section{The Vectorial Case}\label{sec:vect}
In some geophysical problems, especially in geomagnetism, it is not the scalar potential $U$ we are interested in but the vectorial gradient $\nabla U$. Just as well, it is often the gradient $\nabla U$ that is measured on $\Omega_R$ and $\Gamma_r$. Thus, we are not confronted with the scalar equations \eqref{eqn:11}--\eqref{eqn:13} but with the vectorial problem \begin{align} b=\nabla U,&\quad \textnormal{in }\Omega_r^{ext},\label{eqn:bvp441} \\\Delta U=0,&\quad \textnormal{in }\Omega_r^{ext},\label{eqn:bvp442} \\b=f_1,&\quad \textnormal{on }\Omega_R,\label{eqn:bvp443} \\b=f_2,&\quad \textnormal{on }\Gamma_r.\label{eqn:bvp444} \end{align}
Notationwise, we typically use lower-case letters to indicate vector fields, upper case letters for scalar fields, and boldface upper-case letters for tensor fields (the abbreviation $b$ for $\nabla U$ is simply chosen from common notation in geomagnetism). Starting from equations \eqref{eqn:bvp441}--\eqref{eqn:bvp444}, the general procedure for approximating $b^+=b|_{\Gamma_r}$ is essentially the same as for the scalar case treated in the previous sections. Therefore, we will be rather brief on the description and omit the proofs. An exception is given by Proposition \ref{prop:loc}, where we supply a vectorial counterpart to the localization property of Proposition \ref{prop:loc1}, and by Proposition \ref{lem:lockernel2} in order to indicate how the vectorial setting can be reduced to the previous scalar results.
Before dealing with the actual problem, it is necessary to introduce some basic vectorial framework. Here, we mainly follow the course of \cite{freeschrei09} and use the following set of vector spherical harmonics: \begin{align} {y}_{n,k}^{(1)}\left(\xi\right)&=\xi Y_{n,k}\left(\xi\right),\label{eqn:ynk1} \\{y}_{n,k}^{(2)}\left(\xi\right)&=\frac{1}{\sqrt{n(n+1)}}\nabla^*_{\xi}Y_{n,k}\left(\xi\right), \\{y}_{n,k}^{(3)}\left(\xi\right)&=\frac{1}{\sqrt{n(n+1)}}L_{\xi}^*Y_{n,k}\left(\xi\right),\label{eqn:ynk3} \end{align}
for $n=1,2,\ldots$, and $k=1,\ldots, 2n+1$, with ${y}_{n,k}^{(1)}$ additionally being defined for $n=0$ and $k=1$. The operator $\nabla^*_\xi$ denotes the surface gradient, i.e., the tangential contribution of the gradient $\nabla_x$ (more precisely, $\nabla_x=\xi\frac{\partial}{\partial{r}}+\frac{1}{r}\nabla^*_{\xi}$, with $\xi=\frac{x}{|x|}$ and $r=|x|$). The surface curl gradient $L^*_{\xi}$ stands short for $\xi\wedge\nabla_{\xi}^*$ (with $\wedge$ being the vector product of two vectors $x,y\in\mathbb{R}^3$). Together, the functions \eqref{eqn:ynk1}--\eqref{eqn:ynk3} form an orthonormal basis of the space $l^2(\Omega)$ of vectorial functions that are square-integrable on the unit sphere. They are complemented by a set of tensorial Legendre polynomials $\mathbf{P}^{(i,i)}_n$ of degree $n$ and type $(i,i)$ that are defined via \begin{align} \mathbf{P}^{(1,1)}_n\left(\xi,\eta\right)&=\xi\otimes\eta \,P_n\left(\xi\cdot\eta\right), \\\mathbf{P}^{(2,2)}_n\left(\xi,\eta\right)&=\frac{1}{{n(n+1)}}\nabla^*_{\xi}\otimes\nabla^*_{\eta}P_n\left(\xi\cdot\eta\right), \\\mathbf{P}^{(3,3)}_n\left(\xi,\eta\right)&=\frac{1}{{n(n+1)}}L^*_{\xi}\otimes L^*_{\eta}P_n\left(\xi\cdot\eta\right). \end{align} The operator $\otimes$ denotes the tensor product $x\otimes y=xy^T$ of two vectors $x,y\in\mathbb{R}^3$. In analogy to the scalar case, vector spherical harmonics and tensorial Legendre polynomials are connected by an addition theorem. Since we are only dealing with vector fields of the form $\nabla U$ in this section, the vector spherical harmonics of type $i=3$ and the tensorial Legendre polynomials of type $(i,i)=(3,3)$ are not required and will be neglected for the remainder of this paper. The vectorial counterpart to the Sobolev space $\mathcal{H}_s(\Omega_r)$ is defined as \begin{align}\label{eqn:sobspace2}
\mathfrak{h}_s(\Omega_r)=\Big\{f\in l^2(\Omega_r) :\|f\|_{\mathfrak{h}_s(\Omega_r)}^2=\sum_{i=1}^2\sum_{n=0_i}^\infty\sum_{k=1}^{2n+1}\left(n+\textnormal{\footnotesize$\frac{1}{2}$}\right)^{2s}\big|({f}_r^{(i)})^\wedge(n,k)\big|^2<\infty\Big\}, \end{align} where $0_i=0$ if $i=1$ and $0_i=1$ if $i=2$, and $({f}_r^{(i)})^\wedge(n,k)$ denotes the Fourier coefficient of degree $n$, order $k$, and type $i$, i.e., \begin{align}
({f}_r^{(i)})^\wedge(n,k)=\int_{\Omega_r}f(y)\cdot \frac{1}{r}y_{n,k}^{(i)}\left(\frac{y}{|y|}\right)d\omega(y). \end{align} The space of band-limited tensorial kernels with maximal degree $N$ is defined as \small\begin{align}\label{eqn:polnj2} \mathbf{Pol}_{N}=\left\{\mathbf{K}(x,y)=\sum_{i=1}^2\sum_{n=0_i}^{N}\sum_{k=1}^{2n+1}\mathbf{K}^\wedge(n) {y}^{(i)}_{n,k}\left(\xi\right)\otimes{y}^{(i)}_{n,k}\left(\eta\right):\mathbf{K}^\wedge(n)\in\mathbb{R}\right\}.
\end{align}\normalsize The kernels $\mathbf{Pol}_{N}$ are tensor-zonal. In particular, this means that the absolute value $|{\mathbf{K}}(x,y)|$ depends only on the scalar product $\xi\cdot\eta$ (this does not have to hold true for the kernel ${\mathbf{K}}(x,y)$ itself).
\begin{rem} In geomagnetic modeling, another set of vector spherical harmonics is used more commonly than the one applied in this paper. We have used the basis \eqref{eqn:ynk1}--\eqref{eqn:ynk3} since it is generated by simpler differential operators, which reduces the effort to obtain a vectorial version of the localization principle later on. However, both basis systems eventually yield the same results. More information on the other basis system and its application in geomagnetism can be found, e.g., in \cite{backus96}, \cite{ger11}, \cite{ger12}, \cite{gubbins12}, and \cite{mayer06}. \end{rem}
Similar to the scalar case in Subsection \ref{subsec:dc}, there is a vectorial upward continuation operator $t^{up}$ and a vectorial downward continuation operator $t^{down}$, defined via tensorial kernels with singular values $\sigma_n=\left(\frac{r}{R}\right)^{n+1}$ and $\frac{1}{\sigma_n}$, respectively (note that in the scalar case, $\sigma_n=\left(\frac{r}{R}\right)^{n}$). The downward continuation operator can be approximated by a bounded operator $t_N:l^2(\Omega_R)\to l^2(\Omega_r)$: \begin{align} {t_N[f_1](x)=\int_{\Omega_R}\mathbf{\Phi}_N(x,y) f_1(y)d\omega(y),\quad x\in\Omega_r,} \end{align} with \begin{align} {\mathbf{\Phi}_N(x,y)=\sum_{i=1}^2\sum_{n=0_i}^{N}\sum_{k=1}^{2n+1}\mathbf{\Phi}_{N}^\wedge(n)\frac{1}{r}{y}^{(i)}_{n,k}\left(\xi\right)\otimes \frac{1}{R}{y}^{(i)}_{n,k}\left(\eta\right).}\label{eqn:scalkernel2} \end{align} The symbols $\mathbf{\Phi}_{N}^\wedge(n)$ need to satisfy the same conditions (a), (b) as for the scalar analogue in Subsection \ref{subsec:dc}. A refinement by local data in $\Gamma_r$ is achieved by the vectorial wavelet operator $\tilde{{w}}_N:l^2({\Gamma}_r)\to l^2(\tilde{\Gamma}_r)$: \begin{align}\label{eqn:wjdef} {\tilde{{w}}_N[f_2](x)=\int_{\mathcal{C}_r(x,\rho)}\tilde{\mathbf{\Psi}}_N(x,y) f_2(y)d\omega(y),\quad x\in\tilde{\Gamma}_r,} \end{align} with \begin{align} {\tilde{\mathbf{\Psi}}_N(x,y)=\sum_{i=1}^2\sum_{n=0_i}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\tilde{ \mathbf{\Psi}}_{N}^\wedge(n)\frac{1}{r}{y}^{(i)}_{n,k}\left(\xi\right)\otimes \frac{1}{r}{y}^{(i)}_{n,k}\left(\eta\right)},\label{eqn:wavkernel2} \end{align} for some fixed $\kappa>1$, and \begin{align} {\tilde{ \mathbf{\Psi}}_{N}^\wedge(n)=\tilde{ \mathbf{\Phi}}_{N}^\wedge(n)-\mathbf{\Phi}_{N}^\wedge(n)\sigma_n.} \end{align} The symbols $\tilde{ \mathbf{\Phi}}_{N}^\wedge(n)$ are assumed to satisfy conditions $($a'$)$ and $($b'$)$ from Subsection \ref{subsec:comb}. As in the scalar case, the input data $f_1$, $f_2$ is perturbed by deterministic noise $e_1\in l^2(\Omega_R)$ and $e_2\in l^2(\Omega_r)$, so that we are dealing with $f_1^{\ensuremath{\varepsilon}_1}=f_1+\ensuremath{\varepsilon}_1 e_1$ and $f_2^{\ensuremath{\varepsilon}_2}=f_2+\ensuremath{\varepsilon}_2 e_2$. An approximation of $b^+$ in $\tilde{\Gamma}_r$ is then defined by \begin{align}\label{eqn:approxbj3} {b_N^{\ensuremath{\varepsilon}}={t}_{N}[f_1^{\ensuremath{\varepsilon}_1}]+\tilde{{w}}_{N}[f_2^{\ensuremath{\varepsilon}_2}].} \end{align} A similar error estimate as in \eqref{eqn:errorestfinal} motivates the minimization of the functional \begin{equation}\label{eqn:mineq4} {\begin{aligned}
\mathcal{F}(\mathbf{\Phi}_N,\tilde{\mathbf{\Psi}}_N)=&\sum_{n=0}^{\lfloor\kappa N\rfloor}\tilde{\alpha}_{N,n}\big|1-\tilde{\mathbf{\Phi}}_{N}^\wedge(n)\big|^2+\sum_{n=0}^{N}\alpha_{N,n}\big|1-{\mathbf{\Phi}}_{N}^\wedge(n)\sigma_n\big|^2
\\&+\beta_N\sum_{n=0}^{N}\big|\mathbf{\Phi}_N^\wedge(n)\big|^2+8\pi^2 r^4\big\|\tilde{\mathbf{\Psi}}_N\big\|^2_{L^2([-1,1-\rho])}. \end{aligned}} \end{equation}
In the vectorial case, the term $\|\tilde{\mathbf{\Psi}}_N\|^2_{L^2([-1,1-\rho])}$ is not quite as basic as in the scalar case \eqref{eqn:1dint} but still expressible by elementary means. For more details on the vectorial and tensorial setup, the reader is referred to \cite{freeschrei09}. Now, we are all set to state the vectorial counterparts to the theoretical results from Section \ref{sec:theo}. As mentioned earlier, the proofs are mostly omitted due to their similarity.
\begin{lem}\label{prop:minsol4} Assume that all parameters $\tilde{\alpha}_{N,n}$, $\alpha_{N,n}$, and $\beta_N$ are positive. Then there exist unique minimizers $\mathbf{\Phi}_N\in\textnormal{\textbf{Pol}}_{N}$ and $\tilde{\mathbf{\Psi}}_N\in\textnormal{\textbf{Pol}}_{\lfloor\kappa N\rfloor}$ of the functional $\mathcal{F}$ in \eqref{eqn:mineq4} that are determined by $\phi=(\mathbf{\Phi}_N^\wedge(1)\sigma_1,\ldots,\mathbf{\Phi}_N^\wedge(N)\sigma_{N},\tilde{\mathbf{\Phi}}_{N}^\wedge(1),\ldots,\tilde{\mathbf{\Phi}}_{N}^\wedge(\lfloor\kappa N\rfloor))^T$ which solves the linear equations \begin{align}\label{eqn:lineq4} \mathbf{M}{\phi}=\alpha, \end{align} where \begin{align}
\mathbf{M}=\left(\begin{array}{c|c}\mathbf{D}_1+\mathbf{P}_1&-\mathbf{P}_2\\\hline-\mathbf{P}_3&\mathbf{D}_2+\mathbf{P}_4\end{array}\right), \end{align} and $\alpha=(\alpha_{N,1},\ldots,\alpha_{N,N},\tilde{\alpha}_{N,1},\ldots,\tilde{\alpha}_{N,\lfloor\kappa N\rfloor})^T$. The diagonal matrices $\mathbf{D}_1$, $\mathbf{D}_2$ are given by \begin{align} \mathbf{D}_1=\textnormal{diag}\left(\frac{\beta_N}{\sigma_n^2}+\alpha_{N,n}\right)_{n=0,\ldots,N}, \qquad\mathbf{D}_2=\textnormal{diag}\big(\tilde{\alpha}_{N,n}\big)_{n=0,\ldots,\lfloor\kappa N\rfloor}, \end{align} whereas $\mathbf{P}_1$,\ldots, $\mathbf{P}_4$ are submatrices of the Gram matrix $\big(P_{n,m}^\rho\big)_{n,m=0,\ldots, \lfloor\kappa N\rfloor}$. More precisely, \begin{align} \mathbf{P}_1=\big(P_{n,m}^\rho\big)_{n,m=0,\ldots,N},&\qquad \mathbf{P}_2=\big(P_{n,m}^\rho\big)_{n=0,\ldots,N\atop m=0,\ldots,\lfloor\kappa N\rfloor}, \\\mathbf{P}_3=\big(P_{n,m}^\rho\big)_{n=0,\ldots,\lfloor\kappa N\rfloor\atop m=0,\ldots,N},&\qquad\mathbf{P}_4=\big(P_{n,m}^\rho\big)_{n,m=0,\ldots,\lfloor\kappa N\rfloor}, \end{align} with \small\begin{align} P_{n,m}^\rho=&\frac{(2n+1)(2m+1)}{16\pi^2}\int_{-1}^{1-\rho}P_n\left(t\right)P_m\left(t\right)dt \\&+\frac{(2n+1)(2m+1)}{16\pi^2n(n+1)m(m+1)}\bigg(\int_{-1}^{1-\rho}(1+t^2)P_n'(t)P_m'(t)dt+\int_{-1}^{1-\rho}(1-t^2)^2P_n''(t)P_m''(t)dt\nonumber \\&\quad-\int_{-1}^{1-\rho}t(1-t^2)P_n''(t)P_m'(t)dt-\int_{-1}^{1-\rho}t(1-t^2)P_n'(t)P_m''(t)dt\bigg),\nonumber \end{align}\normalsize for $n\not=0$ and $m\not=0$. If $n=0$ or $m=0$, then $P_{n,m}^\rho=\frac{(2n+1)(2m+1)}{16\pi^2}\int_{-1}^{1-\rho}P_n\left(t\right)P_m\left(t\right)dt$. By $P_n'$, $P_n''$ we mean the first- and second-order derivatives of the Legendre polynomials. \end{lem}
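For illustration only (this is not the code used for the numerical experiments reported above), the linear system \eqref{eqn:lineq4} can be assembled directly from the formulas of Lemma \ref{prop:minsol4} by numerical quadrature. In the following Python sketch all degrees are indexed from $0$, the arrays contain the singular values and parameters, and the solution vector contains ${\mathbf{\Phi}}_N^\wedge(n)\sigma_n$ followed by $\tilde{\mathbf{\Phi}}_{N}^\wedge(n)$.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def gram_entry(n, m, rho):
    # Gram entry P_{n,m}^rho of the lemma, computed by numerical quadrature.
    Pn, Pm = Legendre.basis(n), Legendre.basis(m)
    a, b = -1.0, 1.0 - rho
    val = (2*n+1)*(2*m+1)/(16*np.pi**2) * quad(lambda t: Pn(t)*Pm(t), a, b)[0]
    if n == 0 or m == 0:
        return val
    dPn, dPm = Pn.deriv(), Pm.deriv()
    d2Pn, d2Pm = Pn.deriv(2), Pm.deriv(2)
    extra  = quad(lambda t: (1+t**2)*dPn(t)*dPm(t), a, b)[0]
    extra += quad(lambda t: (1-t**2)**2*d2Pn(t)*d2Pm(t), a, b)[0]
    extra -= quad(lambda t: t*(1-t**2)*d2Pn(t)*dPm(t), a, b)[0]
    extra -= quad(lambda t: t*(1-t**2)*dPn(t)*d2Pm(t), a, b)[0]
    return val + (2*n+1)*(2*m+1)/(16*np.pi**2*n*(n+1)*m*(m+1)) * extra

def optimized_symbols(N, kN, rho, sigma, alpha, alpha_tilde, beta):
    # Assemble M and the right-hand side of M phi = alpha and solve for phi.
    P = np.array([[gram_entry(n, m, rho) for m in range(kN+1)]
                  for n in range(kN+1)])
    D1 = np.diag(beta / sigma[:N+1]**2 + alpha[:N+1])
    D2 = np.diag(alpha_tilde[:kN+1])
    M = np.block([[D1 + P[:N+1, :N+1], -P[:N+1, :]],
                  [-P[:, :N+1],         D2 + P]])
    rhs = np.concatenate([alpha[:N+1], alpha_tilde[:kN+1]])
    return np.linalg.solve(M, rhs)
\end{verbatim}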
To show that the left expression of \eqref{eqn:altterms} vanishes in the scalar case as $N\to\infty$, we used the localization property from Proposition \ref{prop:loc1}. A similar result is needed to prove Theorem \ref{thm:convres2}. The corresponding vectorial localization property for the Shannon-type kernel is stated in the next proposition.
\begin{prop}\label{prop:loc} If $f\in\mathfrak{h}_s(\Omega_r)$, $s\geq 2$, it holds that \begin{align}\label{eqn:locpropvect}
\lim_{N\to\infty}\left\|\int_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\tilde{\mathbf{\Psi}}^{Sh}_N(\cdot,y)f(y)d\omega(y)\right\|_{l^2 (\Omega_r)}=0, \end{align} where $\tilde{\mathbf{\Psi}}^{Sh}_N$ is the tensorial Shannon-type kernel with symbols $\big(\tilde{\mathbf{\Psi}}_N^{Sh}\big)^\wedge(n)=1$, if $N +1\leq n\leq \lfloor\kappa N\rfloor$, and $\big(\tilde{\mathbf{\Psi}}_N^{Sh}\big)^\wedge(n)=0$, else. \end{prop}
\begin{proof} We first observe that \begin{align}\label{eqn:vectloceq1} &\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\tilde{\mathbf{\Psi}}^{Sh}_{N}(x,y) f(y)d\omega(y) \\&=\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\left(\sum_{i=1}^2\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(i)}\left(\xi\right)\otimes \frac{1}{r}y_{n,k}^{(i)}\left(\eta\right)\right) f(r\eta)d\omega(r\eta)\nonumber \\&=\sum_{i=1}^2\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(i)}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{1}{r}y_{n,k}^{(i)}\left(\eta\right)\cdot f(r\eta)d\omega(r\eta),\nonumber \end{align}
where, as always, $\xi=\frac{x}{|x|}$ and $\eta=\frac{y}{|y|}$. Since $f$ is of class $\mathfrak{h}_s(\Omega_r)$, $s\geq 2$, it follows that $f(x)=\xi F_1(x)+\nabla^*_\xi F_2(x)$ for some scalar functions $F_1$ of class $\mathcal{H}_s(\Omega_r)$ and $F_2$ of class $\mathcal{H}_{s+1}(\Omega_r)$. Taking a closer look at the terms of type $i=1$ in \eqref{eqn:vectloceq1}, using the orthogonality of $\xi$ and $\nabla^*_\xi$, we obtain \begin{align} &\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(1)}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{1}{r}y_{n,k}^{(1)}\left(\eta\right)\cdot f(r\eta)d\omega(r\eta)\label{eqn:eqn1} \\&=\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(1)}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{1}{r}y_{n,k}^{(1)}\left(\eta\right)\cdot \left(\eta F_1(r\eta)+\nabla^*_{\eta}F_2(r\eta)\right)d\omega(r\eta)\nonumber \\&=\sum_{k=1}^{2n+1}\frac{1}{r^2}\xi Y_{n,k}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}Y_{n,k}\left(\eta\right)F_1(r\eta)d\omega(r\eta)\nonumber \\&=\frac{1}{r^2}\xi\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{2n+1}{4\pi}P_n\left(\xi\cdot\eta\right)F_1(r\eta)d\omega(r\eta).\nonumber \end{align} For the terms of type $i=2$, the use of Green's formulas (cf. \cite{ger11} for more details) and the addition theorem for spherical harmonics implies \begin{align} &\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(2)}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{1}{r}y_{n,k}^{(2)}\left(\eta\right)\cdot f(r\eta)d\omega(r\eta)\label{eqn:shit} \\&=\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r}y_{n,k}^{(2)}\left(\xi\right)\nonumber \int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{1}{r}\frac{1}{\sqrt{n(n+1)}}\nabla^*_{\eta}Y_{n,k}\left(\eta\right)\cdot \nabla^*_{\eta}F_2(r\eta)d\omega(r\eta)\nonumber \\&=-\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r^2}\frac{1}{{n(n+1)}}\nabla^*_{\xi}Y_{n,k}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\left(\Delta^*_{\eta}Y_{n,k}\left(\eta\right)\right)F_2(r\eta)d\omega(r\eta)\nonumber \\&\quad+\xi\wedge\sum_{k=1}^{2n+1}\frac{1}{r^2}\frac{1}{{n(n+1)}}\nabla^*_{\xi}Y_{n,k}\left(\xi\right)\int_{\partial\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{\partial}{\partial\nu_\eta}Y_{n,k}\left(\eta\right)d\sigma(r\eta)\nonumber
\\&=\sum_{k=1}^{2n+1}\frac{1}{r^2}L^*_{\xi}Y_{n,k}\left(\xi\right)\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}Y_{n,k}\left(\eta\right)F_2(r\eta)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{r^2}\frac{1}{{ n(n+1)}}\int_{\partial\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{2n+1}{{4\pi}}L^*_{\xi}\frac{\partial}{\partial\nu_\eta}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta)\nonumber \\&=-\frac{1}{r^2}\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{2n+1}{4\pi}L_{\eta}^*P_n\left(\xi\cdot\eta\right)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{r^2}\frac{1}{{ n(n+1)}}\int_{\partial\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{2n+1}{{4\pi}}L^*_{\xi}\frac{\partial}{\partial\nu_\eta}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta)\nonumber \\&=\frac{1}{r^2}\int\limits_{\Omega_r\setminus\mathcal{C}_r(x,\rho)}\frac{2n+1}{4\pi}P_n\left(\xi\cdot\eta\right)L_{\eta}^*F_2(r\eta)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{r^2}\int_{\partial\mathcal{C}_r(x,\rho)}\tau_{\eta}F_2(r\eta)\frac{2n+1}{{4\pi}}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta)\nonumber \\&\quad+\frac{1}{r^2}\frac{1}{{ n(n+1)}}\int_{\partial\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{2n+1}{{4\pi}}L^*_{\xi}\frac{\partial}{\partial\nu_\eta}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta)\nonumber \end{align}\normalsize By $\frac{\partial}{\partial\nu_\eta}$ we mean the normal derivative at $r\eta\in\partial\mathcal{C}_r(x,\rho)$, and $\tau_\eta$ denotes the tangential unit vector at $r\eta\in\partial\mathcal{C}_r(x,\rho)$. The reason for the application of $\xi\wedge$ in \eqref{eqn:shit} is that we can then work with the operator $L_{\xi}^*$ instead of $\nabla_\xi ^*$. The surface curl gradient has the nice property $L_{\xi}^*P_{n}\left(\xi\cdot\eta\right)=-L_{\eta}^*P_{n}\left(\xi\cdot\eta\right)$, which we have used in the seventh line of Equation \eqref{eqn:shit}. Furthermore, since $\xi$ and $\nabla_{\xi}^*$ are orthogonal, the convergence of \eqref{eqn:shit} for $N\to \infty$ also implies convergence for the same expression without the application of $\xi\wedge$. Now, we can use the scalar localization result from Proposition \ref{prop:loc1} to obtain \begin{align}
\lim_{N\to\infty}\Bigg\|\frac{\cdot}{|\cdot|}\int\limits_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\Bigg(\underbrace{\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi r^2}P_n\left(\frac{\cdot}{|\cdot|}\cdot\eta\right)}_{=\tilde{\Psi}_N^{Sh}(\cdot,y)}\Bigg)F_1(r\eta)d\omega(r\eta)\Bigg\|_{l^2(\Omega_r)}=0 \end{align} and \begin{align}
\lim_{N\to\infty}\Bigg\|\int\limits_{\Omega_r\setminus\mathcal{C}_r(\rho,\cdot)}\left(\sum_{n=N+1}^{\lfloor\kappa N\rfloor}\frac{2n+1}{4\pi r^2}P_n\left(\frac{\cdot}{|\cdot|}\cdot\eta\right) \right)L_{\eta}^*F_2(r\eta)d\omega(r\eta)\Bigg\|_{l^2(\Omega_r)}=0, \end{align} which deals with the relevant contributions to the asymptotic behaviour of \eqref{eqn:eqn1} and \eqref{eqn:shit}. It remains to investigate the boundary integrals appearing on the right hand side of \eqref{eqn:shit}. Observing the differential equation $(1-t^2)P_n''(t)-2tP_n'(t)+n(n+1)P_n(t)=0$, $t\in[-1,1]$, for Legendre polynomials leads to \begin{align}\label{eqn:bvintzero} &\!\!\!\!\!\!\frac{1}{r^2}\int_{\partial\mathcal{C}_r(x,\rho)}\tau_{\eta}F_2(r\eta)\frac{2n+1}{{4\pi}}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta) \\&\!\!\!\!\!\!+\frac{1}{r^2}\frac{1}{{ n(n+1)}}\int_{\partial\mathcal{C}_r(x,\rho)}F_2(r\eta)\frac{2n+1}{{4\pi}}L^*_{\xi}\frac{\partial}{\partial\nu_\eta}P_{n}\left(\xi\cdot\eta\right)d\sigma(r\eta)\nonumber \\=&\frac{1}{r^2}\int_{\partial\mathcal{C}_r(x,\rho)}\tau_{\eta}F_2(r\eta)\frac{2n+1}{{4\pi}}\bigg(P_{n}(\xi\cdot\eta)+\frac{1}{ n(n+1)}\big(1-(\xi\cdot\eta)^2\big)P_{n}''(\xi\cdot\eta)\nonumber \\&\qquad\qquad\qquad\qquad\qquad\qquad-\frac{2}{ n(n+1)}(\xi\cdot\eta)P_{n}'(\xi\cdot\eta)\bigg)d\sigma(r\eta)\nonumber \\=&0.\nonumber \end{align} Combining \eqref{eqn:vectloceq1}--\eqref{eqn:bvintzero} implies the desired property \eqref{eqn:locpropvect}. \end{proof}
\begin{thm}\label{thm:convres2} Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some $\delta>0$ and $\kappa>1$, \begin{align} \inf_{n=0,\ldots, \lfloor \kappa N\rfloor}\frac{\beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^2}+(\lfloor \kappa N\rfloor+1)^2}{{\alpha}_{N,n}}&=\mathcal{O}\left(N^{-2(1+\delta)}\right),\quad \textit{ for }N\to\infty.\label{eqn:ass4} \end{align} An analogous relation shall hold true for $\tilde{\alpha}_{N,n}$. Additionally, let every $N$ be associated with an $\varepsilon_1=\varepsilon_1(N)>0$ and an $\varepsilon_2=\varepsilon_2(N)>0$ such that \begin{align} \lim_{N\to\infty}\varepsilon_1 \left(\frac{R}{r}\right)^{N}=\lim_{N\to\infty}\varepsilon_2\, {N}=0.\label{eqn:propx3} \end{align} The functions $f_1:\Omega_R\to\mathbb{R}^3$ and $f_2:\Gamma_r\to\mathbb{R}^3$, $r<R$, are supposed to be such that a unique solution $b$ of \eqref{eqn:bvp441}--\eqref{eqn:bvp444} exists and that the restriction $b^+$ is of class $\mathfrak{h}_s(\Omega_r)$, for some fixed $s\geq 2$. The erroneous input data is given by $f_1^{\ensuremath{\varepsilon}_1}=f_1+\ensuremath{\varepsilon}_1 e_1$ and $f_2^{\ensuremath{\varepsilon}_2}=f_2+\ensuremath{\varepsilon}_2 e_2$, with $e_1\in l^2(\Omega_R)$ and $e_2\in l^2(\Omega_r)$. If the scaling kernel $\mathbf{\Phi}_N\in\textnormal{\textbf{Pol}}_{N}$ and the wavelet kernel $\tilde{\mathbf{\Psi}}_N\in\textnormal{\textbf{Pol}}_{\lfloor \kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from \eqref{eqn:mineq4} and if $b_N^\ensuremath{\varepsilon}$ is given as in \eqref{eqn:approxbj3}, then \begin{align}
\lim_{N\to\infty}\left\|b^+-b_ {N}^{\varepsilon}\right\|_{l^2(\tilde{\Gamma}_r)}=0.\label{eqn:convres3} \end{align} \end{thm}
\begin{lem}\label{lem:lockernel2} Assume that the parameters $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and $\beta_N$ are positive and suppose that, for some $\delta>0$ and $\kappa>1$, \begin{align} \inf_{n=0,\ldots, \lfloor \kappa N\rfloor}\frac{\beta_N\frac{1-\left(\frac{R}{r}\right)^{2(N+1)}}{1-\left(\frac{R}{r}\right)^2}+(\lfloor \kappa N\rfloor+1)^2}{{\alpha}_{N,n}}&=\mathcal{O}\left(N^{-2(1+\delta)}\right),\quad \textit{ for }N\to\infty.\label{eqn:ass5} \end{align} An analogous relation shall hold true for $\tilde{\alpha}_{N,n}$. If the scaling kernel $\mathbf{\Phi}_N\in\textnormal{\textbf{Pol}}_{N}$ and the wavelet kernel $\tilde{\mathbf{\Psi}}_N\in\textnormal{\textbf{Pol}}_{\lfloor \kappa N\rfloor}$ are the minimizers of the functional $\mathcal{F}$ from \eqref{eqn:mineq4}, then \begin{align} \label{eqn:optloc3}
\lim_{N\to\infty}\frac{\big\|\tilde{\mathbf{\Psi}}_N\big\|^2_{L^2([-1,1-\rho])}}{\big\|\tilde{\mathbf{\Psi}}_N\big\|^2_{L^2([-1,1])}}=0. \end{align} \end{lem}
\begin{proof}
Set $F_N(t)={|\tilde{\mathbf{\Psi}}_N(x,y)|^2}/{\|\tilde{\mathbf{\Psi}}_N\|^2_{L^2([-1,1])}}$, where $t=\xi \cdot\eta $. Since $\tilde{\mathbf{\Psi}}_N$ is tensor-zonal, $F_N$ is well-defined and can be regarded as a density function of a random variable $t\in[-1,1]$. From here on, the proof is essentially the same as for Lemma \ref{lem:loc} and we have to show $\lim_{N\to\infty}E_N(t)=1$ and $\lim_{N\to\infty}V_N(t)=0$. We only indicate the proof for $E_N(t)$; the case of $V_N(t)$ follows analogously. Setting $\tilde{\Psi}_N(t)=|\tilde{\mathbf{\Psi}}_N(x,y)|$, again with $t=\xi \cdot\eta $, we get \begin{align}\label{eqn:ejtvect}
&2\pi r^2\int_{-1}^1t|\tilde{\Psi}_N(t)|^2dt=\int_{\Omega_r}\xi \cdot\eta \left|\tilde{\mathbf{\Psi}}_N\left(r\xi ,r\eta \right)\right|^2d\omega(r\eta) \\&=\frac{1}{r^2}\sum_{i=1}^2\sum_{n=0_i}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0_i}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)y_{n,k}^{(i)}\left(\xi \right)\cdot y_{m,l}^{(i)}\left(\xi \right)\nonumber \\&\qquad\qquad\qquad\qquad\qquad\qquad\quad\times\int_{\Omega_r}(\xi \cdot\eta)\, y_{n,k}^{(i)}\left(\eta \right)\cdot y_{m,l}^{(i)}\left(\eta \right)d\omega(r\eta)\nonumber \\&=\frac{1}{r^2}\sum_{n=0}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)Y_{n,k}\left(\xi \right)Y_{m,l}\left(\xi \right)\nonumber \\&\qquad\qquad\qquad\qquad\qquad\quad\times\int_{\Omega_r}(\xi \cdot\eta)\, Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{r^2}\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\frac{\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)}{n(n+1)m(m+1)}\nabla^*_{\xi }Y_{n,k}\left(\xi \right)\cdot\nabla^*_{\xi }Y_{m,l}\left(\xi \right)\nonumber \\&\qquad\qquad\qquad\qquad\qquad\quad\times\xi \cdot\int_{\Omega_r}\eta \left(\nabla^*_{\eta }Y_{n,k}\left(\eta \right)\cdot\nabla^*_{\eta }Y_{m,l}\left(\eta \right)\right)d\omega(r\eta)\nonumber \end{align}\normalsize The first term on the right hand side of \eqref{eqn:ejtvect} can be written as a one-dimensional integral \begin{align}\label{eqn:firstsum} &\frac{1}{r^2}\sum_{n=0}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=0}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)Y_{n,k}\left(\xi \right)Y_{m,l}\left(\xi \right) \\&\qquad\qquad\qquad\qquad\quad\,\,\,\times\int_{\Omega_r}(\xi \cdot\eta)\, Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&=\sum_{n=0}^{\lfloor \kappa N\rfloor}\sum_{m=0}^{{\lfloor \kappa N\rfloor}}\frac{(2n+1)(2m+1)}{8\pi}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)\int_{-1}^1tP_{n}\left(t\right)P_{m}\left(t\right)dt.\nonumber \end{align} The second term requires significantly more effort. We start by observing that \begin{align} &\int_{\Omega_r}\eta \left(\nabla^*_{\eta }Y_{n,k}\left(\eta \right)\cdot\nabla^*_{\eta }Y_{m,l}\left(\eta \right)\right)d\omega(r\eta)\label{eqn:1green} \\&=-\frac{1}{2}\int_{\Omega_r}Y_{m,l}\left(\eta \right)\nabla_{\eta }^*\cdot\left(\eta \otimes\nabla^*_{\eta }Y_{n,k}\left(\eta \right)\right)+Y_{n,k}\left(\eta \right)\nabla_{\eta }^*\cdot\left(\eta \otimes\nabla^*_{\eta }Y_{m,l}\left(\eta \right)\right)d\omega(r\eta)\nonumber \end{align}
\begin{align} &=-\frac{1}{2}\int_{\Omega_r}\nabla_{\eta }^*\left(Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)\right)d\omega(r\eta)\nonumber +\frac{n(n+1)}{2}\int_{\Omega_r}\eta Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&\quad+\frac{m(m+1)}{2}\int_{\Omega_r}\eta Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&=\frac{n(n+1)}{2}\int_{\Omega_r}\eta Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber +\frac{m(m+1)}{2}\int_{\Omega_r}\eta Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta),\nonumber \end{align}\normalsize where we have used $\nabla_{\eta}^*\cdot\left(\eta\otimes\nabla^*_{\eta}Y_{n,k}\left(\eta\right)\right)=\eta\Delta_\eta^*Y_{n,k}\left(\eta\right)+\nabla_\eta^*Y_{n,k}\left(\eta\right)$. Plugging \eqref{eqn:1green} into the second term on the right hand side of Equation \eqref{eqn:ejtvect}, and using the expression $2\nabla_\xi ^*Y_{n,k}(\xi )\cdot \nabla_\xi ^*Y_{m,l}(\xi )=\Delta_\xi ^*(Y_{n,k}(\xi )Y_{m,l}(\xi ))+(n(n+1)+m(m+1))Y_{n,k}(\xi )Y_{m,l}(\xi )$, leads to \begin{align} &\frac{1}{r^2}\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\frac{\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)}{n(n+1)m(m+1)}\nabla^*_{\xi }Y_{n,k}\left(\xi \right)\cdot\nabla^*_{\xi }Y_{m,l}\left(\xi \right)\label{eqn:2term} \\&\qquad\qquad\qquad\qquad\quad\times\xi \cdot\int_{\Omega_r}\eta \nabla^*_{\eta }Y_{n,k}\left(\eta \right)\cdot\nabla^*_{\eta }Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&=\frac{1}{2r^2}\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)\frac{n(n+1)}{m(m+1)}\nonumber \\&\qquad\qquad\qquad\quad\qquad\qquad\times\int_{\Omega_r}(\xi\cdot\eta)\, Y_{n,k}\left(\xi \right)Y_{m,l}\left(\xi \right)Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{2r^2}\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)\nonumber \\&\qquad\qquad\qquad\quad\qquad\qquad\times\int_{\Omega_r}(\xi\cdot\eta)\, Y_{n,k}\left(\xi \right)Y_{m,l}\left(\xi \right)Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&\quad+\frac{1}{2r^2}\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{k=1}^{2n+1}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\sum_{l=1}^{2m+1}\frac{\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)}{n(n+1)}\nonumber \\&\qquad\qquad\qquad\quad\qquad\qquad\times\xi \cdot\Delta_{\xi }^*\int_{\Omega_r}\eta Y_{n,k}\left(\xi \right)Y_{m,l}\left(\xi \right)Y_{n,k}\left(\eta \right)Y_{m,l}\left(\eta \right)d\omega(r\eta)\nonumber \\&=\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)\left(\frac{1}{2}+\frac{n(n+1)-2}{2m(m+1)}\right)\frac{(2n+1)(2m+1)}{8\pi}\int_{-1}^1tP_n(t)P_m(t)dt,\nonumber \end{align}\normalsize where we have used the addition theorem, the property $\Delta_{\xi }^*P_n(\xi \cdot\eta)=\Delta_{\eta}^*P_n(\xi \cdot\eta)$, Green's formulas, and $\Delta_{\eta}^*\,\eta=-2\eta$ in the last row. Eventually, combining \eqref{eqn:ejtvect}, \eqref{eqn:firstsum}, and \eqref{eqn:2term}, we are lead to \begin{align}\label{eqn:inttpsiN}
&\int_{-1}^1t|\tilde{\Psi}_N(t)|^2dt \\&=\sum_{n=1}^{\lfloor \kappa N\rfloor}\sum_{m=1}^{{\lfloor \kappa N\rfloor}}\tilde{\mathbf{\Psi}}_N^\wedge(n)\tilde{\mathbf{\Psi}}_N^\wedge(m)\left(\frac{3}{4}+\frac{n(n+1)-2}{4m(m+1)}\right)\frac{(2n+1)(2m+1)}{8\pi^2r^2}\int_{-1}^1tP_n(t)P_m(t)dt.\nonumber \end{align}\normalsize Observing $\lim_{m,n\to\infty}\frac{3}{4}+\frac{n(n+1)-2}{4m(m+1)}=1$ if $m\in\{n-1,n,n+1\}$, we can proceed with \eqref{eqn:inttpsiN} in a similar manner as in \eqref{eqn:et1} and \eqref{eqn:et11}. Together with \begin{align}\label{eqn:etvect}
\|\tilde{\mathbf{\Psi}}_N\|_{L^2([-1,1])}^2=\sum_{n=0}^{\lfloor \kappa N\rfloor} \frac{2n+1}{4\pi^2r^2}|\tilde{\mathbf{\Psi}}_N^\wedge(n)|^2=\mathcal{O}(N^{-2\delta})+\sum_{n=N+1}^{\lfloor \kappa N\rfloor} \frac{2n+1}{4\pi^2r^2}|\tilde{\mathbf{\Psi}}_N^\wedge(n)|^2, \end{align} this leads to the desired property \begin{align}
\lim_{N\to\infty}E_N(t)=\lim_{N\to\infty}\int_{-1}^1 t|F_N(t)|^2 dt=\lim_{N\to\infty}\frac{\int_{-1}^1 t|\tilde{\Psi}_N(t)|^2 dt}{\|\tilde{\mathbf{\Psi}}_N\|_{L^2([-1,1])}^2}=1, \end{align} concluding the proof. \end{proof}
\section{Conclusion}\label{sec:concl} The combination of satellite and ground data is an important step towards obtaining high-resolution models, e.g., of the Earth's gravitational and crustal magnetic field. The numerical examples in this article have shown that the proposed approach via optimized kernels yields improved results over a naive approximation by Shannon-type kernels. Furthermore, it proved superior to spline approximation/interpolation from ground data in $\Gamma_r$ and downward continuation of satellite data on $\Omega_R$ via TSVD/RCM over a wide range of scenarios. Splines can only compete if the noise level in $\Gamma_r$ is not larger than the noise level on $\Omega_R$. Downward continuation, on the other hand, is comparable only if the noise level on $\Omega_R$ is significantly smaller than in $\Gamma_r$. The combined approach presented here automatically weighs the different properties of ground and satellite data against each other. In this sense, a crucial ingredient for the construction of the optimized kernels is the choice of the parameters $\beta_N$, $\alpha_{N,n}$, $\tilde{\alpha}_{N,n}$, and the truncation degree $N$. They may be motivated by a priori knowledge of the noise levels $\ensuremath{\varepsilon}_1$, $\ensuremath{\varepsilon}_2$, and the size of the region $\Gamma_r$, but in general they will require an adequate parameter choice strategy that is yet to be investigated. \\[2ex]
\textbf{Acknowledgements.} This work was partly conducted at UNSW Australia and supported by a fellowship within the Postdoc-program of the German Academic Exchange Service (DAAD). The author thanks Ian Sloan, Rob Womersley, and Yu Guang Wang for valuable discussions on Proposition \ref{prop:loc1} as well as the two reviewers for their valuable comments and suggestions to improve the paper.
\end{document} | arXiv |
Assessing two methods for estimating excess mortality of chronic diseases from aggregated data
Ralph Brinks1,2,
Thaddäus Tönnies1 &
Annika Hoyer1
To assess the numerical properties of two recently published estimation techniques for excess mortality based on aggregated data about diabetes in Germany.
Application of the new methods to the claims data yields implausible findings for the excess mortality of type 2 diabetes at ages below 50 years.
Aggregated data such as health insurance claims data are becoming more and more available for research purposes. Recently, we proposed a new method to estimate the excess mortality of chronic diseases from aggregated age-specific prevalence and incidence data [1, 2]. So far, estimates of excess mortality have only been plausible for ages 50+ and have been shown to be unstable at younger ages. For example, in the simulation study of [2], the bias increases as the age decreases (Table 1 in [2]).
The theoretical background for estimating the excess mortality stems from the illness-death model for chronic diseases [3]. In [4] we have shown that the temporal change, ∂p = (∂t + ∂a) p of the age-specific prevalence p is related to the incidence rate i, the mortality rates m0 and m1 of the people with and without the disease, respectively, the general mortality m and the mortality rate ratio R = m1/m0 via the following equations:
$$ \partial p = (1 - p)\{ i - p \times (m_1 - m_0)\} $$
(1a)
$$ = (1 - p)\{ i - m \times p(R - 1)/[1 + p(R - 1)]\}. $$
(1b)
There are two assumptions such that Eqs. (1a) and (1b) are true: (a) there is no remission from the chronic condition back to the healthy state and (b) age-specific prevalence of the chronic condition in the migrating population is the same as in the resident population.
Given the age-specific prevalence p, the age-specific incidence rate i and the general mortality rate m, Eqs. (1a) and (1b) can be used to estimate the excess mortality rate ∆m = m1 − m0 and the mortality rate ratio R [1, 2]:
$$ \Delta m = \{ i - \partial p/(1 - p)\}/p, $$
(2a)
$$ R = 1 + 1/p \times \{ i(1 - p) - \partial p\}/\{(1 - p)(m - i) + \partial p\} $$
(2b)
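For illustration, Eqs. (2a) and (2b) translate directly into a few lines of code. The sketch below is written in Python (the analysis accompanying this note was carried out with the statistical software R, see Additional file 1) and simply evaluates the two formulas for given age-specific inputs.

import numpy as np

def excess_mortality(p, dp, i, m):
    # p: age-specific prevalence, dp: temporal change of p,
    # i: incidence rate, m: general mortality rate
    # (numbers or numpy arrays over age groups)
    dm = (i - dp / (1.0 - p)) / p                                            # Eq. (2a)
    R = 1.0 + (1.0 / p) * (i * (1.0 - p) - dp) / ((1.0 - p) * (m - i) + dp)  # Eq. (2b)
    return dm, R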
The aim of this research note is to explore the reasons why estimates of excess mortality for younger ages are biased and what can be done to extend the age range to ages below 50 years. As a testing example, we use claims data about diabetes from the German statutory health insurance based on about 70 million people collected during the period from 2009 to 2015 [5].
Methods and materials
Goffrier et al. report the age-specific prevalence p of type 2 diabetes in 2009 and 2015 [5]. The age-specific prevalence data p for men in 2009 and 2015 are modeled by a linear regression model after application of a logit transformation. Furthermore, the age-specific incidence rate i for diabetes in men halfway between 2009 and 2015, i.e., in the year 2012, is reported. The age-specific incidence rate i for 2012 is modeled by a linear regression model after a log-transformation. These data are used as input for Eqs. (2a) and (2b). For applying Eq. (2b) we also use the general mortality m in 2012 from the Federal Statistical Office of Germany.
With these input data, Eqs. (2a) and (2b) allow us to estimate the age-specific excess mortality ∆m and the mortality rate ratio R. While R has a straightforward interpretation as the ratio of the mortality rate of the diabetic population to that of the non-diabetic population, the excess mortality rate ∆m is more interpretable when it is related to another mortality rate. Since m = p m1 + (1 − p) m0, we have ∆m/m ≤ ∆m/m0 = R − 1 and thus R ≥ 1 + ∆m/m ≥ ∆m/m. Hence, we decided to report the quotient ∆m/m, which is a lower bound for R.
In order to assess uncertainty in the results, we implemented a multidimensional probabilistic sensitivity analysis [6]. The key idea is to randomly sample from the distributions of input parameters (i.e., prevalence in 2009 and 2015, and incidence in 2012), and calculate the outcomes (i.e., measures of excess mortality). As the input parameters are sampled from random distributions many times, we get a sequence of outcomes, which also follows a random distribution representing the combined uncertainty in the input parameters [6]. We report empirical medians, and 2.5% and 97.5% quantiles for approximate 95% confidence intervals of the outcomes based on 5000 samples from the input distributions.
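A minimal sketch of this sampling scheme, reusing the function from the previous sketch, could look as follows. The input distributions, the general mortality value and the difference quotient for the temporal change of the prevalence are hypothetical stand-ins for the fitted regression models and are not taken from [5].

rng = np.random.default_rng(1)
n_sim = 5000

# Hypothetical input distributions for a single age group (illustration only).
def sample_prevalence_2009(): return rng.normal(0.080, 0.002)
def sample_prevalence_2015(): return rng.normal(0.095, 0.002)
def sample_incidence_2012():  return rng.normal(0.012, 0.001)
m2012 = 0.015  # hypothetical general mortality rate for this age group

ratios = []
for _ in range(n_sim):
    p0, p1 = sample_prevalence_2009(), sample_prevalence_2015()
    i = sample_incidence_2012()
    p = 0.5 * (p0 + p1)      # prevalence halfway between 2009 and 2015
    dp = (p1 - p0) / 6.0     # difference quotient as stand-in for the temporal change
    dm, R = excess_mortality(p, dp, i, m2012)
    ratios.append(dm / m2012)

print(np.median(ratios), np.percentile(ratios, [2.5, 97.5]))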
Figure 1 shows the age-specific ratio ∆m/m. Below 50 years of age the excess mortality rate is more than 10 times higher than the mortality rate of the general population. The ratio peaks at a value of more than 200 at the age of about 30 years. As R ≥ ∆m/m, we see that the estimate of the excess mortality is extraordinarily high.
Fig. 1 Age-specific ratio ∆m/m of the excess mortality (∆m) and the general mortality (m). The graph shows the empirical median of ∆m/m with 95% confidence bounds (vertical bars) based on the probabilistic sensitivity analysis with 5000 simulation runs. The ratio ∆m/m is a lower bound for the mortality rate ratio R
Application of Eq. (2b) to obtain the mortality rate ratio R yields the results shown in Table 1. We see that for ages below 55 years, the mortality rate ratios are implausibly high or become negative. Since the mortality rate ratio is by definition a quotient of two positive rates, negative values are not possible. Thus, the estimates based on Eq. (2b) do not yield sensible results for the lower age groups and are not reliable there.
Table 1 Mortality rate ratios (R) for different age-groups
In this manuscript we have applied two methods to estimate indices for the excess mortality of a chronic condition from age-specific prevalence and incidence data. The first index is the difference ∆m between the mortality rate of the diseased people (m1) and that of the people without the disease (m0), i.e., ∆m = m1 − m0. Sometimes, the index ∆m is called attributable risk [7]. The second index is the mortality rate ratio R = m1/m0. In an example about diabetes in the German male population, it turns out that both estimates are numerically unstable for ages below 50 years. In the case of ∆m, unreasonably high values were obtained in the diabetes data (more than 200 times the mortality of the general population). The estimated values of R can lead to implausible results such as negative rate ratios.
The question arises whether the implausible results might be a consequence of the assumptions for Eq. (1) being violated. The two assumptions are: no remission, and the age-specific prevalence in migrants being the same as in residents. While remission of diabetes has indeed been observed [8], it has not been a relevant therapy option or health policy in Germany during the study period. Note that the input data [5] refer to millions of people. Little is known about the second assumption. The prevalence of diabetes in migrants from and to Germany is currently not investigated at the population level. However, for another age-related chronic disease (dementia), we analyzed the most extreme cases (i.e., all immigrants having the chronic condition and all emigrants being free from it, and vice versa), and the overall epidemiological measures were only negligibly affected [9]. Thus, we think that violations of the two assumptions have only very minor effects on the reported results.
Implausible results, at least in theory, may be due to changes in the distributions of relevant covariates in the input data. Examples of relevant covariates might be changes in the diagnostic criteria for diabetes, changes in the distribution of disease duration, the distribution of body weight, the quality of glucose control or the presence of co-morbidities. In fact, possible effects of changing covariates are not estimable by our method, and we do not doubt that such effects exist. However, we believe that the study period (2009–2015) is relatively short and unlikely to comprise considerable changes. Furthermore, in Germany there has not been a change in the diagnostic criteria for diabetes during the study period.
In simulation studies, we found that the diagnostic accuracy of the claims data plays a crucial role for the proposed methods of estimating excess mortality. By diagnostic accuracy we mean the sensitivity and specificity of the claims data compared to the gold standard of diagnosing diabetes. In principle, diagnostic accuracy may undergo secular changes, e.g., if the reimbursement policy changes. It is possible, for instance, that the number of false positive diagnoses in the 2015 prevalence is increased compared to 2009 if physicians obtain more reimbursement at the later point in time. We note, however, that such up-coding is fraud and is subject to penalties. The impact of changes in diagnostic accuracy is the subject of an ongoing theoretical analysis (including a comprehensive simulation study) intended for an upcoming paper.
Based on the results in this example, we see that special attention is required in interpreting the results of the two estimation techniques, when applied to lower age ranges.
The aim of this research note was to assess the performance of two recently published estimators for the excess mortality of a chronic disease from prevalence and incidence data. While in previous publications [1, 2] reasonable results have been found for ages over 50 years, here we demonstrated problems of these estimators in younger age groups. The reasons for the problems seem to lie in the estimators themselves. For instance, if the partial derivative of the prevalence (∂p) is close to zero and the incidence rate (i) is close to the general mortality (m), i.e., i ≈ m, the denominator in Eq. (2b) is close to zero. Thus, the fraction on the right hand side of Eq. (2b) becomes very large in magnitude. This explains the highly oscillating values in Table 1. Although Eq. (2a) does not have the (cancellation) problem for i ≈ m, implausibly high values are obtained too. The reason is the factor 1/p on the right hand side of Eq. (2a). For values of the prevalence (p) close to zero, the reciprocal 1/p becomes very large. For example, in the lowest age group (15–19 years), the fraction 1/p takes values of about 900, which explains the high estimate for ∆m in this age group. Strategies to overcome these problems are currently under development and will be the subject of a future article.
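As a purely numerical illustration of the two effects just described (all values below are hypothetical and do not correspond to any specific age group in [5]): with p of the order of 1/900, small absolute changes in the incidence rate are scaled by the factor 1/p in Eq. (2a), while the denominator of Eq. (2b) nearly cancels as i approaches m.

p, dp, m = 1.0 / 900.0, 5e-5, 5e-4    # hypothetical values
for i in (4.0e-4, 4.5e-4, 5.0e-4):    # small shifts in the incidence rate
    dm = (i - dp / (1.0 - p)) / p     # Eq. (2a): changes strongly with i
    denom = (1.0 - p) * (m - i) + dp  # denominator in Eq. (2b): shrinks as i -> m
    print(i, dm, denom)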
The source code for this analysis is available as an electronic supplement to this published article (Additional file 1). The underlying data about diabetes were taken from a free publicly available source [5], which has been cited in the text and is part of the source code (see Additional file 1).
Tönnies T, Hoyer A, Brinks R. Excess mortality for people diagnosed with type 2 diabetes in 2012—estimates based on claims data from 70 million Germans. Nutr Metab Cardiovasc Dis. 2018;28(9):887–91. https://doi.org/10.1016/j.numecd.2018.05.008.
Brinks R, Tönnies T, Hoyer A. New ways of estimating excess mortality of chronic diseases from aggregated data: insights from the illness-death model. BMC Public Health. 2019;19(1):844. https://doi.org/10.1186/s12889-019-7201-7.
Kalbfleisch JD, Prentice RL. Statistical analysis of failure time data. 2nd ed. Hoboken: Wiley & Sons; 2002.
Brinks R, Landwehr S. A new relation between prevalence and incidence of a chronic disease. Math Med Biol. 2015;32(4):425–35. https://doi.org/10.1093/imammb/dqu024.
Goffrier B, Schulz M, Bätzing-Feigenbaum J. Administrative Prävalenzen und Inzidenzen des Diabetes mellitus von 2009 bis 2015. Versorgungsatlas. 2017. https://doi.org/10.20364/VA-17.03.
Oakley JE, O'Hagan A. Probabilistic sensitivity analysis of complex models: a Bayesian approach. J Royal Stat Soc. 2004;66:751–69.
Hennekens CH, Buring JE. Epidemiology in medicine. Philadelphia: Lippincott Williams & Wilkins; 1987.
Steven S, Carey PE, Small PK, Taylor R. Reversal of type 2 diabetes after bariatric surgery is determined by the degree of achieved weight loss in both short- and long-duration diabetes. Diabet Med. 2015;32(1):47–53.
Brinks R, Landwehr S. Age- and time-dependent model of the prevalence of non-communicable diseases and application to dementia in Germany. Theor Popul Biol. 2014;92:62–8. https://doi.org/10.1016/j.tpb.2013.11.006.
The authors wish to thank the Zentralinstitut für Kassenärztliche Versorgung, Berlin, for making the claims data available.
This research did not receive any funding.
Institute for Biometry and Epidemiology, German Diabetes Center, Auf'm Hennekamp 65, 40225, Duesseldorf, Germany
Ralph Brinks, Thaddäus Tönnies & Annika Hoyer
Department and Hiller Research Unit for Rheumatology, University Hospital Duesseldorf, Moorenstr. 5, 40225, Duesseldorf, Germany
Ralph Brinks
RB had the initial idea for this work, developed the source code and drafted the manuscript. TT and AH critically discussed the ideas and revised the manuscript. All authors gave substantial intellectual contributions. All authors read and approved the final manuscript.
Correspondence to Ralph Brinks.
This study solely relies on publically available secondary data (aggregated claims data [5]). Therefore, consent to participate is not required. The Ethics Board of the University Hospital Duesseldorf has confirmed that in case of published data, no review of the Ethics Board is necessary.
Not necessary because this manuscript does not contain data from any individual person.
Script (plain text file, accessible via any text editor, e.g., Notepad, GNU Emacs etc.) for the example about type 2 diabetes, intended to use with the statistical software R (The R Foundation of Statistical Software).
Brinks, R., Tönnies, T. & Hoyer, A. Assessing two methods for estimating excess mortality of chronic diseases from aggregated data. BMC Res Notes 13, 216 (2020). https://doi.org/10.1186/s13104-020-05046-w
Multi-state model
Partial differential equation | CommonCrawl |
\begin{document}
\title{Rapid and robust spin state amplification}
\author{Tom Close$^\dagger$}
\email{[email protected]}
\affiliation{Department of Materials, Oxford University, Oxford OX1 3PH, UK}
\author{Femi Fadugba\footnote{These authors have contributed equally to the work reported here.}}
\affiliation{Department of Materials, Oxford University, Oxford OX1 3PH, UK}
\author{Simon C. Benjamin}
\affiliation{Department of Materials, Oxford University, Oxford OX1
3PH, UK}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\author{Joseph Fitzsimons}
\affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543}
\author{Brendon W. Lovett}
\affiliation{Department of Materials, Oxford University, Oxford OX1 3PH, UK}
\affiliation{School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS}
\begin{abstract}
Electron and nuclear spins have been employed in many of the early
demonstrations of quantum technology (QT). However, applications in
real-world QT are limited by the difficulty of measuring single
spins. Here we show that it is possible to rapidly and robustly
amplify a spin state using a lattice of ancillary spins. The model
we employ corresponds to an extremely simple experimental system:
a homogeneous Ising-coupled spin lattice in one, two or three
dimensions, driven by a continuous microwave field. We establish
that the process can operate at finite temperature (imperfect
initial polarisation) and under the effects of various forms of
decoherence.
\end{abstract}
\maketitle
\newcommand{\ket}[1]{\mathinner{|{#1}\rangle}}
\newcommand{\braket}[2]{\langle #1|#2\rangle} \newcommand{\sig}[2]{\sigma_#1^{(#2)} }
The standard approach to implementing a quantum technology is to
identify a physical system that can represent a qubit: it must exhibit
two (or more) stable states, it should be manipulable through external
fields and possess a long decoherence time. Provided that the system
can controllably interact with other such systems, then it may be a
strong candidate. Electron and nuclear spins, within suitable
molecules or solid state structures, can meet these
requirements. However the drawback with spin qubits is that they have
not been directly measured through a detection of the magnetic field
they produce. The magnetic moment of a single electron spin is orders
of magnitude too weak to be detected by standard ESR techniques and
even the most sensitive magnetometers still fall short of single spin
measurement~\cite{cleuziou06} - meanwhile the situation with nuclear spins is
worse still. In a few special systems it is possible to convert the
spin information into another degree of freedom. For example, a
spin-dependent optical transition allows spin-to-photon conversion in
some crystal defects~\cite{jelezko04, neumann10, Morello:2010p6617},
self-assembled semiconductor quantum dots~\cite{vamivakas10, Berezovsky:2008p6616},
and
trapped atoms held in a vacuum~\cite{leibfried03}. Alternatively, spin
to charge conversion is an established technology in lithographic
quantum dots~\cite{hanson07}. However, the majority of otherwise
promising spin systems do not have such a convenient
property~\cite{morton10} and therefore cannot be measured directly.
One suggested solution is to `amplify' a single spin, by using a set
of ancillary spins that are (ideally) initialised to $\ket{0}$. We would look for a transformation of the form
\begin{equation}
\ket{0}\ket{0}^{\otimes n} \rightarrow \ket{0}\ket{0}^{\otimes n} \ \ \ \ \ \
\ket{1}\ket{0}^{\otimes n} \rightarrow \ket{1}\ket{1}^{\otimes n},
\end{equation}
the idea being that the $n$ ancillary spins constitute a large enough
set that state of the art
magnetic field sensing technologies can detect them. Note that the transformation need not be unitary or indeed even coherent: since the intention is to make a measurement of the primary spin, it is not necessary to preserve any superposition (that is, we need not limit ourselves to transformations that take $\alpha\ket{0}\ket{0}^{\otimes n} +\beta\ket{1}\ket{0}^{\otimes n} $ to a cat state like $\alpha\ket{0}^{\otimes n+1}+\beta\ket{1}^{\otimes n+1}$).
This is a rather broadly defined transformation and there are a number
of ways that one might perform it. Clearly one would like to find the
method that is the least demanding experimentally. Previous authors
have proposed schemes using a strictly one-dimensional (1D)
homogeneous lattice with continuous global driving
\cite{Lee:2005p6468}, and an inhomogeneous three-dimensional (3D)
lattice with alternating timed EM pulses
\cite{PerezDelgado:2006p6542}. The former result has the advantage of
simplicity but the rate at which amplification occurs will inevitably
be limited by the single dimension of the array; moreover such a
system must be highly vulnerable to imperfect initialisation
(i.e. finite temperature). Here we generalise to a homogeneous
two-dimensional (2D) square lattice, showing that a continuous global
EM field can drive an amplification process that succeeds at finite
temperatures (imperfect initialisation of the ancilla spins) and in
the presence of decoherence. By bringing the global EM field onto
resonance with certain transitions, we are able to create a set of
rules that govern locally how spins propagate over the lattice. We
then look at the rate of increase in the total number of flipped spins
as a measure of quality of the scheme. While our focus is on the 2D
case, we are also able to predict the performance of the amplification
protocol for a homogeneous 3D lattice with continuous driving.
The case of a 1D lattice has been studied in detail by
Lee and Khitrin \cite{Lee:2005p6468}. Before moving to the 2D
spin lattice that will form the core of the paper, we first recall how to simplify the description of this (semi-infinite) 1D spin chain, with nearest neighbour Ising (ZZ)
interactions. Under a microwave driving field of frequency $\omega$,
the Hamiltonian is given by
\begin{equation} {\cal H} = \sum_{i=1}^{\infty} \epsilon_i \sigma^i_z
+ J_i \sigma_z^i\sigma_z^{i+1} + 2 \Omega_i \sigma_x^i \cos(\omega
t)
\end{equation}
Here $\epsilon_i$ is the on-site Zeeman energy of spin $i$, $J_i$ is
the magnitude of the coupling between spins $i$ and $i+1$, and $\Omega_i$
describes the coupling of spin $i$ to the microwave field. In this case, spin $i=1$ is the one whose state is to be amplified. If we assume that the chain is uniform, such that
$\Omega_i = \Omega$, $\epsilon_i = \epsilon$ and $J_i = J$, then moving
into a frame rotating at frequency $\omega$, making a rotating wave
approximation and setting $\omega = \epsilon$ leads to
\begin{equation} {\cal H} = \sum_{i=1}^{\infty} J
\sigma_z^i\sigma_z^{i+1} + \Omega \sigma_x^i .
\end{equation}
In order to understand the dynamics of the system, it is instructive to
explicitly separate all terms that involve a particular spin $k$:
\begin{equation} {\cal H} = J (\sigma_z^{k-1} +
\sigma_z^{k+1})\sigma_z^k + \Omega \sigma^k_x + \sum_{i \neq \{k,
k-1\}} \Omega\sigma_x^i + J \sigma_z^i\sigma_z^{i+1} +
\Omega\sigma_x^{k-1}
\label{hamk}
\end{equation}
Choosing a driving field such that $\Omega\ll J$ means that
spin $k$ will only undergo resonant oscillations when the first term
in Eq.~\ref{hamk} goes to zero - i.e. when the two spins neighbouring
spin $k$ are oriented in opposite directions. In any other
configuration the Ising coupling takes the spin $k$ off resonance with
the microwave and no appreciable dynamics are expected.
Let us now define a subset $S$ of states in the spin-chain
Hilbert space, $\ket{n}$, which have the first $n$ spins of the chain
in state $\ket{\uparrow}$ and the rest in $\ket{\downarrow}$. If the
rule we just derived holds exactly, these states define a closed
subspace. We may then write a very simple isolated Hamiltonian
for this subspace:
\begin{equation}
{\cal H}_S = \Omega \sum_{n=1}^\infty \left( \ket{n}\bra{ n+1}+\ket{n+1}\bra{ n} \right).
\label{1d_ham}
\end{equation}
With this simplification of the 1D Hamiltonian in mind, we progress now to a semi-infinite square spin lattice with nearest-neighbour ZZ interactions. For this case we have
\begin{equation} {\cal H} = \sum_{i=1}^{\infty}\sum_{j=1}^\infty
\epsilon \sigma^{i, j}_z + J \sigma_z^{i, j}\sigma_z^{i+1, j} +
J\sigma_z^{i, j}\sigma_z^{i, j+1} + 2 \Omega \sigma_x^{i, j}
\cos(\omega t).
\end{equation}
By again considering the terms affecting a particular spin in the main
body of the lattice ($k>1$, $l>1$, say) we find for $\omega =
\epsilon$ and after moving to a rotating frame and making the rotating
wave approximation:
\begin{equation}
{\cal H} = J \sigma_z^{k, l} (\sigma_z^{k+1, l} + \sigma_z^{k, l+1} + \sigma_z^{k-1, l} + \sigma_z^{k, l-1}) +...
\end{equation}
where we do not explicitly write out terms not involving spin $(k, l)$.
The microwave is now only resonant for spin $(k, l)$ if it has two
neighbour spins in each orientation. For a spin on the edge of the
lattice there are an odd number of neighbours so resonance cannot be
achieved in this case. However, applying a second microwave with
$\omega = \epsilon - J$ allows resonant flips on the edge if two
neighbours are down and one up - and this second field has no effect on the
bulk spins.
The spin to be measured is the corner spin ($i=j=1$), which would form part of a wider computational apparatus. We may therefore assume that it is a different species with
a unique resonant frequency. The dynamics of the whole lattice may then be summarised by three rules (in order of precedence):
\begin{enumerate}
\item The corner (test) spin is fixed.
\item An edge spin can flip if it has one of its neighbours up and
two down.
\item A body spin can flip if it has two of its neighbours up and two down.
\end{enumerate}
We begin by supposing all spins are initialised in the `down' state
apart from the test spin, which is located in the upper left hand corner
of our lattice. We can describe this initial state by choosing two
basis elements: $\ket 0$ when the test spin is down, and $\ket 1$
when the test spin is up. Using our heuristic rules we can see that these two states do not
couple to each other - that $\bra 0H\ket 1=0$. In fact $\ket 0$
does not couple to any other state, so if we start in the $\ket 0$
state no amplification occurs, as desired.
We will now seek to construct a basis for the subspace containing our
system evolution, by looking at states connected by our
Hamiltonian. It will be convenient to represent these states on the nodes of
a graph, using the edges to represent non-zero elements of the
Hamiltonian.
Our starting point is the state $\ket 1$, with just the corner spin
`up'. From this position our rules allow two possibilities: either the
spin to the right of the corner flips, or the spin below it flips (see Fig.~\ref{partition_states}). In each case the
magnitude of the transition matrix element is $\Omega$. As we continue this procedure, we notice that the states that arise for each excitation number
can be characterised by a non-increasing sequence of
integers that represent the number of `up'-spins in each column of the
lattice (see Fig.~\ref{partition_states}). Such sequences can also be used to define partitions of an
integer: ways of splitting an integer up into a sum of other integers,
e.g. $3=3=2+1=1+1+1$. In fact, the states that arise are in 1-to-1
correspondence with such partitions; we call these states
`partition states' and denote them with standard partition notation
(see Fig.~\ref{partition_states}).
\begin{figure}
\caption{Partition states arranged into a lattice. Edges represent a coupling
through the Hamiltonian of strength $\Omega$. Weights represent the
number of different paths through the lattice to a given state.}
\label{partition_states}
\end{figure}
The graph we have just described, depicted in
Fig.~\ref{partition_states}, is known as `Young's lattice' and arises
in areas of pure mathematics, such as the representation theory of the
symmetric group, and the theory of differential posets. We have drawn
weights beneath each state, recording the number of ways the state can
be constructed. We will now further reduce the dimension of this basis
by eliminating combinations of states which are inaccessible.
Starting with $\ket 1$ we see that $\bra
1H\left(\alpha_{1,1}\ket{\psi_{1,1}}+ \alpha_{2}\ket{\psi_{2}}\right)
=\Omega\left(\alpha_{1,1}+ \alpha_{2}\right)$ so $\ket 1$ does not
couple to the two-excitation state $\ket{\psi_{1,1}}-\ket{\psi_{2}}$. We can
eliminate this, leaving a single orthogonal, coupled state with two excitations: $\ket 2 :=
\frac{1}{\sqrt{2}}\left( \ket{\psi_{1,1}}+\ket{\psi_2} \right)$.
We may continue to build up coupled states with larger excitation numbers, and in fact we find that there is only a single coupled state in each case (i.e. we can always eliminate $k-1$ combinations of partition states with $k$ excitations).
To see this, first suppose we have the coupled state with $k$
excitations, which by analogy with the 1D case we write as $\ket
k$. We can write $\ket k=\frac{1}{N_k}\sum_{i\in
P(k)}c_{i}\ket{\psi_{i}}$, where $P(k)$ is the set of partitions of
the integer $k$ and $N_k$ a normalisation factor. We want to construct
the state $\ket{k+1}$ by eliminating the $k$-dimensional subspace with
$k+1$ excitations, to which $\ket{k}$ does not couple.
Let $\ket{\psi}=\sum_{j\in P(k+1)}\alpha_{j}\ket{\psi_{j}}$ and
consider the states $\ket{\psi}$ such that
\[
0=\bra kH\ket{\psi}=\sum_{i\in P(k)}\sum_{j\in P(k+1)}c_{i}^{*}\alpha_{j}\bra{\psi_{i}}H\ket{\psi_{j}}\]
but $\bra{\psi_{i}}H\ket{\psi_{j}}=\Omega$ if $i$ is a {\it parent} of $j$
(a state connected to $j$, in the lattice row above it), and $0$ otherwise, so
\[0=\bra kH\ket{\psi}=\sum_{j\in P(k+1)}\alpha_{j}\sum_{i\in parents(j)}c_{i}^{*}.\]
This is the equation of a hyperplane in $|P(k+1)|$ dimensions,
defining the states that are not coupled to $\ket k$ through the Hamiltonian.
There is a unique direction orthogonal to this hyperplane,
$\beta_{j}=\sum_{i\in parents(j)}c_{i}$, to which $\ket k$ couples.
So the only state with $k+1$ `up'-spins to which $\ket k$ couples has coefficients
proportional to $\beta_{j}$. After normalisation, we call this state
$\ket{k+1}$.
Unfortunately there is no easy way to write down the partition states
and weights for the $n$th row of the lattice. Fortunately, for our purposes, we only need to know
that the states $\ket k$ exist and what the coupling between them
is. To find this coupling, consider
\begin{align}
g_{n-1,n} & = \bra nH\ket{n-1} \nonumber\\
& = \frac{1}{N_{n-1}N_n}\sum_{i\in P(n)}\sum_{j\in
P(n-1)}c_{i}^{*}c_{j}\bra{\psi_{i}}H\ket{\psi_{j}} \nonumber\\
& = \frac{1}{N_{n-1}N_n}\Omega\sum_{i\in P(n)}c_{i}^{*}\sum_{j\in
parents(i)}c_{j} \nonumber\\
& = \frac{1}{N_{n-1}N_n}\Omega\sum_{i\in P(n)}|c_{i}|^{2} =
\Omega\frac{ N_n}{N_{n-1}}\label{coupling_as_sum_c2}\end{align}
To find the $N_n$ we need the sum of the squares of the weights of partitions
in a given row. A standard result about Young's lattice
immediately gives us this sum: $n!$ \cite{STANLEY:1975p6605}.
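As an explicit check for $n=3$: the partitions are $(3)$, $(2,1)$ and $(1,1,1)$, reached along $1$, $2$ and $1$ paths respectively, so that
\[
\sum_{i\in P(3)}|c_{i}|^{2}=1^{2}+2^{2}+1^{2}=6=3!\,,
\]
giving $N_3=\sqrt{3!}$ and hence $g_{2,3}=\Omega\sqrt{3!/2!}=\Omega\sqrt{3}$.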
Referring back to Eq. (\ref{coupling_as_sum_c2}), and using
$N_i=\sqrt{i!}$, we see that
\begin{equation}
{\cal{H}} = \Omega \sum_{n} \sqrt{n} \left( \ket{n-1} \bra{n} +
\ket{n} \bra{n-1} \right) .
\label{2d_ham}
\end{equation}
In essence we
have established a linear sequence of states, each coupled to
the next analogously to the states of the 1D chain, Eq.~(\ref{1d_ham}). However, each of our states
is in fact a superposition of many configurations of the 2D array,
and crucially the effective coupling
from each state to the next increases along the sequence.
It has been shown (e.g. \cite{Fitzsimons:2005p6472}) that a quantum state released at
the end of a semi-infinite chain of states, with constant couplings, will
travel ballistically: the average position of the state along the
chain is
proportional to the time elapsed and to the
coupling strength. Since, in the one-dimensional case, the position is
proportional to the number of spins that have flipped, we have that
the total polarisation will increase linearly with time.
We can establish the rate of propagation in the 2D case using the ansatz
that the time taken to travel between two neighbouring nodes is inversely
proportional to the strength of the coupling between them. The total
time is then $t_{2D}\propto \sum_{i=1}^{n}\frac{1}{\sqrt{i}}\simeq
n^{\frac{1}{2}}$. As in the one-dimensional case, the position along the chain
corresponds to the number of spins that have flipped, and so we
would expect the total polarisation to be proportional to $t^2$. This prediction of a quadratic speed-up of signal going from 1D to 2D is the central result of our paper, and
was confirmed by simple numerical simulations of Eq. (\ref{2d_ham})
(Fig. \ref{comparison}).
\begin{figure}
\caption{Expected total polarisation against time. Time in units of
$\frac{1}{\Omega}$, dephasing rate $\Gamma = 1$. The gradient of the
`one dimension with decoherence' line tends to $\frac{1}{2}$ asymptotically.}
\label{comparison}
\end{figure}
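The quadratic growth can be checked with a minimal simulation of the effective chain of Eq.~(\ref{2d_ham}); the values of $\Omega$, the truncation $N$ and the sample times below are illustrative choices only:
\begin{verbatim}
import numpy as np

# Truncated effective chain |1>,...,|N> with couplings g_{n-1,n} = Omega*sqrt(n).
Omega, N = 1.0, 400                     # illustrative values; time in units of 1/Omega
off = Omega * np.sqrt(np.arange(2, N + 1))
H = np.diag(off, 1) + np.diag(off, -1)  # tridiagonal effective Hamiltonian

psi0 = np.zeros(N)
psi0[0] = 1.0                           # start in |1>: only the test spin is 'up'

evals, evecs = np.linalg.eigh(H)

def flipped(t):
    """Expected number of 'up' spins at time t."""
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    return float(np.sum(np.arange(1, N + 1) * np.abs(psi_t) ** 2))

for t in (2.0, 4.0, 8.0):
    print(t, flipped(t))                # grows roughly as t^2 until truncation is felt
\end{verbatim}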
Unfortunately the mapping from 2D to 1D is not readily extendible to 3D. However,
our results so far could have been anticipated using simple dimensional arguments; if one postulates that the rate of spin propagation is
proportional to the boundary of the region, one can predict the
correct scaling behaviour. In 1D the boundary size is
independent of the region size; no matter how many spins have flipped,
it still has size one. The coupling strength between states $\ket{n}$
is constant. In the 2D case, the boundary size scales
with the square root of the area, and the coupling goes with
$\sqrt{n}$. In 3D, the boundary scales like the cube root of the volume squared, and so we expect the coupling to scale
as $n^{\frac{2}{3}}$. Following similar logic to that used in 2D
case: $t_{3D}\propto \sum_{i=1}^{n}\frac{1}{i^\frac{2}{3}}\simeq
n^{\frac{1}{3}}$, and so $n \sim t^3$.
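Written once for a general lattice dimension $d$, the same boundary argument takes the effective coupling to scale as $n^{(d-1)/d}$, so that
\[
t_{d}\propto\sum_{i=1}^{n}i^{-(d-1)/d}\simeq n^{1/d},\qquad\text{i.e.}\qquad n\sim t^{d},
\]
recovering $n\sim t$, $t^{2}$ and $t^{3}$ in one, two and three dimensions respectively.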
We now consider the effect of decoherence. Much of the early work on continuous-time quantum random walks looked at the speed-up they afford over their classical
counterparts~\cite{Farhi:1998p6471}, but did not address the conditions
under which the quantum walk should be expected to exhibit classical
behaviour, as one might anticipate in a regime of suitably heavy
dephasing.
We begin by considering a collective noise operator: $ L=\sum_{n}n\ket n\bra n$. This represents noise that applies uniformly to the whole lattice: global fluctuations in the magnetic field, for example. As the effect of this type of noise depends only on the number of `up' spins, the system remains in the reduced basis of number states calculated earlier, with only the coherences between these states affected.
Our starting point is the Lindblad master equation \begin{equation} \dot{\rho}=i\left[\rho,H\right]+\frac{1}{2}\Gamma\left(2L\rho
L^{\dagger}-L^{\dagger}L\rho-\rho L^{\dagger}L\right). \end{equation}
We proceed by splitting up the equation into diagonal and off-diagonal terms:
\begin{align}
\dot{\rho}_{ii} & = i\sum_{k=i\pm 1}\left(\rho_{ik}g_{ki}-\rho_{ki}g_{ik}\right)=-2\sum_{k=i\pm 1}{\rm Im}\left[\rho_{ik}g_{ki}\right] \label{rhoii}\\
\dot{\rho}_{ij} & = i\left(\sum_{k=j\pm 1}\rho_{ik}g_{kj}-\sum_{k=i\pm 1}\rho_{kj}g_{ik}\right)-\Gamma\rho_{ij}
\end{align}
where $g_{ij}$ is the coupling between states $i$ and $j$. In the limit of heavy dephasing ($\Gamma\gg g$), we have a process similar to adiabatic following, and we can make the approximation
\[ \Gamma\rho_{ij}\approx i\left(\sum_{k=j\pm 1}\rho_{ik}g_{kj}-\sum_{k=i\pm 1}\rho_{kj}g_{ik}\right).\]
We consider the $\rho_{ij}$ as a set of $\frac{n(n-1)}{2}$ variables and solve for them in terms of the $\rho_{ii}$. Neglecting terms that are second order in $\frac{g}{\Gamma}$, and substituting back into Eq. (\ref{rhoii}) gives
\[
\dot{\rho}_{ii}=-\sum_{j=i\pm1}\frac{2|g_{ij}|^{2}}{\Gamma}\left(\rho_{ii}-\rho_{jj}\right).\]
Our quantum chain formally reduces to a classical Markov chain on the same state space, with transition rates proportional to the coupling squared.
Although states with more `up' spins decohere more quickly, the decoherence rate $\Gamma$ is not multiplied for higher states: it is the \textit{relative} decoherence rate between neighbouring states that matters.
In one dimension $g_{ij} = \Omega$ is constant and we are reduced to a simple random walk on a semi-infinite line. By analogy with simple diffusion we expect that the resulting distribution is roughly Gaussian, with the expected number of flipped spins going with $\sqrt{t}$: the rate of spin propagation drops from $t$ to $\sqrt{t}$. This result was confirmed numerically (Fig. \ref{comparison}).
In the two-dimensional case $g_{i,i+1} = \Omega\sqrt{i+1}$: we get a random walk with increasing transition rates. Numerically (Fig. \ref{comparison}), we find that the rate of spin propagation drops from $t^2$ to $t$ - still an encouraging scaling.
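The classical rate equation above can also be integrated directly; a minimal sketch (with $\Gamma$, $\Omega$, $N$, the time step and the sample times all chosen arbitrarily for illustration) is:
\begin{verbatim}
import numpy as np

# dP_n/dt = -sum_{m=n+-1} (2|g_nm|^2/Gamma)(P_n - P_m), explicit Euler integration.
Gamma, Omega, N, dt = 10.0, 1.0, 400, 1e-3

for label, g in (("1D", Omega * np.ones(N - 1)),
                 ("2D", Omega * np.sqrt(np.arange(2, N + 1)))):
    W = 2.0 * g ** 2 / Gamma              # hopping rate between chain states n and n+1
    P = np.zeros(N)
    P[0] = 1.0                            # all weight on |1> initially
    t = 0.0
    for target in (10.0, 20.0, 40.0):
        while t < target:
            flow = W * (P[:-1] - P[1:])   # net probability flow from n to n+1
            P[:-1] -= dt * flow
            P[1:] += dt * flow
            t += dt
        print(label, target, np.sum(np.arange(1, N + 1) * P))
# Mean flipped-spin number grows roughly as sqrt(t) in 1D and as t in 2D.
\end{verbatim}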
We also investigated the `individual noise' case, where the dephasing occurs independently on each site (see supplementary material). Although the calculations differ, we see the same rate behaviour as for the collective noise case.
Finally we consider imperfect initial polarisation (i.e. finite temperature) - a property exhibited by any real experimental system. As discussed in the supplementary material \footnote{Supplementary material: http://qunat.org/papers/amp/}, a fortuitous consequence of the propagation rules is that our system is particularly robust against this source of error; below an initialization threshold of approximately $4\%$ it is extremely unlikely that a false positive will occur. This places our protocol well within experimental capabilities; for example, for an array placed in a standard W-band electron spin resonance system ($100$ GHz) and cooled using liquid $^4$He to $1.4$~K, only $3.1\%$ of electron spins will be in the `up' state.
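For reference, the quoted figure follows from the thermal occupation of the upper Zeeman level,
\[
p_{\uparrow}=\frac{e^{-h\nu/k_{B}T}}{1+e^{-h\nu/k_{B}T}},\qquad
\frac{h\nu}{k_{B}T}=\frac{(6.63\times10^{-34}\,\mathrm{J\,s})\,(10^{11}\,\mathrm{Hz})}{(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})\,(1.4\,\mathrm{K})}\approx 3.4,
\]
which gives $p_{\uparrow}\approx 0.031$, i.e.\ $3.1\%$.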
\end{document} | arXiv |
\begin{document}
\begin{abstract} Given a pseudoconvex domain $U$ with $\cali{C}^1$-boundary in $\mathbb{P}^n,$ $n\geq 3,$ we show that if $H^{2n-2}_{{\rm dR}}(U)\not=0,$ then there is a strictly psh function in a neighborhood of $\partial U.$ We also solve the ${\overline\partial}$-equation in $X=\mathbb{P}^n\setminus U,$ for data in $\cali{C}^\infty_{(0,1)}(X).$
We discuss Levi-flat domains in surfaces. If $Z$ is a real algebraic hypersurface in $\mathbb{P}^2$ (resp.\ a real-analytic hypersurface with a point of strict pseudoconvexity), then there is a strictly psh function in a neighborhood of $Z.$ \end{abstract}
\title{Pseudoconvex domains with smooth boundary in projective spaces}
\noindent {\bf Classification AMS 2010:} Primary: 32Q28, 32U10; 32U40; 32W05; Secondary 37F75\\ \noindent {\bf Keywords:} Levi-flat, ${\overline\partial}$-equation, pseudo-concave sets, strictly plurisubharmonic functions.
\section{Introduction} \label{intro}
In this paper we discuss the pluri-potential theory on a smooth hypersurface $Z$ in $\mathbb{P}^n.$ This includes the question of existence of positive closed (resp. ${dd^c}$-closed) currents supported on
$Z$ and also the question of the existence of a strictly plurisubharmonic function (psh) in a neighborhood of $Z.$ We will sometimes need a pseudo-convexity hypothesis on
a component of the complement. We also give some results in the case where $Z$ is a closed set satisfying some geometric assumptions.
Recall that a complex manifold of dimension $n$ is strongly $q$-complete if it admits a smooth exhaustion function $\rho$ whose Levi form at each point has at least
$(n-q+1)$ strictly positive eigenvalues. The main result in that theory is the following Theorem.
\renewcommand{Theorem 6.5}{Theorem} \begin{thmspec}\label{T:AG}{\rm (Andreotti-Grauert \cite{AG})}. Let $U$ be a strongly $q$-complete manifold. Then for every coherent analytic sheaf $\cali{S}$ over $U,$ $H^k(U,\cali{S})=0,$ for $k\geq q.$
In particular, if $k\geq q,$ then $H^{n,k}(U, \mathbb{C})=0.$
\end{thmspec} Indeed, for a holomorphic bundle $E,$ $H^{n,k}(U,E)=H^k(U,\Lambda^{n,0} U\otimes E).$
Our main result will use the above theorem.
\renewcommand{Theorem 6.5}{Theorem 1.1} \begin{thmspec}
Let $U$ be a domain in $\mathbb{P}^n,$ $n\geq 3,$ with $\cali{C}^1$ boundary. Assume $U$ is strongly $(n-2)$-complete (i.e. it admits a smooth exhaustion function whose Levi form
has at least $3$ strictly positive eigenvalues at each point). Assume also that $H_{\rm dR}^{2n-2}(U,\mathbb{C})\not=0.$
Then there is a strictly psh function near the boundary of $U.$ \end{thmspec} As a consequence of Theorem 1.1 and of Proposition 2.1 below, we get the following result.
\renewcommand{Theorem 6.5}{Corollary 1.2} \begin{thmspec}
In $\mathbb{P}^n,$ $n\geq 3,$ there is no $\cali{C}^1$ hypersurface $Z$ such that the two components $U^\pm$ of $\mathbb{P}^n\setminus Z$ are both
strongly $(n-2)$-complete and one of them, say $U^-$, satisfies $H^{2}_{\rm dR}(U^-)=0$. \end{thmspec}
Observe that in $\mathbb{P}^n$ the hypothesis $H^{2}_{\rm dR}(U^-)=0$ for a domain with $\cali{C}^1$ boundary implies that $H_{\rm dR}^{2n-2}(U^+,\mathbb{C})\not=0.$ See the proof of Proposition 2.1 below.
Y. T. Siu has proved the following result, \cite{Siu2}.
\renewcommand{Theorem 6.5}{Theorem} \begin{thmspec}{\rm (Siu).}
In $\mathbb{P}^n,$ $n\geq 3,$ there is no Levi-flat hypersurface $Z$, i.e.\ one for which both sides are $1$-complete (admitting an exhaustion whose Levi form has $n$ strictly positive eigenvalues). \end{thmspec} The meaning of Levi flat is that the Levi form of a defining function $r$ for $Z$ is identically zero on the complex tangent space. Since the Levi problem
has a positive solution in $\mathbb{P}^n,$ this implies that both components $ U^\pm$ are Stein and hence strongly $1$-complete. In particular they are strongly $(n-2)$-complete for $n\geq 3.$
We also study the solvability of the ${\overline\partial}$-equation (resp.\ the ${\partial\overline\partial}$-equation) on a pseudo-concave set $X$ in the $\cali{C}^\infty$ category, i.e. we assume that $\mathbb{P}^n\setminus X$ is Stein. We use the H\"ormander duality method (see \cite{BS} for example).
The existence of ${dd^c}$-closed currents on $Z$ gives an obstruction to the resolution of the ${\overline\partial}$-equation in the smooth category, see Theorem 6.1.
In the last section we discuss the same problem in $\mathbb{P}^2.$ So far it is not known if there are smooth Levi-flat hypersurfaces in $\mathbb{P}^2.$
\noindent {\bf Acknowledgements.} It is a pleasure to thank Bo Berndtsson and Tien-Cuong Dinh for their insightful comments and the referee for his questions.
\section{Real hypersurfaces in $\mathbb{P}^n$ }
Let $Z$ be a real hypersurface in $\mathbb{P}^n,$ $n\geq 2,$ of class $\cali{C}^1.$ Since $\mathbb{P}^n$ is simply connected, $\mathbb{P}^n\setminus Z$ has two components $U^\pm.$
Denote by $\omega$ a K{\"a}hler form of mass one on $\mathbb{P}^n.$
\renewcommand{Theorem 6.5}{Proposition 2.1} \begin{thmspec} If $n=2$, the form $\omega$ is $d$-exact either on a neighborhood of $\overline{U^-}$ or on a neighborhood of $\overline{ U^+}.$
If $n\geq 3,$ then: $H_{\rm dR}^{2n-2}(U^{+},\mathbb{C})\not=0,$ iff $H_{\rm dR}^2\overline{ (U^-)}=0$. In particular the form $\omega$ is $d$-exact, in a neighborhood of $\overline{ U^-}$.
\end{thmspec} \proof Assume first $n=2$. If $\omega$ is not $d$-exact in a neighborhood of $\overline{U^+},$ there is a 2-cycle $\sigma^+$ in $\overline{U^+}$ which is non-trivial. Using that $Z$ is $\cali{C}^1,$ we can retract $\sigma^+$ as a cycle in ${U^+}.$ Similarly, we would get a nontrivial cycle $\sigma^-$ in ${U^-}.$ But by Poincar\'e duality, $\sigma^+\sim c_+\omega$ and $\sigma^-\sim c_-\omega,$ with $c_+ ,c_-$ non zero. Since $\sigma^+$ and $\sigma^-$ are disjoint, we get $\sigma^+\smile \sigma^-=0,$ and hence $c_+ c_-=0,$ a contradiction.
Assume now $n\geq 3$ and $H_{\rm dR}^{2n-2}(U^{+},\mathbb{C})\not=0.$ Then there is a $(2n-2)$-cycle $\sigma^+$ in $U^+$, with $\sigma^+\sim c_+\omega^{n-1}$, $c_+\not=0$.
If $H_{\rm dR}^2 (\overline {U^-}) \not=0,$
we would construct as above a non-trivial $2$-cycle in $U^-$ and get a contradiction as above. \endproof
Observe that we cannot have $\omega^{n-1},$ $d$-exact near $\overline{U^+}$ and $\omega,$ $d$-exact near $\overline{U^-}.$ Otherwise, assume that $\omega^{n-1}=d(\phi_+)$ near $ \overline{U^+}$ and $\omega=d(\phi_-)$ near $ \overline{U^-}.$ Then, $\omega^{n-1}-d(\chi_+\phi_+)$ and
$\omega-d(\chi_-\phi_-)$ would have disjoint support for appropriate cut-off functions $\chi_\pm,$ contradicting that the cup-product should be non-zero.
For a compact $X$ in $\mathbb{P}^n,$ we will write $\mathcal H^2(X)=0,$ if there is an open neighborhood $V\supset X,$ such that the de Rham cohomology group $H^2_{\rm dR}(V,\mathbb{C})=0.$
\renewcommand{Theorem 6.5}{Proposition 2.2} \begin{thmspec}
Assume that $\omega$ is $d$-exact in a neighborhood of $\overline{U^-}.$
\begin{enumerate}
\item There is no closed current $A$ of order zero and dimension $2p$ supported on $\overline{U^-},$ such that $\{A\}\not=0.$ Here $ \{.\}$ denotes the de Rham cohomology class. In particular, there is no non-zero positive closed current supported on $\overline{U^-}.$
\item If $T$ is a real ${\partial\overline\partial}$-closed current (non-closed) of bidegree $(1,1)$ supported on $\overline{U^-}$ and $\{T\}\not=0.$ Then there is a ${\overline\partial}$-closed $(2,0)$ holomorphic form
non-identically zero in $U^+.$
\end{enumerate}
\end{thmspec} \proof
Since $\omega=d\varphi,$ near $\overline{U^-},$ we have $$ \langle A,\omega^p\rangle=\langle A\wedge \omega^{p-1}, d\varphi \rangle=0. $$ Hence $\{A\}=0$. If $A$ is positive closed non-zero, necessarily $\{A\} \not=0$.
It is also possible to define the cohomology class of a ${\partial\overline\partial}$-closed current in a compact K{\"a}hler manifold. It suffices to use the ${\partial\overline\partial}$-lemma, and Poincar\'e duality.
When $T$ is a real ${\partial\overline\partial}$-closed current of bi-degree $(1,1),$ it follows from basic Hodge theory, see \cite{FS} that when $\{T\}=\{\omega\},$ then $$ T=\omega+\partial \sigma+\overline{\partial\sigma}, $$ where $\sigma$ is a $(0,1)$-current. Define $$ T_c:= \omega+\partial\sigma +\overline{\partial\sigma}+{\overline\partial} \sigma+\partial\bar\sigma. $$ It is easy to check that $d T_c=0.$ If $\partial\bar \sigma=0$ on $\overline{U^+},$ then $T_c$ is closed and supported on $\overline{U^-}.$ Since $\{T_c\}=\{\omega\}$ we get a contradiction. It follows that $\partial\bar \sigma$ is not identically zero in $\overline{U^+}.$ But ${\overline\partial} \partial \bar\sigma=-\partial T$ is supported on $\overline{U^-}.$ Hence, ${\overline\partial} (\partial \bar\sigma)=0$ on ${U^+}.$ Therefore, $\partial \bar\sigma$ is a holomorphic $(2,0)$-form. \endproof
\renewcommand{Theorem 6.5}{Corollary 2.3} \begin{thmspec} Let $Z$ be a $\cali{C}^1$ hypersurface in $\mathbb{P}^n.$ There is no positive closed current of bidegree $(1,1)$ supported on $Z.$ \end{thmspec} \proof If $n=2$, this follows from Proposition 2.2, since we can consider that $Z$ bounds $\overline{U^-},$ and that $H_{\rm dR}^2(\overline{U^-})=0.$
Assume $n\geq3.$ Let $T$ be a positive closed current of bidegree $(1,1)$ supported on $Z.$ Fix a point $p\notin Z$ and consider subspaces $L_p$ of co-dimension $(n-2)$ through
$p.$ For almost all $L_p$, $L_p\cap Z$ is of class $\cali{C}^1,$ and the slice of $T$ at $L_p$ is a positive closed current. Hence using the case $n=2,$ almost all slices vanish. This is true for all $p$ out of $Z.$ It follows then, from slicing theory, that $T=0.$ \endproof
\renewcommand{Theorem 6.5}{Remark 2.4} \begin{thmspec} \rm
If $T$ is a closed and flat current, supported on $Z$ then slicing theory is valid. We obtain that $\{T\}=0.$ Indeed the class of a slice is the slice of the class. \end{thmspec}
\renewcommand{Theorem 6.5}{Corollary 2.5} \begin{thmspec}
Let $\overline{U^-}$ be a domain in $\mathbb{P}^2$ with $\cali{C}^1$ boundary. Assume $H_{\rm dR}^2({U^-})=0.$ Then there is a neighborhood of $\overline {U^-}$ which is Kobayashi hyperbolic.
\end{thmspec} \proof Otherwise, in an arbitrary neighborhood of $\overline{U^-},$ we will have a non-constant holomorphic image of $\mathbb{C}.$ This permits the construction of an Ahlfors current. In particular, we will have a positive closed current of mass one on $\overline{U^-}.$ Hence its cohomology class is non-zero, contradicting Proposition 2.2. \endproof
\section{Strictly psh functions near a compact $X$ and currents.}
Let $(M,\omega)$ be a complex Hermitian manifold of dimension $n.$
Let $X\Subset M$ be a compact set. We are interested in some general facts about the existence
of strictly psh functions near $X.$ Strictly psh functions are the starting point in order to use
H\"ormander's $L^2$ estimates, see for example J. J. Kohn \cite{Ko}. In particular, they permit to
prove regularity at the boundary, for the ${\overline\partial}$-equation.
\renewcommand{Theorem 6.5}{Proposition 3.1} \begin{thmspec} Let $X\Subset M$ be a compact set. There is a positive ${\partial\overline\partial}$-closed current $T$ of bi-dimension $(1,1)$ supported on $X$ iff there is no smooth strictly psh function $u$ in a neighborhood of $X.$ Moreover, for any $\cali{C}^2$ function $r,$ vanishing on $X$, any such current $T,$ satisfies the following equations:
\begin{equation}
\label{e:(1)}
\begin{split}
T\wedge \partial r=0,\qquad T\wedge {\partial\overline\partial} r=0.
\end{split}
\end{equation}
\end{thmspec}
\proof
The proof is essentially the same as in \cite{S3} Proposition 2.1. There, $X=\partial U$ and $U$
is smooth and pseudoconvex. Indeed, pseudoconvexity is not needed. The result is used for arbitrary $X,$ in \cite{S3} Theorem 4.3.
If $u$ is a $\cali{C}^2$ psh function in a neighborhood of $X$ and $T$ is a positive current supported on $X,$ then
$$ \langle T,i{\partial\overline\partial} u \rangle=\langle i{\partial\overline\partial} T,u \rangle. $$ So if $i{\partial\overline\partial} T=0,$ we get that $T=0,$ near every point where $u$ is strictly psh. Hence if there is a strictly psh function near $X$, then $T=0$.
We just show that any positive ${\partial\overline\partial}$-closed current $T$ of mass one supported on $X$ satisfies the above relations. The proof of the other assertions is identical to the one in \cite{S3}, mainly Hahn-Banach Theorem.
Since $T$ is ${\partial\overline\partial}$-closed then
$\langle T, i{\partial\overline\partial} r^2\rangle=0.$ Expanding and using that $T$ is positive and is supported on $\{r=0\},$ we get:
$T\wedge i\partial r\wedge\overline{\partial} r=0.$ Therefore, $T\wedge \partial r=0.$
Let $\chi$ be a smooth non-negative function with compact support. Using that $T\wedge \partial r=0,$ we get that: $$ 0=\langle T,i{\partial\overline\partial} (\chi r) \rangle=\langle T, \chi i{\partial\overline\partial} r \rangle. $$ Since $\chi$ is arbitrary, the measure, $T\wedge i{\partial\overline\partial} r=0.$ \endproof
\renewcommand{Theorem 6.5}{ Remark 3.2} \begin{thmspec} \rm
If $T$ is positive and ${\partial\overline\partial}$-closed on $M,$ then the calculus can be extended to continuous psh functions near $Support(T)$ \cite{DS3}. In fact if a function $u$ is continuous on $Support(T)$ and is locally approximable, on $Support(T),$ by continuous psh function, then
$T\wedge i{\partial\overline\partial} u =0.$
In particular, let $U$ be a pseudo-convex domain with boundary of class $\cali{C}^2$ in $\mathbb{P}^n.$ According to \cite {OS}, $U$ admits a bounded, strictly psh continuous exhaustion function $u.$
Since $\partial U$ is of class $\cali{C}^2,$ for $ p\in \partial U,$ $u$ is approximable by psh functions in a fixed neighborhood of $p.$ Indeed, it suffices to push functions in the normal direction at $p.$ It follows that for $T$ positive of bi-dimension (1,1), ${\partial\overline\partial}$-closed and supported on $\overline U,$ we have: $T\wedge i{\partial\overline\partial} u =0.$ Hence $T$ is supported on $\partial U.$
\end{thmspec}
\renewcommand{Theorem 6.5}{Corollary 3.3} \begin{thmspec} Let $X\Subset M$ be a compact subset. Assume there is no strictly psh function in a neighborhood of $X.$ Then there is a compact $X_{\infty} \subset X$ with $X_{\infty} = \overline{\bigcup_{\alpha} X_{\alpha}},$ each $X_{\alpha},$ is compact connected and every continuous psh function, in a neighborhood of $X_{\alpha}$ is constant on $X_{\alpha}.$ Moreover, there is a positive ${dd^c}$-closed current $T$ of bidimension $(1,1)$ such that $X_{\infty} =Support (T).$
For any compact $K \subset X$ with $K \cap X_{\infty} = \emptyset,$ there are strictly psh functions near $K.$
\end{thmspec}
\proof
Assume there is no strictly psh function near $X.$ Then there is a positive ${\partial\overline\partial}$-closed current $T$ of bi-dimension $(1,1)$ supported on $X,$ of mass $1.$ We can assume
$T$ is extremal and define $X'= Support (T).$ Then according to Proposition 4.2 in \cite {S3}, every continuous psh function near $X'$ is constant on $X'.$
Let $\mathcal C_{1,1}$ denote the convex compact set of positive ${dd^c}$-closed currents supported on $X,$ of bi-dimension $(1,1)$ and of mass $1.$ Let $(T_{\alpha}),$ be the family of extremal elements in $\mathcal C_{1,1}.$ Let $X_{\alpha}: = Support (T_{\alpha}).$ Define $X_{\infty} = \overline{\bigcup_{\alpha} X_{\alpha}}.$ Since $T_{\alpha}$ is extremal, its support $X_{\alpha}$ is connected.
Let $(T_n), n \geq 1,$ be a dense sequence in $\mathcal C_{1,1}.$ Define : $T_{\infty}= \sum _{n} 2^{-n} T_n ,$ then $T_{\infty} \in \mathcal C_{1,1}$ and $X_{\infty} =Support (T_{\infty}).$
As we have seen every $X_{\alpha}$ has the property that continuous psh function in a neighborhood of $X_{\alpha},$ is constant on $X_{\alpha}.$ If $K \subset X$ and $K \cap X_{\infty} = \emptyset,$ then there are strictly psh functions near $K.$
Indeed by the Krein-Milman Theorem, $\mathcal C_{1,1}$ is the closed convex hull of its extremal elements. Hence there is no positive ${\partial\overline\partial}$-closed current $T$ of bi-dimension $(1,1)$ supported on $K.$ Proposition 3.1 implies that there is a strictly psh function near $K.$
\endproof \renewcommand{Theorem 6.5}{Remarks 3.4} \begin{thmspec} \rm
\begin{enumerate}
\item Following \cite {Su}, one should call $X_{\infty} $ the Poincar\'e set of $X$ and $ T_{\infty},$ a Poincar\'e current for $X.$
There are many examples of the above decomposition in holomorphic dynamics. It could happen that $X\setminus X_{\infty}$ contains a biholomorphic image of $\mathbb{C}^2$; this is the case in the dynamics of H\'enon maps, if we take $X= \overline K^+,$ see \cite {DS4}.
If $X$ is the closure of the Torus in Grauert's example for the Levi problem, as described in \cite {S3}, then there are uncountably many $X_{\alpha},$ each one being a real torus and also the closure of an image of $\mathbb{C}.$
In fact for a current $T_{\alpha}$ as above, a continuous $T_{\alpha}$-subharmonic function (in the sense of \cite {S1}) is necessarily constant. These are the functions which are
decreasing limits of $\cali{C}^2$ functions $u,$ satisfying ${dd^c} u \wedge T_{\alpha} \geq 0.$ This is a more intrinsic property, since it depends on the ``complex directions'' of $T_{\alpha}$
i.e. the infinitesimal complex structure of $X,$ independently of any smoothness assumption.
\item A similar decomposition is given in \cite {S3} Theorem 4.3, for a domain $U$ admitting a continuous psh exhaustion function $\varphi.$ The obstruction to Steinness is the existence of positive ${\partial\overline\partial}$-closed currents supported on the level sets $\varphi = c.$
The ``Poincar\'e'' decomposition of an arbitrary domain $V \subset M,$ could be introduced following \cite {S3}, Definition 4.1. The Poincar\'e set $V_{\infty}$ is the union of the supports of Liouville currents, i.e. positive currents of bi-dimension $(1,1),$ such that $i{\partial\overline\partial} T=0$ and $T\wedge i{\partial\overline\partial} v=0,$
for every bounded continuous psh function in $V.$
One can prove that $V_{\infty}$ is the support of a Liouville current $T_{\infty}$ and that $V_{\infty}$ is 1-pseudo-convex.
\item According to \cite {FS1} Corollary 2.6, if $T$ a positive bi-dimension $(1,1)$, ${\partial\overline\partial}$-closed current, then $M \setminus Support(T)$ is $1$-pseudo-convex or with another terminology, $Support(T)$ is $1$-pseudo-concave. So $X_{\infty}$ is $1$-pseudo-concave. Hence the existence of a positive bi-dimension $(1,1),$ ${\partial\overline\partial}$-closed current, always implies the existence of a $1$-pseudo-concave set. In fact there is a Poincar\'e decomposition $(X_{\alpha})$ of $Support(T)$ and each $X_{\alpha}$ is $1$-pseudo-concave.
In dimension $2$ and if $M$ is a surface where the Levi-problem has a positive solution, there is a smooth strictly psh exhaustion function, on $M \setminus X_{\infty}.$ \item Consider a compact set $X\Subset M,$ and a closed pluripolar set $E \subset X.$ Assume $X\setminus E,$ satisfies the local maximum principle for continuous psh functions. More precisely if $p \in X\setminus E,$ and $V_p$ is a neighborhood of $p,$ disjoint from $E.$ Then for any continuous psh function $u$ near $\overline V_p$ , we have
$$ u(p) \leq \max_{z \in X\cap \partial V_p} u(z).$$
Then there is no strictly psh function on $X$ and hence there are extremal positive ${dd^c}$-closed currents supported on $X.$
The proof is basically the same as in \cite {BS}. Suppose there is a strictly psh function $u$ near $X.$ Assume it reaches its maximum on $X$ at the point $p.$
We can assume $p=0$ in a local chart, with local coordinates $z.$ Let $v$ be a psh function near $p,$ such that $v=-\infty$ on $E.$ For $\epsilon$
small enough, the function $u+ \epsilon v(z)$ will have a maximum at a point $q \notin E$ near $p.$ So we can assume $0=p \notin E.$ Then for an appropriate cut-off function $\chi$, equal to $1$ near $0,$ the function $u- \epsilon \chi(z) \|z\|^2$ will have a strict maximum at $p,$ contradicting the local maximum principle.
In particular if $X=E$ is pluripolar, either there is a strictly psh function near $E$ or it admits a Poincar\'e decomposition. \item Slodkowski \cite {Sl} has shown, that a closed set $Z \subset M$ satisfies the local maximum principle iff it is $1$-pseudo-concave. \item
It follows from Corollary 3.3 that smooth psh functions in a neighborhood of $X$ separate points in $X$ iff there is a strictly psh function near $X.$ Indeed, if they separate points, there is no $T_\alpha,$ hence there is a strictly psh function. For the converse, one can observe that if $u$ is a strictly psh function near $X,$ then a smooth function on a level set of $u$ can be extended to a strictly psh function. \end{enumerate}
\end{thmspec}
\renewcommand{Theorem 6.5}{Corollary 3.5} \begin{thmspec} Let $U\Subset M$ be a domain with $\cali{C}^2$ boundary. Ler $r$ be a defining function for $\partial U$. Let $W\subset \partial U$ denote the set of points where the Levi-form is not positive or negative definite. Assume every component of $W$ is of 2-Hausdorff measure zero. Then, there is a smooth strictly psh function $u$ in a neighborhood of $\partial U.$
\end{thmspec}
\proof
We can assume that
$
U:=\left\lbrace z\in U_1:\ r(z)<0 \right\rbrace
$ where $U_1$ is a neighborhood of $\overline U.$ We have that $\partial r$ does not vanish on
$\partial U.$
Assume there is no strictly psh function near $\partial U$. Let $T$ be an extremal positive ${\partial\overline\partial}$-closed current, of mass $1,$ supported on $\partial U$.
Recall that the Levi-form is defined on the complex tangent space of the boundary. If $ \langle\partial r(z),t \rangle=0,$ then the Levi-form at the point $z$ for the direction $t,$ is given by: $ \langle i{\partial\overline\partial} r(z),it\wedge\bar{t} \rangle.$
At points of the boundary where $i{\partial\overline\partial} r>0$ or $i{\partial\overline\partial} r<0,$ on the complex tangent space, it follows from the equation $T\wedge i{\partial\overline\partial} r=0,$
that the current $T$ has no mass there, hence it is supported on $W.$ Since it is extremal it is supported on a component of $W.$ But as observed in \cite{BS}, positive ${\partial\overline\partial}$-closed currents give no mass to sets of 2-Hausdorff measure zero. So $T=0$ and the assertion follows. \endproof
\renewcommand{Theorem 6.5}{Remark 3.6} \begin{thmspec} \rm Let $X$ be a compact set in $M.$ Let $E$ be a closed subset of $X,$ of 2-Hausdorff measure zero. Assume that for every point $p\in X\setminus E$ there is a neighborhood $V_p$ of $p$ and a continuous psh function $u_p$ in $V_p$ peaking at $p$ on $X\cap V_p.$ Then there is a strictly psh function in a neighborhood of $X.$
Indeed, one can construct a continuous psh function, $v_p$ in a neighborhood of $X$, strictly psh at $p$ and peaking at $p$. Using Remark 3.2, one shows that a ${dd^c}$-closed current $T$ supported on $X$ has no mass near $p$. As above, it follows that $T=0.$ \end{thmspec}
A similar argument gives the following. Let $X$ be a real compact sub-manifold in $M$. If the set $E$ of points in $X$ where there is a complex tangent is of 2-Hausdorff measure zero, then there is a smooth strictly psh function $u$ in a neighborhood of $X.$ Indeed any positive ${\partial\overline\partial}$-closed current of bi-dimension $(1,1),$ has to be supported on $E$.
\renewcommand{Theorem 6.5}{Corollary 3.7} \begin{thmspec} Let $C$ be a compact connected real surface in $M.$ There is a strictly psh function near $C$ iff $C$ is not a complex curve.
\end{thmspec}
\proof
It is clear that if $C$ is a complex curve, there is no strictly psh function near $C$ (by maximum principle).
Recall that the support of a positive
${\partial\overline\partial}$-closed currents, satisfies the local maximum principle for local psh functions, \cite {S1} Theorem 3.2.
Assume $C$ is not a complex curve. Let $T$ be a positive ${\partial\overline\partial}$-closed current of bi-dimension $(1,1)$ supported in $C.$ Since $C$ is a manifold, equations (3.1) allow us to consider $T$ as a current on $C.$ Let $E_c$ denote the set of points in $C$ where the tangent space is complex. Since $C$ is not a complex curve, $E_c$ admits boundary points in $C.$ The current $T$ is of bidimension $(1,1)$ and is supported on $E \subset E_c.$ Let $p$ be a boundary point of $E$ in $C.$ There are psh functions in a fixed neighborhood $V$ of $p$ with a unique peak point on $V$ near
$p.$ This contradicts the above local maximum principle. So there is no such $T.$ \endproof
\section{Constructing strictly psh functions}\label{S:Spsh}
We give a stronger version of Theorem 1.1. We do not assume, that $X$ is a domain with $\cali{C}^1$ boundary. When $X=\overline U{^-},$ is a domain with $\cali{C}^1$ boundary, it is equivalent to assume that $ H_{dR}^2(U^-)=0$ or that $\mathcal H^2(X)=0,$ as explained in Proposition 2.1.
\renewcommand{Theorem 6.5}{Theorem 4.1} \begin{thmspec}
Let $X$ be a compact set in $\mathbb{P}^n,$ $n\geq 3,$ such that $\mathcal H^2(X)=0.$ Assume that the open set $U^+:=\mathbb{P}^n\setminus X$ is strongly $(n-2)$-complete. Then there is a strictly psh function in a neighborhood of $X.$
\end{thmspec}
\proof Let $\cali{C}^\infty_{(0,1)}(X)$ denote the space of smooth forms of bidegree $(0,1)$ on $X.$ Here the smoothness is in the Whitney sense with the usual $\cali{C}^\infty$-Fr\'echet topology. Smooth functions on $X,$ in the Whitney sense, do extend as smooth functions in a neighborhood of $X.$ They also admit an intrinsic characterization using only the jet on $X,$ i.e. the collection of derivatives. The jet extends and it is the jet of a smooth function. This permits giving the space a Fr\'echet topology, that of ``uniform convergence on derivatives'' \cite {MA}.
The dual space of $\cali{C}^\infty_{(0,1)}(X)$ is the space of currents $R$ of bidegree $(n,n-1)$ on $\mathbb{P}^n,$ supported on $X.$
Let $\mathcal E$ denote the closure in $\cali{C}^\infty_{(0,1)}(X)$ of $\{{\overline\partial} u\},$ $u\in\cali{C}^\infty(X).$ We want to use the Hahn-Banach Theorem to show that $\varphi^{0,1}$ is in $\mathcal E.$ Let $R$ be a current, supported on $X,$ vanishing on the subspace $\mathcal E.$ We need to show that $\langle R,\varphi^{0,1}\rangle =0.$ Since the current $R$ is supported on $X,$ and vanishes on the subspace $\mathcal E,$ then ${\overline\partial} R=0$ on $\mathbb{P}^n.$ It follows from $H^{n,n-1}(\mathbb{P}^n)=0$ that there is $S$ of bidegree $(n,n-2)$ such that $R={\overline\partial} S.$ Moreover, $S$ is smooth on $U^+.$ Indeed, $S$ is constructed using canonical solutions of the Hodge Laplacian, which satisfy the same regularity as the right hand side. Here we are using the local regularity in Hodge theory, \cite {DE}.
The Andreotti-Grauert Theorem implies that on $U{^+},$ since ${\overline\partial} S=0,$ there is a form $B$ such that $S={\overline\partial} B,$ on $U{^+}.$ Let $V\supset X$ be an open neighborhood of $X$ such that on $V,$ $$ \omega=d\varphi=\partial \varphi^{0,1}+{\overline\partial} \varphi^{1,0},\quad {\overline\partial} \varphi^{0,1}=0. $$ Let $\chi$ be a cutoff function with $\chi=1$ in a neighborhood of $U^+\setminus V$ and vanishing near $X.$ Then $R={\overline\partial} S={\overline\partial}[S-{\overline\partial} (\chi B)].$ Observe that $S_1:= S-{\overline\partial} (\chi B)$ is supported on $V,$ where ${\overline\partial}\varphi^{0,1}=0.$ Hence, $$ \langle R,\varphi^{0,1}\rangle =\langle {\overline\partial} S_1, \varphi^{0,1}\rangle=-\langle S_1,{\overline\partial} \varphi^{0,1}\rangle=0. $$ It follows, by Hahn-Banach theorem, that $\varphi^{0,1}\in \mathcal E.$ Hence, there is a family $(u_\epsilon)$ of smooth functions such that ${\overline\partial} u_\epsilon\to \varphi^{0,1}$ in $\cali{C}^\infty(X).$ Then $\omega=\lim_{\epsilon\to 0} i{\partial\overline\partial} \big({u_\epsilon-\bar u_\epsilon\over i}\big).$
As a consequence, for $\epsilon>0$ small enough and $v_\epsilon:= {u_\epsilon-\bar u_\epsilon\over i},$ $i{\partial\overline\partial} v_\epsilon\geq {1\over 2}\omega$ on $X.$ Hence, $v_\epsilon$ is strictly psh near $X.$ \endproof
To get Theorem 1.1, we should take $X=\overline U{^-}.$
In order to prove Corollary 1.2, we will use the following version of the maximum principle, implicit in \cite {S1}.
\renewcommand{Theorem 6.5}{Lemma 4.2} \begin{thmspec} Let $\rho$ be a function of class $\cali{C}^2$ in a neighborhood of a closed ball $ \overline B$ in $\mathbb{C}^k.$ Assume that for every point $ z\in B,$ there is a direction $t_z$ such that $\langle i{\partial\overline\partial} \rho(z),it_z\wedge\bar{t_z} \rangle>0.$ Then there is no local maximum of $\rho$ in $B.$
\end{thmspec}
\proof
Assume by contradiction, that $\rho$ has a local maximum at a point $p\in B.$ Consider a complex disc $D_p$, at p in the direction $t_p.$ The restriction of $\rho$ to $D_p$ is strictly
subharmonic on $D_p,$ near $p.$ It cannot have a local maximum at $p.$
\endproof
We now prove Corollary 1.2.
\proof
Assume to get a contradiction that the component $U^-,$ satisfies $H^2(\overline{U^-})=0.$ According to Theorem 4.1, there is a strictly psh function $v$ near $Z.$
There is a point $z_0\in Z$ where $v(z_0)=\max_Z v.$ Using that $v$ is strictly psh, we can assume the maximum at $z_0$ is strict and $v(z_0)=0.$ Indeed, it suffices to add a negative small perturbation vanishing to second order at $z_0.$ So $\left\lbrace v<0\right\rbrace$ is a strictly pseudoconvex domain near $z_0.$ Hence there is a germ of complex hypersurface $W$ tangent to $U^-$ at $z_0$ and such that $W \setminus \{z_0\}\subset U^+.$ Then $W \setminus \{z_0\}$
is strongly $(n-2)$-complete, with an exhaustion function $\rho$ whose Levi form has $2$ strictly positive eigenvalues at each point, going to $+\infty$ at $z_0.$ Recall that the restriction of a
strongly $q$-complete function to a submanifold is still $q$-complete.
Without loss of generality, we can assume that $W \setminus \{z_0\}$ is a pointed ball $B^{*}$ of dimension $(n-1).$ We can find a sequence of balls $(B_j)$ of dimension $(n-2)$ whose centers $(z_j)$ converge to $z_0.$ On $B_j,$ the function $\rho$ has a Levi form with one strictly positive eigenvalue. It satisfies the maximum principle given in Lemma 4.2. Since $\rho$ is uniformly
bounded on $\bigcup (\partial B_j),$ it cannot converge to $+\infty$ at $z_0.$ This finishes the proof. The last part shows that the pointed ball $B^{*}$ of dimension $(n-1)$ is not strongly $(n-2)$-complete.
\endproof
\section{${\overline\partial}$ equation on pseudo-concave sets}\label{S:dbar}
\renewcommand{Theorem 6.5}{Theorem 5.1} \begin{thmspec} Let $X$ be a compact set in $\mathbb{P}^n,$ $n\geq 3.$ Assume $U^+:=\mathbb{P}^n\setminus X$ is pseudoconvex (hence Stein). Then, the following properties hold. \begin{itemize} \item[(i)] Let $\beta$ be a smooth $(0,1)$-form on $X$ such that ${\overline\partial} \beta=0$ in $\cali{C}^\infty_{(0,1)}(X).$ Then for each integer $k,$ there is a function $v\in\cali{C}^k(X)$ such that ${\overline\partial} v= \beta.$ \item[(ii)] If $\mathcal H^2(X)=0,$ then there is a strictly psh function near $X.$
\end{itemize} \end{thmspec}
Recall that on a pseudoconvex domain $U$ in $\mathbb{P}^n,$ Takeuchi \cite{T} and Elencwajg \cite{E}, proved that if $\delta$ denotes the distance to the boundary of $U$ (with respect to the Fubini-Study metric $\omega$), there is a constant $C$ such that near the boundary of $U,$ $$ i{\partial\overline\partial} (-\log \delta)\geq C\omega. $$
In particular, pseudoconvex domains are Stein. Indeed, the result is valid when $U$ is pseudoconvex in a compact K\"ahler manifold $M,$ with positive holomorphic bisectional curvature. Moreover the constant $C$ depends only on the curvature. See \cite{E}, in particular, inequality (39), and Greene-Wu \cite{GW}.
We will use the following result which is a consequence of Serre's duality and H\"ormander's estimates. One solves the ${\overline\partial}$-equation with the weight $e^\varphi,$ with $\varphi$ psh instead of the classical $e^{-\varphi},$ one need a vanishing of the forms on the boundary, see for example \cite{BS}. \renewcommand{Theorem 6.5}{Theorem 5.2} \begin{thmspec} Let $U$ be a pseudoconvex domain in $\mathbb{P}^n.$ Let $\alpha\in L^2_{p,q}(U,{loc}),$ $q<n,$ be a ${\overline\partial}$-closed form such that for a given $s >0$ $$
\int |\alpha|^2{1\over \delta^{s+2}} d\lambda <\infty. $$ Then there exists $u\in L^2_{p,q-1}(U,{loc})$ such that $$ {\overline\partial} u=\alpha $$ and $$
\int |u|^2{1\over \delta^{s+2}} d\lambda\leq
{1\over C}\int |\alpha|^2{1\over \delta^{s+2}} d\lambda.
$$
Here $d\lambda$ denotes the volume form associated to $\omega.$ \end{thmspec} \proof We can consider that $\beta$ is extended as a smooth $(0,1)$-form in $\mathbb{P}^n$ and that ${\overline\partial}\beta$ vanishes to infinite order on $X.$ Indeed, the jet of $\beta$ on $X$ satisfies ${\overline\partial}\beta=0$ as a jet. So the extension of $\beta$ will satisfy the asserted property.
Let $\delta$ denote the Fubini-Study distance to $X$ on $U^+.$ We know that $\varphi=-\log\delta$ is psh. We use the above H\"ormander's type result. For $s>0,$ there is an $l\geq 2$ and a form $\psi,$ such that ${\overline\partial} \psi={\overline\partial} \beta$ on $U^+,$ with the following estimate $$
\int |\psi|^2{1\over \delta^{s+l}} d\lambda\leq
C_s\int |{\overline\partial} \beta|^2{1\over \delta^{s+l}} d\lambda<\infty. $$ We can choose $s=2n+l.$ Hence for $z\in U^+,$ and $k>0$ fixed, \begin{eqnarray*}
{|\psi(z)|^2\over \delta^k(z)}&\lesssim & {1\over\delta^{2n+k}(z) }\int_{B(z,\delta(z))}|\psi|^2+ {1\over \delta^k(z)} \sup\limits_{B(z,\delta(z))} |{\overline\partial} \psi|\\
&\lesssim& \int {|\psi(z)|^2\over \delta^{2n+k}(z)}+o(\delta)\\
&\lesssim& \int {|{\overline\partial}\beta|^2\over \delta^{2n+l+k}(z)}+o(\delta)\leq C. \end{eqnarray*}
Hence, $|\psi(z)|=O(\delta^k).$ Consequently, $\psi$ vanishes on $X$ to any given fixed order. If extended by zero on $X,$ it is in $\cali{C}^k(\mathbb{P}^n).$ We can now solve in $\mathbb{P}^n$ the equation $$ {\overline\partial} u=\beta -\psi. $$ The restriction of $u$ to $X$ satisfies ${\overline\partial} u=\beta$ and is in $\cali{C}^k(X).$
If $\mathcal H^2(X)=0,$ then $\omega=\partial \varphi^{0,1}+{\overline\partial} \varphi^{1,0}$ near $X,$ with ${\overline\partial}\varphi^{0,1}=0$ near $X.$ We solve ${\overline\partial} u=\varphi^{0,1}$ on $X.$ Then $\omega={\partial\overline\partial} u+{\overline\partial} \partial \bar u=i{\partial\overline\partial} \big( {u-\bar u\over i}\big).$ The function $ v:={u-\bar u\over i}$ is strictly psh on $X$ and hence in a neighborhood of $X.$ \endproof
\renewcommand{Theorem 6.5}{Remarks 5.3} \begin{thmspec}\rm
\begin{enumerate}
\item[1.] A similar result can be obtained for $(p,q)$-forms with $q+1<n.$
\item[2.] Suppose $U^+$ is a Stein domain with smooth boundary and that $H^{2n-2}(U^+)\not=0.$
As we have seen, there is a strictly psh function on a neighborhood of $\partial U^+.$ Then a theorem of J. J. Kohn \cite{Ko} asserts that one can solve
the ${\overline\partial}$-equation in the Sobolev spaces $H^s(U^+)$ for $s$ large enough. One can solve it also in $\cali{C}^\infty.$
\item[3.] The results are valid if we replace $\mathbb{P}^n,$ by a compact simply connected K\"ahler manifold $M$, of dimension $n\geq 3,$ with positive holomorphic bisectional curvature, such that $H^{0,1}(M)=0,$
and $H^{1,1}(M)$ is one-dimensional. \item[4.] To get a strictly psh function on a neighborhood of $X,$ it is enough to assume the existence of a $1$-form $\psi$ near $X$ with $\partial \psi^{0,1}+{\overline\partial} \psi^{1,0}>0$ on $X$ and ${\overline\partial} \psi^{0,1}=0$ on $X.$ This is satisfied when $\mathcal H^2(X)=0.$
Observe however that the existence of such a $1$-form $\psi$ implies the following geometric condition on $X.$ There is no closed current $T$ of dimension $2$, with $\{T\}\not=0,$ such that the component of bi-dimension $(1,1), T_{1,1}$ is positive. Indeed, since $T$ is closed and on $X,$ it follows that ${\partial\overline\partial} (T_{1,1})=0.$ We then have, since ${\overline\partial} \psi^{0,1}=0,$ $$ 0=\langle T,d\psi\rangle= \langle T_{1,1},\partial\psi^{0,1} + {\overline\partial} \psi^{1,0}\rangle. $$ This implies that $T_{1,1}=0,$ and hence $\{T\}=0.$ \end{enumerate}
\end{thmspec}
\section{The case of surfaces}\label{S: dim2}
Let $U^-$ be a pseudoconvex domain in $\mathbb{P}^2$ with $\cali{C}^2$-boundary. Let $r$ be a defining function for the boundary $Z=\partial U^-.$ If for every $z\in Z,$ $$\langle i{\partial\overline\partial} r(z),it\wedge\bar{t} \rangle= 0\qquad\text{if}\qquad \langle\partial r(z),t \rangle=0$$ then we say that the boundary is Levi flat.
It is not known if such domains exist, even if we assume that the boundary is real analytic. However, for an arbitrary K\"ahler surface $M$ and a Levi-flat hypersurface $Z=\partial U^-$, there is a positive ${\partial\overline\partial}$-closed current $T$ of mass 1 directed by the foliation on $Z.$
It is shown in \cite{FS} for $\mathbb{P}^2$ and in \cite{DNS} in general, that when there is no positive closed current of mass 1 directed by the foliation, then $T$ is unique. As we have seen in Proposition 2.2, for $\mathbb{P}^2$ or more generally for simply connected K\"ahler surfaces, there is no positive closed current on $Z.$ Indeed, we can assume that $\overline {U^-}$ satisfies $H_{dR}^2(\overline{ U^-})=0.$
\renewcommand{Theorem 6.5}{Theorem 6.1} \begin{thmspec} Let $X$ be a compact set in a compact K\"ahler surface $M$, such that $ \mathcal H^2(X)=0.$ Assume $X$ supports a positive ${\partial\overline\partial}$-closed current $T$ of mass $1.$ Then there is a real analytic $(0,1)$-form $\varphi^{0,1},$ ${\overline\partial}$-closed in a neighborhood of $X$ and such that there is no function $u$ in the Sobolev space $W^{2}(M),$ with ${\overline\partial} u=\varphi^{0,1}$ on $X.$
In particular if $X$ is a real hypersurface, there is no solution $u \in W^{3/2}(X)$ for the equation ${\overline\partial} u=\varphi^{0,1}.$ \end{thmspec}
\proof For arbitrary $X$ the meaning of ${\overline\partial} u=\varphi^{0,1}$ on $X$ is that for every current $R \in W^{-1}(M),$ supported on $X,$ $\langle R,\varphi^{0,1}\rangle= \langle R, {\overline\partial} u\rangle.$ If $X$ is a $\cali{C}^1$ hypersurface, a function $u \in W^{3/2}(X)$ extends as a function in $W^{2}(M),$ and a function in $W^{2}(M)$ restricts to $W^{3/2}(X),$ so ${\overline\partial} u=\varphi^{0,1}$ on $X$ makes sense and the two notions coincide.
Since $\mathcal H^2(X)=0,$ the current $T$ is not closed and hence $\partial T$ is non-zero. It is shown in \cite{FS} that if $T\geq 0,$ ${\partial\overline\partial} T=0$ and $\int T\wedge \Omega=1$ then $T=\Omega+\partial S+\overline{\partial S}+i{\partial\overline\partial} v,$ with $S,$ $\partial S,$ ${\overline\partial} S\in L^2$ and $v\in L^p$ for all $p<2.$ Here $\Omega$ is smooth and represents the class of $T.$
It follows that \[ \partial T=-{\overline\partial}\partial \bar S\] is in the Sobolev space $W^{-1}(M).$ If $u\in W^2(M),$ since ${\partial\overline\partial} T=0,$ \[ \langle \partial T, {\overline\partial} u\rangle=0.\] On the other hand in a neighborhood of $X,$ $\omega=\partial \varphi^{0,1}+{\overline\partial} \varphi^{1,0},$ ${\overline\partial} \varphi^{0,1}=0.$ Hence $$ 1=\langle T,\omega\rangle=-2{\rm Re} \langle \partial T,\varphi^{0,1}\rangle. $$
So $\langle \partial T,\varphi^{0,1}\rangle\not=0.$ Hence we cannot have $\varphi^{0,1} = {\overline\partial} u$ on $X,$
with $u$ having an extension in $W{^2}(M).$
\endproof
\renewcommand{Theorem 6.5}{Remark 6.2} \begin{thmspec} \rm
\item If $X=\partial U^-,$ is Levi-flat with $H_{dR}^2(\overline{ U^-})=0,$ then $\partial T$ is of order zero \cite {FS} , and the same proof shows there is no continuous function $u$ on $\overline{U^-}$
such that ${\overline\partial} u=\varphi^{0,1}.$
\end{thmspec}
\renewcommand{Theorem 6.5}{Question}\rm \begin{thmspec} Suppose $U$ is a smooth pseudoconvex domain in $\mathbb{P}^n,$ $n\geq 2.$ Assume $H_{dR}^2( {U})=0.$ Is there a strictly psh function near the boundary? \end{thmspec} There is an example of a compact K\"ahler surface $M,$ with a Stein domain $U \subset M$ with real analytic boundary, but all bounded psh functions in $U$ are constant,\cite {OS} .
\renewcommand{Theorem 6.5}{Theorem 6.3} \begin{thmspec} Let $M$ be a compact complex manifold of dimension $n$. Let $Z$ be an irreducible real-analytic set of real dimension $m.$ Suppose that there is in $Z$ a germ of a complex analytic set of dimension $p\geq 1.$ Define $A_p$ as the set of points $z\in Z,$ with a germ of complex analytic set of dimension $\geq p$ through $z.$
Then $A_p$ is closed.
If there is a strictly psh function near $Z$ then $A_1$ is empty. \end{thmspec} \proof It is possible to cover $Z$ with finitely many open sets $(V_i)$ such that on $V_i,$ the equation of $Z,$ is
$\rho_i (z,\overline z)=0.$ Here $\rho_i(z,\overline z)$ a real-valued analytic function in $z,\overline z$. We can assume that the functions $\rho_i(z,w)$ are holomorphic in
$V_i\times V_i'$ , where $V_i'$ is the image of $V_i$ by conjugation.
Consider a germ of a complex analytic set $L$ parametrized by a holomorphic map $u:\mathbb{D}^p\to L$. Then, the function $\rho_i(u(t),\overline u(s))$ on $\mathbb{D}^{2p}$ vanishes when $s=\overline t$, where $\overline u(s):=\overline{u(\overline s)}$. Since this function is holomorphic, its zero set is a complex analytic set. Hence, it vanishes everywhere in $\mathbb{D}^{2p}$. Fixing an arbitrary $s$, we deduce that $\rho_i(z, u(s))=0$ for $z\in L$.
Define $ L' := \left\lbrace z\in V_i, \bigcap_{s} \rho_i(z, u(s))=0 \right\rbrace. $ Then $L'$ is an extension of the germ $L$ to an analytic set in $V_i,$ contained in $Z.$ The size of the $V_i$'s is fixed. It follows easily that $A_p$ is closed. This is precisely the Segre argument, see \cite { DS2} and \cite[Example 7]{FS2}.
If $A_1$ is non-empty, then it satisfies the local maximum principle for psh, functions in a neighborhood. If $u$ is strictly psh in a neighborhood of $Z,$ it reaches it's maximum on $A_1$ at a point $p.$ As in Remark 3.4 (4), we can arrange that the maximum is strict. A contradiction. \endproof \renewcommand{Theorem 6.5}{Theorem 6.4} \begin{thmspec}
Let $M$ be a compact complex surface. Let $U$ be a smooth domain with connected real analytic boundary $Z.$ Assume that $Z$ admits a point of strict pseudoconvexity. Then either there is a compact complex curve on the boundary, or there is a strictly psh function near the boundary. In particular if $Z \subset \mathbb{P}^2,$ there is a strictly psh function in a neighborhood of $Z.$ \end{thmspec} \proof Suppose there is no compact complex curve on the boundary. Let $S$ be the union of strictly pseudoconvex points and strictly pseudoconcave points on $Z.$ Let $W= Z\setminus S$. Since the defining function $r$ is real analytic, then $W$ is a real analytic set of dimension $\leq2$. It admits a stratification by smooth manifolds. If there is a curve of real dimension $2$ with complex tangents, then it is a complex curve. By the above theorem it has no boundary and it is necessarily of finite area, since this is the case for $W.$ Hence it is a compact complex curve. Since this is not possible, then the set $C$ with complex tangents is at most of finite one dimensional Hausdorff measure.
On the other hand, if there is no strictly psh function near the boundary, there is a positive ${\partial\overline\partial}$-closed current $T$ of bi-dimension $(1,1)$ of mass 1 supported on $\partial U.$ Such a current has no mass on the set of strictly pseudoconvex points, nor on the set of strictly pseudoconcave points, as follows from equations (3.1). Hence it is supported on $W.$ Since $T$ is of bi-dimension$(1,1)$ it is supported on $C$. But such currents don't give mass to sets of 2-Hausdorff dimension zero. See \cite {S3} for more details on the geometry of such currents. It follows that there is a strictly psh function near $\partial U.$ \endproof \renewcommand{Theorem 6.5}{Remark} \begin{thmspec} \rm It is shown in \cite { DS2} and \cite[Example 7]{FS2} that no Levi-flat hypersurface $Z$ exists in $\mathbb{P}^2$ if we assume it is smooth and real algebraic. Hence, there is a point of strict pseudoconvexity. It follows that for smooth real algebraic surfaces in $\mathbb{P}^2$ there is a strictly psh function in a neighborhood of $Z.$
\end{thmspec}
\renewcommand{Theorem 6.5}{Theorem 6.5} \begin{thmspec} Let $X$ be a compact set in a compact K\"ahler surface $M$. \begin{enumerate}
\item [1.] If $X$ is (locally) pluripolar, then either there is a positive closed current $T$ of bi-degree $(1,1),$ and of mass one supported on $X,$ or there is a strictly psh function in a neighborhood of $X.$
\item[2.] If $X$ is of Lebesgue measure zero, either there is a strictly psh function in a neighborhood of $X$ or a positive closed current $T$ of mass one supported on $X,$ or there is a $(2,0)$-form $A$ in $L_{(2,0)}^2 (M)$ of norm $1$ and holomorphic in $M\setminus X$.
\end{enumerate}
\end{thmspec} \proof Assume $X$ is pluripolar. If there is no strictly psh function near $X,$ then there is a positive ${\partial\overline\partial}$-closed current $T$ of mass $1,$ supported on $X.$
Moreover, as we have seen that,
$T=\Omega+\partial S+\overline{\partial S}+i{\partial\overline\partial} u,$ with $S,$ $\partial S,$ ${\overline\partial} S\in L^2$ and $u\in L^p$ for all $p<2.$ Hence $\partial T=-{\overline\partial}\partial \bar S.$ So $\partial \bar S$ is holomorphic out of $X$, which is pluripolar. Since holomorphic functions in $L^2,$ extend holomorphically through pluripolar sets, it follows that ${\overline\partial}\partial \bar S=0$ in $M$. Hence $T$ is closed.
If we assume only that $X$ is of Lebesgue measure zero, and there is no positive closed current supported on $X$, then $A=\partial \bar S$ is a non-identically zero, holomorphic form in $M\setminus X$, which is in $L^2$. \endproof
\end{document} | arXiv |
Turn a glass of water upside down without letting the water fall out. The secret to this trick involves some basic lessons in air pressure. Best performed over a teacher's head.
Fig. 1: The upside-down water trick.
Fill a glass part way with water. Turn it upside-down. You now have water on the floor. Why did you listen to me?
Pour water in the same glass again. Put an index card over the mouth of the glass and press the palm of your hand on the index card, pressing the card against the rim of the glass and depressing it slightly into the glass in the center (this part is very important). While your hand is on the index card over the mouth of the glass, invert the glass and slowly take your hand away. If you hold the glass steady and level, the water should remain in the glass (Fig. 1).
Why doesn't the water fall out of the glass with the index card?
The answer has to do with air pressure. Any object in air is subject to pressure from air molecules colliding with it. At sea level, the mean air pressure is one "atmosphere" (=101,325 Pascals in standard metric units). This air pressure is pushing up on the card from below, while the water is pushing down on the card from above. The force on the card is just the pressure times the area over which the pressure is applied; that's the definition of pressure. $$Force=Pressure\times Area$$ If you've done the trick correctly, the force from the air below exactly counteracts the force from the water above, and the card stays in place.
Fig. 2: Diagram showing the relevant forces on the water. The blue arrows indicate the forces due to air pressure above and below the water. The red arrow indicates the force of gravity. Together, the three forces balance out to cancel each other.
The details of this delicate balance are more easily understood by looking at the forces on the water, rather than on the card (see Figure 2). The card transfers the force of the air pressure upward to the water, so there is a pressure of (almost1) one atmosphere pushing up on the water from below. Of course there is also pressure from the air inside the glass pushing down on the water from above. The air inside the glass was originally at one atmosphere of pressure when you put the card over it, but when you inverted the glass and removed your hand, the water moved downward a very slight amount (perhaps making the card sag ever so slightly), thereby increasing the volume allotted to the air.
As the air expands to fill this increased volume, several things happen at once. The air molecules spread out so that fewer of them hit the edges of the volume each second, and they slow down so that they don't collide with the container quite as forcefully. As a result, the air pressure goes down a tiny bit according to Boyle's Law. Now the pressure inside the glass pushing down is not as great as the outside pressure pushing up, and this pressure difference is enough to counteract the gravitational force pulling down on the water. Once the card sags enough so that these three forces balance, everything will stay put. For a typical sized glass about half full of air, an air volume increase of less than 1% generates a big enough pressure difference to support the weight of the water.
There is another separate effect that helps keep the water in the glass. Water molecules have a strong attractive "cohesive" force between them due to the fact that each water molecule can make four hydrogen bonds with other water molecules. (This cohesive force is the origin of surface tension.) In the upside-down glass, it helps prevent the first water drop from separating from the rest of the water volume. As a result, the pressure difference required to keep the water in the glass is less than would be needed if there were no cohesive force. In containers with a small opening, like a straw, cohesion plays a bigger relative effect. This is why you can keep water in a straw just by putting your finger over the top, leaving the bottom open. Cohesion adds the extra force necessary to overcome small instabilities in the water.
Why doesn't the water stay in the glass when we don't use the index card?
This is really an issue of stability. In principle, if we could invert the glass of water so that the glass was perfectly level and the water was perfectly still, the forces would balance as before and the water would stay in the glass. In practice, it's impossible to achieve these conditions without the help of the card. If the glass is tilted ever so slightly to one side, or if there is a tiny ripple in the surface of the water, a drop of water will fall out of the glass on the low side, and a bubble of air will enter on the high side to make up the missing volume. Then another drop of water will fall out and another bubble of air will enter, and the process will accelerate until all the water is emptied out of the glass. With the index card in place, the water surface is kept flat and the pressure is evenly distributed over the entire mouth of the glass.
For much smaller openings, surface tension is enough to stabilize the surface, and we actually don't need the index card. Surface tension demands a certain minimum size for a drop to form; as the first water molecules begin to fall, they pull other moleules along with them until there is enough weight to overcome surface tension and separate a drop. In a narrow straw, there isn't enough room in the opening for both a drop of water to fall out and a bubble of air to flow in at the same time.
Does the shape of the glass matter?
Only to a small extent. A glass that is tapered, with the base smaller than the mouth as in Fig. 2, is a little easier than a bottle with a narrow mouth and a wide base. The reason for this is that in the case of the bottle, the card has to sag by a bigger amount in order to generate the necessary volume (and pressure) change. If the card sags too much, it is likely that some water will dribble out the crack on one side and some air will bubble in on the other, and the balance will become unstable.
Note for geeks: In the case of the tapered glass, it might be tempting to think that even if the air pressure were the same on top and bottom, the force pushing down on the water from above is smaller than the force pushing up from below because the area is smaller above the water than below. However, this argument fails to take into account the force from the sides of the glass. If the glass is tapered, the sides of the glass exert a force that has a small downward component, and this component exactly makes up for the reduced area directly above the water. If the air pressure above the water is exactly equal to the air pressure below the water, the upward and downward forces (counting the sides of the glass) are also exactly equal.
Does the water always fall out of your glass?
Try using a lighter more flexible material across the mouth of the glass. A heavy, very rigid plate won't work very well. Remember to press into the glass a little bit before you turn it over.
Make sure the glass is perfectly rigid. If you use a soft plastic cup, the cup will compress as the water sags, preventing a pressure difference from building up. Use a glass that has a mouth bigger than the base (see "Does the shape of the glass matter? above).
Does the water soak through the index card too quickly and make a mess? Try using a foam picnic plate instead of an index card. The foam plate is impervious to water, but it still provides the flexibility needed to depress the plate slightly into the glass before turning it over.
For students who already have the concept of air pressure, it's often worthwhile to let the class brainstorm about why the water stays in the glass before leading them through an explanation. In this case, you might let them experiment with both a rigid glass and a soft plastic cup (which won't hold the water — see "troubleshooting" above) in order to identify the important difference. Give the plastic cup to your most troublesome student and stand back.
For students who know calculus, it might be a good exercise for them to try to calculate the optimal amount of air to leave in the glass. Use a cylindrical glass instead of a tapered glass to make the calculation a little easier. Have them derive an expression for the distance the water must fall in order to balance forces. They will want to minimize this distance as a function of the height of the air column. The solution is a somewhat messy quadratic equation, but they can plug in typical numbers for the height of the glass, the density of water, the density of air, and assorted physical constants, to get a numeric result.
1. Strictly speaking, the upward force on the water is actually the upward force of the air pressure on the card, reduced by the weight of the card, which is assumed to be very light.
This is explained very well.
This is explained very well. Great lesson here saving this for sure! | CommonCrawl |
Glasgow Mathematical Journal
ON WITTEN MULTIPLE ZETA-FUNCTIO...
Guo, Li Paycha, Sylvie and Zhang, Bin 2017. Resurgence, Physics and Numbers. p. 299.
ON WITTEN MULTIPLE ZETA-FUNCTIONS ASSOCIATED WITH SEMI-SIMPLE LIE ALGEBRAS V
Lie algebras and Lie superalgebras
Zeta and $L$-functions: analytic theory
YASUSHI KOMORI (a1), KOHJI MATSUMOTO (a2) and HIROFUMI TSUMURA (a3)
Department of Mathematics, Rikkyo University, Nishi-Ikebukuro, Toshima-ku, Tokyo 171-8501, Japan e-mails: [email protected]
Graduate School of Mathematics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan e-mails: [email protected]
Department of Mathematics and Information Sciences, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397, Japan e-mails: [email protected]
Published online by Cambridge University Press: 26 August 2014
We study the values of the zeta-function of the root system of type G2 at positive integer points. In our previous work we considered the case when all integers are even, but in the present paper we prove several theorems which include the situation when some of the integers are odd. The underlying reason why we may treat such cases, including odd integers, is also discussed.
COPYRIGHT: © Glasgow Mathematical Journal Trust 2014
1.Apostol, T. M., Introduction to analytic number theory (Springer, New York, NY, 1976).
2.Bourbaki, N., Groupes et algèbres de Lie, chapitres 4, 5 et 6 (Hermann, Paris, France, 1968).
3.Humphreys, J. E., Introduction to Lie algebras and representation theory, Graduate Texts in Mathematics 9 (Springer-Verlag, New York, NY, 1972).
4.Humphreys, J. E., Reflection groups and coxeter groups (Cambridge University Press, Cambridge, UK, 1990).
5.Komori, Y., Matsumoto, K. and Tsumura, H., Zeta-functions of root systems, in Proceedings of the conference on L-functions, Fukuoka, Japan, 2006) (Weng, L. and Kaneko, M., Editors) (World Science Publisher, Hackensack, NJ, 2007), 115–140.
6.Komori, Y., Matsumoto, K. and Tsumura, H., Zeta and L-functions and Bernoulli polynomials of root systems, Proc. Japan Acad. Ser. A 84 (2008), 57–62.
7.Komori, Y., Matsumoto, K. and Tsumura, H., On Witten multiple zeta-functions associated with semisimple Lie algebras II, J. Math. Soc. Japan 62 (2010), 355–394.
8.Komori, Y., Matsumoto, K. and Tsumura, H., On multiple Bernoulli polynomials and multiple L-functions of root systems, Proc. London Math. Soc. 100 (2010), 303–347.
9.Komori, Y., Matsumoto, K. and Tsumura, H., Functional relations for zeta-functions of root systems, in Number theory: Dreaming in dreams – proceedings of the 5th China-Japan seminar (Aoki, T., Kanemitsu, S. and Liu, J.-Y., Editors) (World Science Publisher, Hackensack, NJ, 2010), 135–183.
10.Komori, Y., Matsumoto, K. and Tsumura, H., On Witten multiple zeta-functions associated with semisimple Lie algebras III, in Multiple Dirichlet series, L-functions and automorphic forms (Bump, D., Friedberg, S. and Goldfeld, D., Editors), Progress in Mathematics Series, vol. 300 (Birkhäuser/Springer, New York, NY, 2012) 223–286.
11.Komori, Y., Matsumoto, K. and Tsumura, H., On Witten multiple zeta-functions associated with semisimple Lie algebras IV, Glasgow Math. J. 53 (2011), 185–206.
12.Komori, Y., Matsumoto, K. and Tsumura, H., Zeta-functions of weight lattices of compact connected semisimple Lie groups, preprint, arXiv:math/1011.0323.
13.Matsumoto, K., Nakamura, T., Ochiai, H. and Tsumura, H., On value-relations, functional relations and singularities of Mordell-Tornheim and related triple zeta-functions, Acta Arith. 132 (2008), 99–125.
14.Matsumoto, K., Nakamura, T. and Tsumura, H., Functional relations and special values of Mordell-Tornheim triple zeta and L-functions, Proc. Amer. Math. Soc. 136 (2008), 2135–2145.
15.Matsumoto, K. and Tsumura, H., On Witten multiple zeta-functions associated with semisimple Lie algebras I, Ann. Inst. Fourier 56 (2006), 1457–1504.
16.Nakamura, T., A functional relation for the Tornheim double zeta function, Acta Arith. 125 (2006), 257–263.
17.Nakamura, T., Double Lerch series and their functional relations, Aequationes Math. 75 (2008), 251–259.
18.Nakamura, T., Double Lerch value relations and functional relations for Witten zeta functions, Tokyo J. Math. 31 (2008), 551–574.
19.Okamoto, T., Multiple zeta values related with the zeta-function of the root system of type A 2, B 2 and G 2, Comment. Math. Univ. St. Pauli 61 (2012), 9–27.
20.Onodera, K., Generalized log sine integrals and the Mordell–Tornheim zeta values, Trans. Amer. Math. Soc. 363 (2011), 1463–1485.
21.Tornheim, L., Harmonic double series, Amer. J. Math. 72 (1950), 303–314.
22.Tsumura, H., On Witten's type of zeta values attached to SO(5), Arch. Math. (Basel) 82 (2004), 147–152.
23.Witten, E., On quantum gauge theories in two dimensions, Comm. Math. Phys. 141 (1991), 153–209.
24.Zagier, D., Values of zeta functions and their applications, in First European Congress of Mathematics vol. II (Joseph, A.et al. Editors), Progress in Mathematics Series, vol. 120 (Birkhäuser, Basel, Switzerland, 1994), 497–512.
25.Zagier, D., Introduction to multiple zeta values, Lectures at Kyushu University (1999, unpublished note).
26.Zhao, J., Multi-polylogs at twelfth roots of unity and special values of Witten multiple zeta function attached to the exceptional Lie algebra $\mathfrak{g}_2$, J. Algebra Appl. 9 (2010), 327–337.
URL: /core/journals/glasgow-mathematical-journal
MSC classification
11M41: Other Dirichlet series and zeta functions
17B20: Simple, semisimple, reductive (super)algebras
40B05: Multiple sequences and series (should also be assigned at least one other classification number in this section) | CommonCrawl |
\begin{document}
\title{On the Minimum Consistent Subset Problem
}
\begin{abstract}
Let $P$ be a set of $n$ colored points in the plane. Introduced by Hart (1968), a {\em consistent subset} of $P$ is a set $S\subseteq P$ such that for every point $p$ in $P\setminus S$, the closest point of $p$ in $S$ has the same color as $p$. The consistent subset problem is to find a consistent subset of $P$ with minimum cardinality. This problem is known to be NP-complete even for two-colored point sets. Since the initial presentation of this problem, aside from the hardness results, there has not been significant progress from the algorithmic point of view. In this paper we present the following algorithmic results:
\begin{enumerate}
\item The first subexponential-time algorithm for the consistent subset problem.
\item An $O(n\log n)$-time algorithm that finds a consistent subset of size two in two-colored point sets (if such a subset exists). Towards our proof of this running time we present a deterministic $O(n \log n)$-time algorithm for computing a variant of the compact Voronoi diagram; this improves the previously claimed expected running time.
\item An $O(n\log^2 n)$-time algorithm that finds a minimum consistent subset in two-colored point sets where one color class contains exactly one point; this improves the previous best known $O(n^2)$ running time which is due to Wilfong (SoCG 1991).
\item An $O(n)$-time algorithm for the consistent subset problem on collinear points; this improves the previous best known $O(n^2)$ running time.
\item A non-trivial $O(n^6)$-time dynamic programming algorithm for the consistent subset problem on points arranged on two parallel lines.
\end{enumerate}
To obtain these results, we combine tools from planar separators, paraboloid lifting, additively-weighted Voronoi diagrams with respect to convex distance functions, point location in farthest-point Voronoi diagrams, range trees, minimum covering of a circle with arcs, and several geometric transformations.
\end{abstract}
\section{Introduction}
One of the important problems in pattern recognition is to classify new objects according to the current objects using the nearest neighbor rule. Motivated by this problem, in 1968, Hart \cite{Hart1968} introduced the notion of {\em consistent subset} as follows. For a set $P$ of colored points\footnote{In some previous works the points have labels, as opposed to colors.} in the plane, a set $S\subseteq P$ is a consistent subset if for every point $p\in P\setminus S$, the closest point of $p$ in $S$ has the same color as $p$. The {\em consistent subset problem} asks for a consistent subset with minimum cardinality.
Formally, we are given a set $P$ of $n$ points in the plane that is partitioned into ${P_1,\dots,P_k}$,
with $k \geqslant 2$, and the goal is to find a smallest set $S\subseteq P$ such that for every $i\in\{1,\dots,k\}$ it holds that if $p\in P_i$ then the nearest neighbor of $p$ in $S$ belongs to $P_i$. It is implied by the definition that $S$ should contain at least one point from every $P_i$.
To keep the terminology consistent with some recent works on this problem, we will be dealing with colored points instead of partitions; that is, we assume that the points of $P_i$ are colored $i$. Following this terminology, the consistent subset problem asks for a smallest subset $S$ of $P$ such that the color of every point $p\in P\setminus S$ is the same as the color of its closest point in $S$. The notion of consistent subset has a close relation with Voronoi diagrams, a well-known structure in computational geometry. Consider the Voronoi diagram of a subset $S$ of $P$. Then, $S$ is a consistent subset of $P$ if and only if for every point $s\in S$ it holds that the points of $P$ that lie in the Voronoi cell of $s$ have the same color as $s$; see Figure~\ref{Voronoi-fig}(a).
Since the initial presentation of this problem in 1968, there has not been significant progress from the algorithmic point of view. Although there were several attempts at developing algorithms, they either did not guarantee optimality \cite{Gates1972, Hart1968, Wilfong1992} or had exponential running time \cite{Ritter1975}.
In SoCG 1991, Wilfong \cite{Wilfong1992} proved that the consistent subset problem is NP-complete if the input points are colored by at least three colors---the proof is based on the NP-completeness of the disc cover problem \cite{Masuyama1981}.
He further presented a technically-involved $O(n^2)$-time algorithm for a special case of two-colored input points where one point is red and all other points are blue; his elegant algorithm transforms the consistent subset problem to the problem of covering points with disks which in turn is transformed to the problem of covering a circle with arcs.
It has been recently proved, by Khodamoradi {et~al. } \cite{Khodamoradi2018}, that the consistent subset problem with two colors is also NP-complete---the proof is by a reduction from the planar rectilinear monotone 3-SAT \cite{Berg2012}. Observe that the one color version of the problem is trivial because every single point is a consistent subset. More recently, Banerjee {et~al. } \cite{Banerjee2018} showed that the consistent subset problem on collinear points, i.e., points that lie on a straight line, can be solved optimally in $O(n^2)$ time.
Recently, Gottlieb~{et~al. } \cite{Gottlieb2018} studied a two-colored version of the consistent subset problem --- referred to as the nearest
neighbor condensing problem --- where the points come from a metric space. They prove a lower bound for the hardness of approximating a minimum consistent subset; this lower bound includes two parameters: the doubling dimension of the space and the ratio of the minimum distance between points of opposite colors to the diameter of the point set. Moreover, for this two-colored version of the problem, they give an approximation algorithm whose ratio almost matches the lower bound.
In a related problem, which is called the selective subset problem, the goal is to find the smallest subset $S$ of $P$ such that for every $p\in P_i$ the nearest neighbor of $p$ in $S\cup (P\setminus P_i)$ belongs to $P_i$. Wilfong \cite{Wilfong1992} showed that this problem is also NP-complete even with two colors. See \cite{Banerjee2018} for some recent progress on this problem.
In this paper we study the consistent subset problem. We improve some previous results and present some new results. To obtain these results, we combine tools from planar separators, additively-weighted Voronoi diagrams with respect to a convex distance function, point location in farthest-point Voronoi diagrams, range trees, paraboloid lifting, minimum covering of a circle with arcs, and several geometric transformations. We present the first subexponential-time algorithm for this problem. We also present an $O(n\log n)$-time algorithm that finds a consistent subset of size two in two-colored point sets (if such a subset exists); this is obtained by transforming the consistent subset problem into a point-cone incidence problem in dimension three.
Towards our proof of this running time we present a deterministic $O(n \log n)$-time algorithm for computing a variant of the compact Voronoi diagram; this improves the $O(n \log n)$ expected running time of the randomized algorithm of Bhattacharya {et~al. } \cite{Bhattacharya2010}. We also revisit the case where one point is red and all other points are blue; we give an $O(n\log^2 n)$-time algorithm for this case, thereby improving the previous $O(n^2)$ running time of \cite{Wilfong1992}.
For collinear points, we present an $O(n)$-time algorithm; this improves the previous running time by a factor of $\Theta(n)$. We also present a non-trivial $O(n^6)$-time dynamic programming algorithm for points arranged on two parallel lines.
\section{A Subexponential Algorithm}
The consistent subset problem can easily be solved in exponential time by simply checking all possible subsets of $P$. In this section we present the first subexponential-time algorithm for this problem. We consider the decision version of this problem in which we are given a set $P$ of $n$ colored points in the plane and an integer $k$, and we want to decide whether or not $P$ has a consistent subset of size $k$. Moreover, if the answer is positive, then we want to find such a subset. This problem can be solved in time $n^{O(k)}$ by checking all possible subsets of size $k$. We show how to solve this problem in time $n^{O(\sqrt{k})}$; we use a recursive separator-based technique that was introduced in 1993 by Hwang {et~al. } \cite{Hwang1993} for the Euclidean $k$-center problem, and then extended by Marx and Pilipczuk \cite{Marx2015} for planar facility location problems. Although this technique was known before, its application in our setting is not straightforward and requires technical details, which we give in this section.
Consider an optimal solution $S$ of size $k$. The Voronoi diagram of $S$, say $\mathcal{V}$, is a partition of the plane into convex regions. We want to convert $\mathcal{V}$ to a 2-connected 3-regular planar graph that has a balanced curve separator. Then we want to use this separator to split the problem into two subproblems that can be solved independently. To that end, first we introduce a small perturbation
\begin{wrapfigure}{r}{1.8in}
\centering
\includegraphics[width=1.75in]{fig/Vor.pdf}
\end{wrapfigure}
\noindent to the coordinates of points of $P$ to ensure that no four points lie on the boundary of a circle; this ensures that every vertex of $\mathcal{V}$ has degree 3. The Voronoi diagram $\mathcal{V}$ consists of finite segments and infinite rays. We want $\mathcal{V}$ to have at most three infinite rays. To achieve this, we introduce three new points $v_1, v_2,v_3$ that lie on the vertices of a sufficiently large equilateral triangle\footnote{The triangle is large in the sense that for every point $p\in P$, the closest point to $p$ among $P\cup\{v_1,v_2,v_3\}$ is in $P$.} that contains $P$, and then we color them by three new colors; see the right figure. Since these three points have distinct colors, they appear in any consistent subset of $P\cup\{v_1,v_2,v_3\}$. Moreover, since they are far from the original points, by adding them to any consistent subset of $P$ we obtain a valid consistent subset for $P\cup\{v_1,v_2,v_3\}$. Conversely, by removing these three points from any consistent subset of $P\cup\{v_1,v_2,v_3\}$ we obtain a valid consistent subset for $P$.
Therefore, in the rest of our description we assume, without loss of generality, that $P$ contains $v_1,v_2,v_3$.
Consequently, the optimal solution $S$ also contains those three points; this implies that $\mathcal{V}$ has three infinite rays which are introduced by $v_1, v_2,v_3$ (see the above figure). We introduce a new vertex at infinity and connect these three rays to that vertex. In this way we obtain a 2-connected 3-regular planar graph, namely $\mathcal{G}$. Marx and Pilipczuk \cite{Marx2015} showed that such a graph has a polygonal separator $\delta$ of size $O(\sqrt{k})$ (going through $O(\sqrt{k})$ faces and vertices) that is {\em face balanced}, in the sense that there are at most $2k/3$ faces of $\mathcal{G}$ strictly inside $\delta$ and at most $2k/3$ faces of $\mathcal{G}$ strictly outside $\delta$. The vertices of $\delta$ alternate between points of $S$ and the vertices of $\mathcal{G}$ as depicted in Figure~\ref{Voronoi-fig}(a). See \cite{Miller1986} for an alternate way of computing a balanced curve separator.
\begin{figure}
\caption{(a) A solution $S$ (bold points), together with its Voronoi diagram $\mathcal{V}$, and a balanced curve separator $\delta$. (b) A subproblem with input domain $D$ (shaded region) and a set $S'$ (bold points) that is part of the solution.}
\label{Voronoi-fig}
\end{figure}
We are going to use dynamic programming based on balanced curve separators of $\mathcal{G}$. The main idea is to use $\delta$ to split the problem into two smaller subproblems, one inside $\delta$ and one outside $\delta$, and then solve each subproblem recursively. However, we do not know $\mathcal{G}$, and hence we have no way of computing $\delta$ directly. Instead, we can guess $\delta$ by trying all possible balanced curve separators of size $k'=O(\sqrt{k})$.
Every vertex of $\delta$ is either a point of $P$ or a vertex of $\mathcal{G}$ (and consequently a vertex of $\mathcal{V}$) that is introduced by three points of $P$. Therefore, every curve separator of size $k'$ is defined by at most $3k'$ points of $P$, and thus, the number of such separators is at most ${n\choose 3k'}\leqslant n^{3k'}=n^{O(\sqrt{k})}$. To find these curve separators, we try every subset of at most $3k'$ points of $P$. For every such subset we compute its Voronoi diagram, which has at most $6k'$ vertices. For the set that is the union of the $3k'$ points and the $6k'$ vertices, we check all $2^{(6k'+3k')}$ subsets and choose every subset that forms a balanced curve separator (that alternates between points and vertices). Therefore, in a time proportional to $n^{3k'}\cdot 2^{9k'}=n^{O(\sqrt{k})}$ we can compute all balanced curve separators.
By trying all balanced curve separators, we may assume that we have correctly guessed $\delta$ and the subset $S'$ of $P$, with $|S'|\leqslant 3k'$, that defines $\delta$. The solution of our main problem consists of $S'$ and the solutions of the two separate subproblems, one inside $\delta$ and one outside $\delta$. When solving these two subproblems recursively, in later steps we get subproblems of the following form. Throughout our description, we will assume that $P$ is fixed for all subproblems. The input of every subproblem consists of a positive integer $x$ $(\leqslant k)$, a subset $S'$ of $y$ $(\leqslant k)$ points of $P$ that are already chosen to be in the solution, and a polygonal domain $D$---possibly with holes---of size $\Theta(y)$, which is a polygon whose vertices alternate between the points of $S'$ and the vertices of the Voronoi diagram of $S'$. The task is to select a subset $S\subseteq (P\cap D)\setminus S'$ of size $x$ such that:
\begin{enumerate}[$(i)$]
\item $D$ is a polygon whose vertices alternate between the points of $S'$ and the vertices of the Voronoi diagram of $S\cup S'$, and
\item $S\cup S'$ is a consistent subset for $(P\cap D)\cup S'$.
\end{enumerate}
See Figure~\ref{Voronoi-fig}(b) for an illustration of such a subproblem.
The top-level subproblem has $x=k$ and $y=0$. We stop the recursive calls as soon as we reach a subproblem with $x=O(\sqrt{k})$, in which case, we spend $O(n^x)$ time to solve this subproblem; this is done by trying all subsets of $(P\cap D)\setminus S'$ that have size $x$. For every subproblem, the number of points in $S'$ (i.e., $y$) is at most three times the number of vertices on the boundary of the domain $D$. The number of vertices on the boundary of $D$---that are accumulated during recursive calls---is at most
$$\sqrt{k}+\sqrt{\frac{2}{3}k}+\sqrt{\left(\frac{2}{3}\right)^2 k}+\sqrt{\left(\frac{2}{3}\right)^3 k}+...=O(\sqrt{k}).$$
Therefore, $y=|S'|=O(\sqrt{k})$, and thus the Voronoi diagram of $S\cup S'$ has a balanced curve separator of size $O(\sqrt{x+y})=O(\sqrt{k})$.\footnote{In fact the 2-connected 3-regular planar graph obtained from the Voronoi diagram of $S\cup S'$ has such a separator.} We try all possible $n^{O(\sqrt{k})}$ such separators, and for each of which we recursively solve the two subproblems in its interior and exterior. For these two subproblems to be really independent we include the $O(\sqrt{k})$ points, defining the separator, in the inputs of both subproblems. Therefore, the running time of our algorithm can be interpreted by the following recursion
\[
T(n,k) \leqslant n^{O(\sqrt{k})}
\cdot \max \bigl\{ T(n,k_1+y)+T(n,k_2+y) \mid k_1+k_2+y=k,~ k_1,k_2\leqslant 2k/3,~y=O(\sqrt{k}) \bigr\},
\]
which solves to $T(n,k) \leqslant n^{O(\sqrt{k})}$. Notice that our algorithm solves the decision version of the consistent subset problem for a fixed $k$.
To compute the consistent subset of minimum cardinality,
whose size, say $k$, is unknown at the start of the algorithm,
we apply the following standard technique:
Start with a constant value $\kappa$, for example $\kappa = 1$.
Run the decision algorithm with the value $\kappa$. If the
answer is negative, then double the value of $\kappa$ and repeat this
process until the first time the decision algorithm gives a positive
answer.
Consider the last value for $\kappa$. Note that $\kappa/2 < k \leqslant \kappa$.
We perform a binary search for $k$ in the interval $[\kappa/2,\kappa]$.
In this way, we find the value of $k$, as well as the consistent subset
of minimum cardinality, by running the decision algorithm $O(\log \kappa)$
times. Thus, the total running time is
$n^{O(\sqrt{\kappa})} \cdot O(\log \kappa)$, which is $n^{O(\sqrt{k})}$.
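For illustration only, the following Python sketch shows this doubling-plus-binary-search wrapper; the function \texttt{has\_consistent\_subset} is merely a placeholder for the decision algorithm described above.
\begin{verbatim}
def smallest_k(points, has_consistent_subset):
    # Exponential (doubling) search followed by a binary search for the
    # smallest k such that the decision procedure accepts (points, k).
    kappa = 1
    while not has_consistent_subset(points, kappa):   # doubling phase
        kappa *= 2
    lo, hi = kappa // 2 + 1, kappa                    # now kappa/2 < k <= kappa
    while lo < hi:                                    # binary search phase
        mid = (lo + hi) // 2
        if has_consistent_subset(points, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
\end{verbatim}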
We have proved the following theorem.
\begin{theorem}
A minimum consistent subset of $n$ colored points in the plane can be computed in $n^{O(\sqrt{k})}$ time, where $k$ is the size of the minimum consistent subset.
\end{theorem}
\section{Consistent Subset of Size Two} In this section we investigate the existence of a consistent subset of size two in a set of bichromatic points where every point is colored by one of the two colors, say red and blue. Before stating the problem formally we introduce some terminology. For a set $P$ of points in the plane, we denote the convex hull of $P$ by $\CH{P}$. For two points $p$ and $q$ in the plane, we denote the straight-line segment between $p$ and $q$ by $pq$, and the perpendicular bisector of $pq$ by $\bisector{p}{q}$.
Let $R$ and $B$ be two disjoint sets of total $n$ points in the plane such that the points of $R$ are colored red and the points of $B$ are colored blue. We want to decide whether or not $R\cup B$ has a consistent subset of size two. Moreover, if the answer is positive, then we want to find such points, i.e., a red point $r\in R$ and a blue point $b\in B$ such that all red points are closer to $r$ than to $b$, and all blue points are closer to $b$ than to $r$. Alternatively, we want to find a pair of points $(r,b)\in R\times B$ such that $\bisector{r}{b}$ separates $\CH{R}$ and $\CH{B}$. This problem can be solved in $O(n^2\log n)$ time by trying all the $O(n^2)$ pairs $(r,b)\in R\times B$; for each pair $(r,b)$ we can verify, in $O(\log n)$ time, whether or not $\bisector{r}{b}$ separates $\CH{R}$ and $\CH{B}$. In this section we show how to solve this problem in time $O(n\log n)$. To that end, we assume that $\CH{R}$ and $\CH{B}$ are disjoint, because otherwise there is no such pair $(r,b)$.
\begin{wrapfigure}{r}{1.3in}
\centering
\includegraphics[width=1.25in]{fig/prune.pdf}
\end{wrapfigure} It might be tempting to believe that a solution of this problem contains points only from the boundaries of $\CH{R}$ and $\CH{B}$. However, this is not necessarily the case; in the figure to the right, the only solution of this problem contains $r$ and $b$ which are in the interiors of $\CH{R}$ and $\CH{B}$. Also, due to the close relation between Voronoi diagrams and Delaunay triangulations, one may believe that a solution is defined by the two endpoints of an edge in the Delaunay triangulation of $R\cup B$. This is not necessarily the case either; the green edges in the figure to the right, which are the Delaunay edges between $R$ and $B$, do not introduce any solution.
A {\em separating common tangent} of two disjoint convex polygons, $P_1$ and $P_2$, is a line $\ell$ that is tangent to both $P_1$ and $P_2$ such that $P_1$ and $P_2$ lie on different sides of $\ell$. Every two disjoint convex polygons have two separating common tangents; see Figure~\ref{tangents-fig}. Let $\ell_1$ and $\ell_2$ be the separating common tangents of $\CH{R}$ and $\CH{B}$. Let $R'$ and $B'$ be the subsets of $R$ and $B$ on the boundaries of $\CH{R}$ and $\CH{B}$, respectively, that are between $\ell_1$ and $\ell_2$ as depicted in Figure~\ref{tangents-fig}. For two points $p$ and $q$ in the plane, let $D(p,q)$ be the closed disk that is centered at $p$ and has $q$ on its boundary.
\begin{lemma}
\label{inclusion-exclusion-lemma}
For every two points $r\in R$ and $b\in B$, the bisector $\bisector{r}{b}$ separates $R$ and $B$ if and only if
\begin{enumerate}[$(i)$]
\item $\forall r'\in R':~~~ b\notin D(r',r)$, and
\item $\forall b'\in B':~~~ b\in D(b',r)$.
\end{enumerate} \end{lemma} \begin{proof}
For the direct implication, since $\bisector{r}{b}$ separates $R$ and $B$, every red point $r'$ (and in particular every point in $R'$) is closer to $r$ than to $b$; this implies that $D(r',r)$ does not contain $b$ and thus (i) holds. Also, every blue point $b'$ (and in particular every point in $B'$) is closer to $b$ than to $r$; this implies that $D(b',r)$ contains $b$ and thus (ii) holds. See Figure~\ref{tangents-fig}.
Now we prove the converse implication by contradiction. Assume that both (i) and (ii) hold for some $r\in R$ and some $b\in B$, but the bisector $\bisector{r}{b}$ does not separate $R$ and $B$. After a suitable rotation we may assume that $\bisector{r}{b}$ is vertical, $r$ is to the left side of $\bisector{r}{b}$ and $b$ is to the right side of $\bisector{r}{b}$. Since $\bisector{r}{b}$ does not separate $R$ and $B$, there exists either a point of $R$ to the right side of $\bisector{r}{b}$, or a point of $B$ to the left side of $\bisector{r}{b}$. If there is a point of $R$ to the right side of $\bisector{r}{b}$ then there is also a point $r'\in R'$ to the right side of $\bisector{r}{b}$. In this case $r'$ is closer to $b$ than to $r$, and thus the disk $D(r',r)$ contains $b$ which contradicts (i). If there is a point of $B$ to the left side of $\bisector{r}{b}$ then there is also a point $b'\in B'$ to the left side of $\bisector{r}{b}$. In this case $b'$ is closer to $r$ than to $b$ and thus the disk $D(b',r)$ does not contain $b$ which contradicts (ii). \end{proof}
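For concreteness, the following Python sketch (not part of the algorithm itself) tests conditions (i) and (ii) of Lemma~\ref{inclusion-exclusion-lemma} for a given pair $(r,b)$; it assumes that $R'$ and $B'$ have already been extracted, represents points as coordinate pairs, and ignores issues of exact arithmetic.
\begin{verbatim}
from math import dist  # Euclidean distance (Python 3.8+)

def bisector_separates(r, b, R_prime, B_prime):
    # (i): b must lie outside every closed disk D(r', r), i.e. |r'b| > |r'r|.
    if any(dist(rp, b) <= dist(rp, r) for rp in R_prime):
        return False
    # (ii): b must lie inside every closed disk D(b', r), i.e. |b'b| <= |b'r|.
    return all(dist(bp, b) <= dist(bp, r) for bp in B_prime)
\end{verbatim}
By Lemma~\ref{inclusion-exclusion-lemma}, this test succeeds exactly when $\bisector{r}{b}$ separates $R$ and $B$, so it can be used to verify a candidate pair in $O(|R'|+|B'|)$ time.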
\begin{figure}
\caption{The lines $\ell_1$ and $\ell_2$ are the separating common tangents of $\CH{R}$ and $\CH{B}$. $R'=\{r'_1,r'_2,r'_3\}$ and $B'=\{b'_1,b'_2,b'_3,b'_4\}$ are the subsets of $R$ and $B$ on boundaries of $\CH{R}$ and $\CH{B}$ that lie between $\ell_1$ and $\ell_2$. The feasible region $\mathcal{F}_r$ for point $r$ is shaded.}
\label{tangents-fig}
\end{figure}
Lemma~\ref{inclusion-exclusion-lemma} implies that for a pair $(r,b)\in R\times B$ to be a consistent subset of $R\cup B$ it is necessary and sufficient that every point of $R'$ is closer to $r$ than to $b$, and every point of $B'$ is closer to $b$ than to $r$. This lemma does not imply that $r$ and $b$ are necessarily in $R'$ and $B'$. Observe that Lemma~\ref{inclusion-exclusion-lemma} holds even if we swap the roles of $r, r', R'$ with $b,b',B'$ in (i) and (ii). Also, observe that this lemma holds even if we take $R'$ and $B'$ as all red and blue points on boundaries of $\CH{R}$ and $\CH{B}$.
For every red point $r\in R$ we define a {\em feasible region} $\mathcal{F}_r$ as follows: \[ \mathcal{F}_r ~=~ \left(\bigcap_{b'\in B'} D(b',r)\right) \setminus\left( \bigcup_{r'\in R'} D(r',r) \right). \] See Figure~\ref{tangents-fig} for an illustration of a feasible region. Lemma~\ref{inclusion-exclusion-lemma}, together with this definition, implies the following corollary. \begin{corollary}
\label{feasible-cor}
For every two points $r\in R$ and $b\in B$, the bisector $\bisector{r}{b}$ separates $R$ and $B$ if and only if $b\in \mathcal{F}_r$. \end{corollary}
Based on this corollary, our original decision problem reduces to the following question. \begin{question}
\label{q1}
Is there a blue point $b\in B$ such that $b$ lies in the feasible region $\mathcal{F}_r$ of some red point $r\in R$? \end{question} If the answer to Question~\ref{q1} is positive then $\{r,b\}$ is a consistent subset for $R\cup B$, and if the answer is negative then $R\cup B$ does not have a consistent subset with two points. In the rest of this section we show how to answer Question~\ref{q1}. To that end, we lift the plane onto the paraboloid $z=x^2+y^2$ by projecting every point $s=(x,y)$ in $\mathbb{R}^2$ onto the point $\hat s=(x,y, x^2+y^2)$ in $\mathbb{R}^3$. This lift projects a circle in $\mathbb{R}^2$ onto a plane in $\mathbb{R}^3$. Consider a disk $D(p,q)$ in $\mathbb{R}^2$ and let $\pi(p,q)$ be the plane in $\mathbb{R}^3$ that contains the projection of the boundary circle of $D(p,q)$. Let $H^-(p,q)$ be the lower closed halfspace defined by $\pi(p,q)$, and let $H^+(p,q)$ be the upper open halfspace defined by $\pi(p,q)$. For every point $s\in \mathbb{R}^2$, its projection $\hat s$ lies in $H^-(p,q)$ if and only if $s\in D(p,q)$, and lies in $H^+(p,q)$ otherwise. Moreover, $\hat s$ lies in $\pi(p,q)$ if and only if $s$ is on the boundary circle of $D(p,q)$. For every point $r\in R$ we define a polytope $\mathcal{C}_r$ in $\mathbb{R}^3$ as follow \[ \mathcal{C}_r ~=~ \left(\bigcap_{b'\in B'} H^-(b',r)\right) \cap \left( \bigcap_{r'\in R'} H^+(r',r) \right). \]
Based on the above discussion, Corollary~\ref{feasible-cor} can be translated to the following corollary.
\begin{corollary}
\label{polytope-cor}
For every two points $r\in R$ and $b\in B$, the bisector $\bisector{r}{b}$ separates $R$ and $B$ if and only if $\hat b\in \mathcal{C}_r$. \end{corollary}
This corollary, in turn, translates Question~\ref{q1} to the following question. \begin{question}
\label{q2}
Is there a blue point $b\in B$ such that its projection $\hat b$ lies in the polytope $\mathcal{C}_r$ for some red point $r\in R$? \end{question}
Now, we are going to answer Question~\ref{q2}. The polytope $\mathcal{C}_r$ is the intersection of some halfspaces, each of which has $\hat r$ on its boundary plane. Therefore, $\mathcal{C}_r$ is a cone in $\mathbb{R}^3$ with apex $\hat r$; see Figure~\ref{cones-fig}. Recall that $|R\cup B|=n$; however, for the purposes of worst-case running-time analysis and to simplify indexing, we will index the red points, and also the blue points, from 1 to $n$. Let $r_1,r_2,\dots, r_n$ be the points of $R$. For every point $r_i\in R$, let $\tau_i$ be the translation that brings $\hat r_1$ to $\hat r_i$. Notice that $\tau_1$ is the identity transformation. In the rest of this section we will write $\mathcal{C}_{i}$ for $\mathcal{C}_{r_i}$.
\begin{lemma}
\label{translation-lemma}
For every point $r_i\in R$, the cone $\mathcal{C}_{i}$ is the translation of $\mathcal{C}_{1}$ with respect to $\tau_i$. \end{lemma} \begin{wrapfigure}{r}{2.2in}
\centering
\includegraphics[width=2.1in]{fig/translation.pdf}
\end{wrapfigure} \noindent{\em Proof.} For a circle $C$ in $\mathbb{R}^2$, let $\pi_C$ denote the plane in $\mathbb{R}^3$ that $C$ translates onto. For every two concentric circles $C_1$ and $C_i$ in $\mathbb{R}^2$ it holds that $\pi_{C_1}$ and $\pi_{C_i}$ are parallel; see the figure to the right. It follows that, if $C_1$ passes through the point $r_1$, and $C_i$ passes through the point $r_i$, then $\pi_{C_i}$ is obtained from $\pi_{C_1}$ by the translation $\tau_i$ that brings $\hat r_1$ to $\hat r_i$, that is $\tau_i(\pi_{C_1})=\pi_{C_i}$. A similar argument holds also for the halfspaces defined by $\pi_{C_1}$ and $\pi_{C_i}$. Since for every $a\in R'\cup B'$ the disks $D(a,r_1)$ and $D(a,r_i)$ are concentric and the boundary of $D(a,r_1)$ passes through $r_1$ and the boundary of $D(a,r_i)$ passes through $r_i$, it follows that $\tau_i(H^+(a,r_1))=H^+(a,r_i)$ and $\tau_i(H^-(a,r_1))=H^-(a,r_i)$. Since a translation of a polytope is obtained by translating each of the halfspaces defining it, we have $\tau_i(\mathcal{C}_{1})= \mathcal{C}_{i}$ as depicted in Figure~\ref{cones-fig}. \qed
\begin{figure}
\caption{The cones $\mathcal{C}_{2}$ and $\mathcal{C}_{3}$ are the translations of $\mathcal{C}_{1}$ with respect to $\tau_2$ and $\tau_3$.}
\label{cones-fig}
\end{figure}
It follows from Lemma~\ref{translation-lemma} that to answer Question~\ref{q2} it suffices to solve the following problem: Given a cone $\mathcal{C}_{1}$ defined by $n$ halfspaces, $n$ translations of $\mathcal{C}_{1}$, and a set of $n$ points, we want to decide whether or not there is a point in some cone (see Figure~\ref{cones-fig}). This can be verified in $O(n\log n)$ time, using Theorem~\ref{point-cone-thr} that we will prove later in Section~\ref{point-cone-section}. This is the end of our constructive proof. The following theorem summarizes our result in this section.
\begin{theorem}
Given a set of $n$ bichromatic points in the plane, in $O(n \log n)$ time, we can compute a consistent subset of size two $($if such a set exists$)$. \end{theorem}
\section{One Red Point} In this section we revisit the consistent subset problem for the case where one input point is red and all other points are blue. Let $P$ be a set of $n$ points in the plane consisting of a red point and $n-1$ blue points. Observe that any consistent subset of $P$ contains the only red point and some blue points. In his seminal work in SoCG 1991, Wilfong \cite{Wilfong1992} showed that $P$ has a consistent subset of size at most seven (including the red point); this implies an $O(n^6)$-time brute force algorithm for this problem. Wilfong showed how to solve this problem in $O(n^2)$ time; his elegant algorithm transforms the consistent subset problem to the problem of covering points with disks which in turn is transformed to the problem of covering a circle with arcs. The running time of his algorithm is dominated by the transformation to the circle covering problem which involves the computation of $n-1$ arcs in $O(n^2)$ time; all other transformations together with the solution of the circle covering problem take $O(n\log n)$ time (\cite[Lemma 19 and Theorem 9]{Wilfong1992}).
We first introduce the circle covering problem, then we give a summary of Wilfong's transformation to this problem, and then we show how to perform this transformation in $O(n \log^2 n)$ time, which implies the same running time for the entire algorithm. We emphasize that the most involved part of the algorithm, which is the correctness proof of this transformation, is due to Wilfong.
\begin{figure}
\caption{(a) Transformation to the circle covering problem. (b) The range tree $T$ on blue points.}
\label{one-red-fig}
\end{figure}
Let $C$ be a circle and let $\mathcal{A}$ be a set of arcs covering the entire $C$. The {\em circle covering} problem asks for a subset of $\mathcal{A}$, with minimum cardinality, that covers the entire $C$.
Wilfong's algorithm starts by mapping input points to the projective plane, and then transforming (in two stages) the consistent subset problem to the circle covering problem. Let $P$ denote the set of points after the mapping, and let $r$ denote the only red point of $P$. The transformation, which is depicted in Figure~\ref{one-red-fig}(a), proceeds as follows. Let $C$ be a circle centered at $r$ that does not contain any blue point. Let $b_1,b_2,\dots,b_{n-1}$ be the blue points in clockwise circular order around $r$ ($b_1$ is the first clockwise point after $b_{n-1}$, and $b_{n-1}$ is the first counterclockwise point after $b_1$). For each point $b_i$, let $D(b_i)$ be the disk of radius $|rb_i|$ centered at $b_i$. Define $cc(b_i)$ to be the first counterclockwise point (measured from $b_i$) that is not in $D(b_i)$, and similarly define $c(b_i)$ to be the first clockwise point that is not in $D(b_i)$. Denote by $A(b_i)$ the open arc of $C$ that is contained in the wedge with counterclockwise boundary ray from $r$ to $cc(b_i)$ and the clockwise boundary ray from $r$ to $c(b_i)$.\footnote{Wilfong shrinks the endpoint of $A(b_i)$ that corresponds to $cc(b_i)$ by half the clockwise angle from $cc(b_i)$ to the next point, and shrinks the endpoint of $A(b_i)$ that corresponds to $c(b_i)$ by half the counterclockwise angle from $c(b_i)$ to the previous point.} Let $\mathcal{A}$ be the set of all arcs $A(b_i)$; since blue points are assumed to be in circular order, $\mathcal{A}$ covers the entire $C$. Wilfong proved that our instance of the consistent subset problem is equivalent to the problem of covering $C$ with $\mathcal{A}$. The running time of his algorithm is dominated by the computation of $\mathcal{A}$ in $O(n^2)$ time. We show how to compute $\mathcal{A}$ in $O(n\log^2 n)$ time.
In order to find each arc $A(b_i)$ it suffices to find the points $cc(b_i)$ and $c(b_i)$. Having the clockwise ordering of points around $r$, one can find these points in $O(n)$ time for each $b_i$, and consequently in $O(n^2)$ time for all $b_i$'s. In the rest of this section we show how to find $c(b_i)$ for all $b_i$'s in $O(n\log^2 n)$ time; the points $cc(b_i)$ can be found in a similar fashion.
By the definition of $c(b_i)$ all points of the sequence $b_{i+1}, \dots,c(b_i)$, except $c(b_i)$, lie inside $D(b_i)$. Therefore, among all points $b_{i+1},\dots, c(b_i)$, the point $c(b_i)$ is the farthest from $b_i$. This implies that in the farthest-point Voronoi diagram of $b_{i+1},\dots,c(b_i)$, the point $b_i$ lies in the cell of $c(b_i)$. To exploit this property of $c(b_i)$, we construct a 1-dimensional range tree $T$ on all blue points based on their clockwise order around $r$; blue points are stored at the leaves of $T$ as in Figure~\ref{one-red-fig}(b). At every internal node $\nu$ of $T$ we store the farthest-point Voronoi diagram of the blue points that are stored at the leaves of the subtree rooted at $\nu$; we refer to this diagram by FVD($\nu$). This data structure can be computed in $O(n\log^2 n)$ time because $T$ has $O(\log n)$ levels and in each level we compute farthest-point Voronoi diagrams of $n-1$ points in total. To simplify the following description, we assume for the moment that $b_1,\dots, b_{n-1}$ is a linear order. At the end of this section, in Remark 1, we show how to deal with the circular order.
We use the above data structure to find each point $c(b_i)$ in $O(\log^2 n)$ time. To that end, we walk up the tree from the leaf containing $b_i$ (first phase), and then walk down the tree (second phase) as described below; also see Figure~\ref{one-red-fig}(b). For every internal node $\nu$, let $\nu(L)$ and $\nu(R)$ denote its left and right children, respectively. In the first phase, for every internal node $\nu$ in the walk, we locate the point $b_i$ in FVD($\nu(R)$) and find the point $b_f$ that is farthest from $b_i$. If $b_f$ lies in $D(b_i)$ then so does every point stored in the subtree of $\nu(R)$. In this case we continue walking up the tree and repeat the above point location process until we find, for the first time, the node $\nu^*$ for which $b_f$ does not lie in $D(b_i)$. At this point we know that $c(b_i)$ is among the points stored at $\nu^*(R)$. Now we start the second phase and walk down the tree from $\nu^*(R)$. For every internal node $\nu$ in this walk, we locate $b_i$ in FVD($\nu(L)$) and find the point $b_f$ that is farthest from $b_i$. If $b_f$ lies in $D(b_i)$, then so does every point stored at $\nu(L)$, and hence we go to $\nu(R)$; otherwise we go to $\nu(L)$. At the end of this phase we end up at a leaf of $T$, which stores $c(b_i)$. The entire walk has $O(\log n)$ nodes and at every node we spend $O(\log n)$ time for locating $b_i$. Thus the time to find $c(b_i)$ is $O(\log^2 n)$. Therefore, we can find all $c(b_i)$'s in $O(n\log^2 n)$ total time.
\begin{theorem}
A minimum consistent subset of $n$ points in the plane, where one point is red and all other points are blue, can be computed in $O(n \log^2 n)$ time. \end{theorem}
{\noindent \bf Remark 1.} To deal with the circular order $b_1,\dots, b_{n-1}$, we build the range tree $T$ with $2(n-1)$ leaves $b_1,\dots,b_{n-1},b_1,\dots, b_{n-1}$. For a given $b_{i}$, the point $c(b_i)$ can be any of the points $b_{i+1},\dots, b_{n-1},\allowbreak b_{1},\dots,b_{i-1}$. To find $c(b_i)$, we first follow the path from the root of $T$ to the leftmost leaf that stores $b_i$, and then from that leaf we start looking for $c(b_i)$ as described above.
\section{Restricted Point Sets} In this section we present polynomial-time algorithms for the consistent subset problem on three restricted classes of point sets. First we present an $O(n)$-time algorithm for collinear points; this improves the previous quadratic-time algorithm of Banerjee {et~al. } \cite{Banerjee2018}. Then we present an involved $O(n^6)$-time dynamic programming algorithm for points that are placed on two parallel lines. Finally we present an $O(n^4)$-time algorithm for two-colored points, namely red and blue, that are placed on two parallel lines such that all points on one line are red and all points on the other line are blue.
\subsection{Collinear Points} \label{collinear-section}
Let $P$ be a set of $n$ colored points on the $x$-axis, and let $p_1,\dots,p_n$ be the sequence of these points from left to right. We present a dynamic programming algorithm that solves the consistent subset problem on $P$. To simplify the description of our algorithm we add a point $p_{n+1}$ very far (at distance at least $|p_1p_n|$) to the right of $p_n$. We set the color of $p_{n+1}$ to be different from that of $p_{n}$. Observe that every solution for $P\cup \{p_{n+1}\}$ contains $p_{n+1}$. Moreover, by removing $p_{n+1}$ from any optimal solution of $P\cup \{p_{n+1}\}$ we obtain an optimal solution for $P$.
Therefore, to compute an optimal solution for $P$, we first compute an optimal solution for $P\cup \{p_{n+1}\}$ and then remove $p_{n+1}$.
Our algorithm maintains a table $T$ with $n+1$ entries $T(1),\dots, T(n+1)$. Each table entry $T(k)$ represents the number of points in a minimum consistent subset of $P_k=\{p_1,\dots,p_k\}$ provided that $p_k$ is in this subset. The number of points in an optimal solution for $P$ will be $T(n+1)-1$; the optimal solution itself can be recovered from $T$. In the rest of this section we show how to solve a subproblem with input $P_{k}$ provided that $p_k$ should be in the solution (thus, in the rest of this section, the phrase ``solution of $P_k$'' refers to a solution that contains $p_k$). In fact, we show how to compute $T(k)$ by a bottom-up dynamic programming algorithm that scans the points from left to right. If $P_k$ is monochromatic, then the optimal solution contains only $p_k$, and thus, we set $T(k)=1$. Hereafter assume that $P_k$ is not monochromatic. Consider the partition of $P_k$ into maximal blocks of consecutive points such that the points in each block have the same color. Let $B_1,B_2,\dots,B_{m-1},B_m$ denote these blocks from left to right, and notice that $p_k$ is in $B_m$. Assume that the points in $B_m$ are red and the points in $B_{m-1}$ are blue. Let $p_{y}$ be the leftmost point in $B_{m-1}$; see Figure~\ref{collinear-fig}(a). Any optimal solution for $P_k$ contains at least one point from $\{p_y,\dots, p_{k-1}\}$; let $p_i$ be the rightmost such point ($p_i$ can be either red or blue). Then, $T(k)=T(i)+1$. Since we do not know the index $i$, we try all possible values in $\{y,\dots, k-1\}$ and select one that produces a {\em valid} solution and minimizes $T(k)$: \begin{equation} \notag T(k)=\min\{T(i)+1 \mid {i\in\{y,\dots,k-1\} \text{ and $i$ produces a valid solution}}\}. \end{equation} The index $i$ produces a valid solution (or $p_i$ is valid) if one of the following conditions holds:
\begin{enumerate}[$(i)$]
\item $p_i$ is red, or
\item $p_i$ is blue, and for every $j\in \{i+1,\dots, k-1\}$ it holds that if $p_j$ is blue then $p_j$ is closer to $p_i$ than to $p_k$, and if $p_j$ is red then $p_j$ is closer to $p_k$ than to $p_i$. \end{enumerate}
If $(i)$ holds then $p_i$ and $p_k$ have the same color. In this case the validity of our solution for $P_k$ is ensured by the validity of the solution of $P_i$. If $(ii)$ holds then $p_i$ and $p_k$ have distinct colors. In this case the validity of our solution for $P_k$ depends on the colors of points $p_{i+1},\dots,p_{k-1}$. To verify the validity in this case, it suffices to check the colors of only two points, namely the two points that lie immediately to the left and immediately to the right of the midpoint of the segment $p_ip_k$. This can be done in $O(|B_{m-1}|)$ time for all blue points in $B_{m-1}$ while scanning them from left to right. Thus, $T(k)$ can be computed in $O(k)$ time because $|B_{m-1}|=O(k)$. Therefore, the total running time of the above algorithm is $O(n^2)$.
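To make the recurrence concrete, the following unoptimized sketch computes all values $T(\cdot)$ directly from the definition, checking condition $(ii)$ naively for every candidate index (so it runs in cubic time; the midpoint argument above brings this down to quadratic, and the refinement below to linear). The input is the left-to-right list of (coordinate, color) pairs, and the sentinel $p_{n+1}$ receives a brand-new color, which in particular differs from the color of $p_n$.
\begin{verbatim}
def min_consistent_collinear(points):
    # points: list of (x, color) pairs, sorted from left to right
    shift = points[-1][0] - points[0][0] + 1             # at least |p_1 p_n|
    pts = points + [(points[-1][0] + shift, object())]   # sentinel p_{n+1}
    n = len(pts)
    T = [0] * n
    for k in range(n):
        xk, ck = pts[k]
        if all(c == ck for _, c in pts[:k]):   # P_k is monochromatic
            T[k] = 1
            continue
        a = k
        while a > 0 and pts[a - 1][1] == ck:
            a -= 1                             # a: leftmost index of B_m
        cprev = pts[a - 1][1]                  # color of block B_{m-1}
        y = a - 1
        while y > 0 and pts[y - 1][1] == cprev:
            y -= 1                             # y: leftmost index of B_{m-1}
        best = float('inf')
        for i in range(y, k):                  # candidate rightmost point p_i
            xi, ci = pts[i]
            if ci == ck:                       # condition (i)
                valid = True
            else:                              # condition (ii), checked naively
                valid = all(
                    abs(xj - xi) < abs(xj - xk) if cj == ci
                    else abs(xj - xk) < abs(xj - xi)
                    for xj, cj in pts[i + 1:k])
            if valid:
                best = min(best, T[i] + 1)
        T[k] = best
    return T[-1] - 1   # discard the sentinel from the optimal count
\end{verbatim}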
\begin{figure}
\caption{(a) Illustration of the computation of $T(k)$ from $T(i)$. (b) Any blue point in the range $[l,r]$ is valid.}
\label{collinear-fig}
\end{figure}
We are now going to show how to compute $T(k)$ in constant time, which in turn improves the total running time to $O(n)$. To that end we first prove the following lemma.
\begin{lemma}
\label{diff-lemma}
Let $s\in \{1,\dots,m\}$ be an integer, $p_i,p_{i+1},\dots,p_j$ be a sequence of points in $B_s$, and $x\in\{i,\dots,j\}$ be an index for which $T(x)$ is minimum. Then, $T(j)\leqslant T(x)+1$.
\end{lemma} \begin{proof}
To verify this inequality, observe that by adding $p_j$ to the optimal solution of $P_x$ we obtain a valid solution (of size $T(x)+1$) for $P_j$. Therefore, any optimal solution of $P_j$ has at most $T(x)+1$ points, and thus $T(j)\leqslant T(x)+1$.
\end{proof}
At every point $p_j$, in every block $B_s$, we store the index $i$ of the first point $p_i$ to the left of $p_j$ where $p_i\in B_s$ and $T(i)$ is strictly smaller than $T(j)$; if there is no such point $p_i$ then we store $j$ at $p_j$. These indices can be maintained in linear time while scanning the points from left to right. We use these indices to compute $T(k)$ in constant time as described below.
Notice that if the minimum, in the above calculation of $T(k)$, is obtained by a red point in $B_m$ then it always produces a valid solution, but if the minimum is obtained by a blue point then we need to verify its validity. In the former case, it follows from Lemma~\ref{diff-lemma} that the smallest $T(\cdot)$ for red points in $B_m\setminus\{p_k\}$ is obtained either by $p_{k-1}$ or by the point whose index is stored at $p_{k-1}$. Therefore we can find the smallest $T(\cdot)$ in constant time. Now consider the latter case where the minimum is obtained by a blue point in $B_{m-1}$. Let $p_a$ be the rightmost point of $B_{m-1}$, and let $p_b$ be the leftmost endpoint of $B_m$. Set $d_1=|p_bp_k|$ and $d_2=|p_ap_k|$ as depicted in Figure~\ref{collinear-fig}(b). Set $l=x(p_a)-d_2$ and $r=x(p_b)-d_1$, where $x(p_a)$ and $x(p_b)$ are the $x$-coordinates of $p_a$ and $p_b$. Any point $p_i\in B_{m-1}$ that is to the right of $r$ is invalid because otherwise $p_b$ would be closer to $p_i$ than to $p_k$. Any point $p_i\in B_{m-1}$ that is to the left of $l$ is also invalid because otherwise $p_a$ would be closer to $p_k$ than to $p_i$. However, every point $p_i\in B_{m-1}$, that is in the range $[l,r]$, is valid because it satisfies condition $(ii)$ above. Thus, to compute $T(k)$ it suffices to find a point of $B_{m-1}$ in range $[l,r]$ with the smallest $T(\cdot)$. By slightly abusing notation, let $p_r$ be the rightmost point of $B_{m-1}$ in range $[l,r]$. It follows from Lemma~\ref{diff-lemma} that the smallest $T(\cdot)$ is obtained either by $p_r$ or by the point whose index is stored at $p_{r}$. Thus, in this case also, we can find the smallest $T(\cdot)$ in constant time.
It only remains to identify, in constant time, the index that we should store at $p_k$ (to be used in next iterations). If $p_k$ is the leftmost point in $B_m$, then we store $k$ at $p_k$. Assume that $p_k$ is not the leftmost point in $B_m$, and let $x$ be the index stored at $p_{k-1}$. In this case, if $T(x)$ is smaller than $T(k)$ then we store $x$ at $p_k$, otherwise we store $k$. This assignment ensures that $p_k$ stores a correct index.
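In code, the index maintenance is a constant-time update per point. In the following hypothetical fragment, \texttt{idx} and \texttt{T} are arrays indexed like the points and \texttt{leftmost\_of\_block[k]} is true when $p_k$ is the leftmost point of its block; the names are illustrative only.
\begin{verbatim}
def update_index(k, T, idx, leftmost_of_block):
    # Store at p_k the index used by the constant-time computation of T(k).
    if leftmost_of_block[k]:
        idx[k] = k
    else:
        x = idx[k - 1]
        idx[k] = x if T[x] < T[k] else k
\end{verbatim}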
Based on the above discussion we can compute $T(k)$ and identify the index at $p_k$ in constant time. Therefore, our algorithm computes all values of $T(\cdot)$ in $O(n)$ total time. The following theorem summarizes our result in this section.
\begin{theorem}
A minimum consistent subset of $n$ collinear colored points can be computed in $O(n)$ time, provided that the points are given from left to right. \end{theorem}
\subsection{Points on Two Parallel Lines} \label{mix-section}
In this section we study the consistent subset problem on points that are placed on two parallel lines.
Let $P$ and $Q$ be two disjoint sets of colored points of total size $n$, such that the points of $P$ are on a straight line $L_P$ and the points of $Q$ are on a straight line $L_Q$ that is parallel to $L_P$. The goal is to find a minimum consistent subset for $P\cup Q$. We present a top-down dynamic programming algorithm that solves this problem in $O(n^6)$ time. By a suitable rotation and reflection we may assume that $L_P$ and $L_Q$ are horizontal and $L_P$ lies above $L_Q$. If either of the sets $P$ and $Q$ is empty, then this problem reduces to the collinear version that is discussed in Section~\ref{collinear-section}. Assume that neither $P$ nor $Q$ is empty. An optimal solution may contain points from only $P$, only $Q$, or from both $P$ and $Q$. We consider these three cases and pick one that gives the minimum number of points:
\begin{enumerate}
\item {\em The optimal solution contains points from only $Q$.} Consider any solution $S\subseteq Q$. For every point $p\in P$, let $p'$ be the vertical projection of $p$ on $L_Q$. Then, a point $s\in S$ is the closest point to $p$ if and only if $s$ is the closest point to $p'$. This observation suggests the following algorithm for this case: First project all points of $P$ vertically on $L_Q$; let $P'$ be the resulting set of points. Then, solve the consistent subset problem for the points in $Q\cup P'$, which are collinear on $L_Q$, under the constraint that the points of $P'$ must not be included in the solution but must be taken into account in the validity check. This can be done in $O(n)$ time by modifying the algorithm of Section~\ref{collinear-section}.
\item {\em The optimal solution contains points from only $P$.} The solution of this case is analogous to that of the previous case.
\item {\em The optimal solution contains points from both $P$ and $Q$.} The description of this case is more involved. Add two dummy points $p^-$ and $p^+$ at $-\infty$ and $+\infty$ on $L_P$, respectively. Analogously, add $q^-$ and $q^+$ on $L_Q$. Color these four points by four new colors that are different from the colors of points in $P\cup Q$. See Figure~\ref{mix-high-level-fig}. Set $D=\{p^+,p^-,q^+,q^-\}$. Observe that every solution for $P\cup Q\cup D$ contains all points of $D$. Moreover, by removing $D$ from any optimal solution of $P\cup Q\cup D$ we obtain an optimal solution for $P\cup Q$. Therefore, to compute an optimal solution for $P\cup Q$, we first compute an optimal solution for $P\cup Q\cup D$ and then remove $D$. In the rest of this section we show how to compute an optimal solution for $P\cup Q\cup D$. Without loss of generality, from now on, we assume that $p^-$ and $p^+$ belong to $P$, and $q^-$ and $q^+$ belong to $Q$. For a point $p$ let $\ell_p$ be the vertical line through $p$.
\begin{figure}
\caption{The pair $(p,q)$ is the closest pair in the optimal solution where $p\in P\setminus\{p^+,p^-\}$ and $q\in Q\setminus\{q^+,q^-\}$. This pair splits the problem into two independent subproblems.}
\label{mix-high-level-fig}
\end{figure}
In the following description the term ``solution'' refers to an optimal solution. Consider a solution for this problem with input pair $(P,Q)$, and let $p$ and $q$ be the closest pair in this solution such that $p\in P\setminus\{p^+,p^-\}$ and $q\in Q\setminus\{q^+,q^-\}$ (for now assume that such a pair exists; later we deal with the remaining cases). These two points split the problem into two subproblems $(P_1,Q_1)$ and $(P_2,Q_2)$ where $P_1$ contains all points of $P$ that are to the left of $p$ (including $p$), $P_2$ contains all points of $P$ that are to the right of $p$ (including $p$), and $Q_1, Q_2$ are defined analogously. Our choice of $p$ and $q$ ensures that no point in the solution lies between the vertical lines $\ell_p$ and $\ell_q$, because otherwise that point, together with $p$ or $q$, would form a closer pair. See Figure~\ref{mix-high-level-fig}. Thus, $(P_1,Q_1)$ and $(P_2,Q_2)$ are independent instances of the problem in the sense that for any point in $P_1\cup Q_1$ (resp. $P_2\cup Q_2$) its closest point in the solution belongs to $P_1\cup Q_1$ (resp. $P_2\cup Q_2$). Therefore, if $p$ and $q$ are given to us, we can solve $(P,Q)$ as follows: First we recursively compute a solution for $(P_1,Q_1)$ that contains $p^-,q^-,p,q$ and does not contain any point between $\ell_p$ and $\ell_q$. We compute an analogous solution for $(P_2,Q_2)$ recursively. Then, we take the union of these two solutions as our solution of $(P,Q)$. We do not know $p$ and $q$, and thus we try all possible choices.
Let $p_1,p_2,\dots,p_{|P|}$ and $q_1,q_2,\dots,q_{|Q|}$ be the points of $P$ and $Q$, respectively, from left to right, where $p_1=p^-$ and $q_1=q^-$. In later steps in our recursive solution we get subproblems of type $S(i,j,k,l)$ where the input to this subproblem is $\{p_i,\dots,p_j\}\cup\{q_k,\dots,q_l\}$ and we want to compute a minimum consistent subset that
\begin{itemize}
\item contains $p_i, p_j,q_k$, and $q_l$, and
\item does not contain any point between $\ell_{p_i}$ and $\ell_{q_k}$, nor any point between $\ell_{p_j}$ and $\ell_{q_l}$.
\end{itemize}
To simplify the following description, we may also refer to $S(\cdot)$ as a four-dimensional matrix where each of its entries stores the size of the solution for the corresponding subproblem; the solution itself can also be retrieved from $S(\cdot)$. The solution of the original problem will be stored in $S(1,|P|,1,|Q|)$. In the rest of this section we show how to solve $S(i,j,k,l)$ by a top-down dynamic programming approach. Let $p_{i'}$ and $q_{k'}$ be the first points of $P$ and $Q$, respectively, that are to the right of both $\ell_{p_i}$ and $\ell_{q_k}$, and let $p_{j'}$ and $q_{l'}$ be the first points of $P$ and $Q$, respectively, that are to the left of both $\ell_{p_j}$ and $\ell_{q_l}$; see Figure~\ref{mix-fig-1}. Depending on whether or not the solution of $S(i,j,k,l)$ contains points from $\{p_{i'},\dots,p_{j'}\}$ and $\{q_{k'},\dots,q_{l'}\}$ we consider the following three cases and pick one that minimizes $S(i,j,k,l)$.
\begin{enumerate}
\item {\em The solution does not contain points from any of $\{p_{i'},\dots,p_{j'}\}$ and $\{q_{k'},\dots,q_{l'}\}$.} Thus, the solution contains only $p_i$, $p_j$, $q_k$, and $q_l$. To handle this case, we verify the validity of $\{p_i, p_j,q_k, q_l\}$. If this set is a valid solution, then we assign $S(i,j,k,l)=4$, otherwise we assign $S(i,j,k,l)=+\infty$.
\item {\em The solution contains points from both $\{p_{i'},\dots,p_{j'}\}$ and $\{q_{k'},\dots,q_{l'}\}$.} Let $p_s\in\{p_{i'},\allowbreak \dots,\allowbreak p_{j'}\}$ and $q_t\in\{q_{k'},\dots,q_{l'}\}$ be two such points with minimum distance. Our choice of $p_s$ and $q_t$ ensures that no point of the solution lies between $\ell_{p_s}$ and $\ell_{q_t}$; see Figure~\ref{mix-fig-1}. Therefore, the solution of $S(i,j,k,l)$ is the union of the solutions of subproblems $S(i,s,k,t)$ and $S(s,j,t,l)$. Since we do not know $s$ and $t$, we try all possible pairs and pick one that minimizes $S(\cdot)$, that is
$$S(i,j,k,l)=\min\{S(i,s,k,t)+S(s,j,t,l)-2\mid i'\leqslant s \leqslant j',~k' \leqslant t \leqslant l'\},$$
where ``$-2$" comes from the fact that $p_s$ and $q_t$ are counted twice. The validity of this solution for $S(i,j,k,l)$ is ensured by the validity of the solutions of $S(i,s,k,t)$ and $S(s,j,t,l)$, and the fact that these solutions do not contain any point between $\ell_{p_s}$ and $\ell_{q_t}$.
\begin{figure}
\caption{$(p_s,q_t)$ is the closest pair in the solution where $s\in\{i',\dots,j'\}$ and $t\in\{k',\dots,l'\}$.}
\label{mix-fig-1}
\end{figure}
\item {\em The solution contains points from $\{q_{k'},\dots,q_{l'}\}$ but not from $\{p_{i'},\dots,p_{j'}\}$, or vice versa.} Because of symmetry, we only describe how to handle the first case. If the solution contains exactly one point from $\{q_{k'},\dots,q_{l'}\}$, then we can easily solve this subproblem by trying every point $q_t$ in this set and picking one for which $\{p_i,p_j,q_k,q_t,q_l\}$ is a valid solution; in this case we set $S(i,j,k,l)=5$. Hereafter assume that the solution contains at least two points from $\{q_{k'},\dots,q_{l'}\}$. Let $q_s$ and $q_t$ be the leftmost and rightmost such points, respectively. Consider the Voronoi diagram of $p_i,q_k,q_s$ and the Voronoi diagram of $p_j,q_l,q_t$. Depending on whether or not the Voronoi cells of $q_k$ and $q_l$ intersect the line segment $p_ip_j$ we consider the following two cases.
\begin{enumerate}
\item {\em The Voronoi cell of $q_k$ or the Voronoi cell of $q_l$ does not intersect $p_ip_j$.} Because of symmetry we only describe how to handle the case where the Voronoi cell of $q_k$ does not intersect $p_ip_j$. See Figure~\ref{mix-fig-2}. In this case, $q_k$ cannot be the closest point to any of the points $p_{i+1},\dots, p_{j-1}$, and thus, the solution of $S(i,j,k,l)$ consists of $q_k$ together with the solution of $S(i,j,s,l)$. Since we do not know $s$, we try all possible choices. An index $s\in\{k',\dots,l'-1\}$ is {\em valid} if the Voronoi cell of $q_k$---in the Voronoi diagram of $p_i,q_k,q_s$---does not intersect the line segment $p_ip_j$, and every point in $\{q_{k+1},\dots,q_{s-1}\}$ has the same color as its closest point among $p_i$, $q_k$, and $q_s$. We try all possible choices of $s$ and pick one that is valid and minimizes $S(i,j,k,l)$. Thus,
$$S(i,j,k,l)=\min \{S(i,j,s,l)+1\mid k'\leqslant s\leqslant l'-1 \text{ and $s$ is valid}\}.$$
\begin{figure}
\caption{The Voronoi cell of $q_k$ does not intersect the line segment $p_ip_j$.}
\label{mix-fig-2}
\end{figure}
\item {\em The Voronoi cells of both $q_k$ and $q_l$ intersect $p_ip_j$.} In this case the Voronoi cells of both $q_t$ and $q_s$ also intersect $p_ip_j$; see Figure~\ref{mix-fig-3}.
In the following description we slightly abuse the notation and refer to the input points $\{p_i,\dots,p_j\}$ and $\{q_k,\dots,q_l\}$ by $P$ and $Q$, respectively. Let $p_{s'}$ be the first point of $P$ to the right of $\ell_{q_s}$, and let $p_{t'}$ be the first point of $P$ to the left of $\ell_{q_t}$. Let $P'=\{p_{s'},\dots,p_{t'}\}$ and $Q'=\{q_{s},\dots,q_{t}\}$. Consider any (not necessarily optimal) solution of $S(i,j,k,l)$ that consists of $V=\{p_i, p_j, q_k, q_t, q_s, q_l\}$ and some other points in $\{q_{s+1},\dots,q_{t-1}\}$. The closest point in this solution to any point of $(P\cup Q)\setminus (P'\cup Q')$ is in $V$. Thus, the (optimal) solution of $S(i,j,k,l)$ consists of $V$ and the optimal solution $S'$ of the consistent subset problem on $P'\cup Q'$ provided that $q_s$ and $q_t$ are in $S'$ and no point of $P'$ is in $S'$. Let $T(s,t)$ denote this new problem on $P'\cup Q'$. We solve $T(s,t)$ by a method similar to that of case 1: First we project the points of $P'$ on $L_Q$ and then we solve the problem for collinear points. Let $P''$ be the set of projected points. To solve $T(s,t)$, we solve the consistent subset problem for $Q'\cup P''$, which are collinear, under the constraint that the solution contains $q_s$ and $q_t$ and does not contain any point of $P''$; see Figure~\ref{mix-fig-3}. This can be done simply by modifying the algorithm of Section~\ref{collinear-section}.
Therefore, $S(i,j,k,l)=T(s,t)+4$. A pair $(s,t)$ of indices is valid if for every point $x$ in $(P\cup Q)\setminus (P'\cup Q')$ it holds that the color of $x$ is the same as the color of $x$'s closest point in $V$.
Since we do not know $s$ and $t$ we try all possible pairs and pick one that is valid and minimizes $S(i,j,k,l)$. Therefore,
$$S(i,j,k,l)=\min \{T(s,t)+4\mid k< s<t<l \text{ and $(s,t)$ is valid}\}.$$
\begin{figure}
\caption{The Voronoi cells of both $q_k$ and $q_l$ intersect the line segment $p_ip_j$.}
\label{mix-fig-3}
\end{figure}
\end{enumerate}
\end{enumerate}
\end{enumerate}
\paragraph{Running Time Analysis:} Cases 1 and 2 can be handled in $O(n\log n)$ time. Case 3 involves four subcases (a), (b), (c)-i, and (c)-ii. We classify the subproblems in these subcases by types 3(a), 3(b), 3(c)-i, and 3(c)-ii, respectively. The number of subproblems of each type is $O(n^4)$. For every subproblem of type 3(a) we only need to verify the validity of $\{p_i, p_j, q_k, q_l\}$; this can be done in $O(n)$ time. Every subproblem of type 3(b) can be solved in $O(n^2)$ time by trying all pairs $(s,t)$. Every subproblem of type 3(c)-i can be solved in $O(n^2)$ time by trying $O(n)$ possible choices for $s$ and verifying the validity of each of them in $O(n)$ time.
\begin{wrapfigure}{r}{2.5in}
\centering
\includegraphics[width=2.4in]{fig/mix-d-2.pdf}
\end{wrapfigure} Now we show that every subproblem of type 3(c)-ii can also be solved in $O(n^2)$ time. To solve every such subproblem we try $O(n^2)$ pairs $(s,t)$ and we need to verify the validity of every pair. To verify the validity of $(s,t)$ we need to make sure that every point in $(P\cup Q)\setminus (P'\cup Q')$ has the same color as its closest point in $V=\{p_i, p_j, q_k, q_s,q_t,q_l\}$. The Voronoi diagrams of $p_i,q_k, q_s$ and $p_j,q_l, q_t$ together with the lines $\ell_{q_s}$ and $\ell_{q_t}$ partition the points of $(P\cup Q)\setminus (P'\cup Q')$ into 10 intervals, 6 intervals on $L_P$ and 4 intervals on $L_Q$; see the figure to the right. For $(s,t)$ to be feasible it is necessary and sufficient that all points in every interval $I$ have the same color as the point in $V$ that has $I$ in its Voronoi cell. If we know the color of points in each of these 10 intervals, then we can verify the validity of $(s,t)$ in constant time. The total number of such intervals is $O(n^2)$ and we can compute in $O(n^2)$ preprocessing time the color of all of them. Therefore, after $O(n^2)$ preprocessing time we can solve all subproblems of type 3(c)-ii in $O(n^6)$ time. Notice that the total number of subproblems of type $T(s,t)$ in case 3(c)-ii is $O(n^2)$ and we can solve all of them in $O(n^3\log n)$ time before solving subproblems $S(i,j,k,l)$. The following theorem summarizes our result in this section.
\begin{theorem}
A minimum consistent subset of $n$ colored points on two parallel lines can be computed in $O(n^6)$ time. \end{theorem}
\subsection{Bichromatic Points on Two Parallel Lines} \label{bichromatic-section}
Let $P$ be a set of $n$ points on two parallel lines in the plane such that all points on one line are colored red and all points on the other line are colored blue. We present a top-down dynamic programming algorithm that solves the consistent subset problem on $P$ in $O(n^4)$ time. By a suitable rotation and reflection we may assume that the lines are horizontal, and the red points lie on the top line. Let $R$ and $B$ denote the sets of red and blue points, respectively. Let $r_1,\dots,r_{|R|}$ and $b_1,\dots,b_{|B|}$ be the sequences of red points and blue points from left to right, respectively. For each $i\in\{1,\dots, |R|\}$ let $R_i$ denote the set $\{r_1,\dots,r_i\}$, and for each $j\in\{1,\dots, |B|\}$ let $B_j$ denote the set $\{b_1,\dots,b_j\}$. For a point $p$ let $\ell_p$ be the vertical line through $p$.
Any optimal solution for this problem contains at least one blue point and one red point. Moreover, the two rightmost points in any optimal solution have distinct colors, because otherwise we could remove the rightmost one and reduce the size of the optimal solution. We solve this problem by guessing the two rightmost points in an optimal solution; in fact we try all pairs $(r_i, b_j)$ where $i\in\{1,\dots, |R|\}$ and $j\in\{1,\dots, |B|\}$. For every pair $(r_i, b_j)$ we solve the consistent subset problem on $R_i\cup B_j$ provided that $r_i$ and $b_j$ are in the solution, and no point between the vertical lines $\ell_{r_i}$ and $\ell_{b_j}$ is in the solution (because $r_i$ and $b_j$ are the two rightmost points in the solution). Then, among all pairs $(r_i, b_j)$ we choose one whose corresponding solution is a valid consistent subset for $R\cup B$ and has the minimum number of points. The solution corresponding to $(r_i, b_j)$ is a valid consistent subset for $R\cup B$ if for every $x\in\{i+1, \dots,|R|\}$, the point $r_x$ is closer to $r_i$ than to $b_j$, and for every $y\in\{j+1, \dots,|B|\}$, the point $b_y$ is closer to $b_j$ than to $r_i$. To analyze the running time, notice that we guess $O(n^2)$ pairs $(r_i, b_j)$. In the rest of this section we show how to solve the subproblem associated with each pair $(r_i, b_j)$ in $O(n^2)$ time. The validity of the solution corresponding to $(r_i,b_j)$ can be verified in $O(|R|+|B|-i-j)$ time. Therefore, the total running time of our algorithm is $O(n^4)$.
\begin{figure}
\caption{Illustration of the recursive computation of $T(i,j)$, where (a) $b_j$ is the only blue point in the solution that is to the right of $\ell_{r_s}$, and (b) $b_t$ and $b_j$ are the only two blue points in the solution that are to the right of $\ell_{r_s}$. The crossed points cannot be in the solution.}
\label{two-line-fig}
\end{figure}
To solve subproblems associated with pairs $(r_i,b_j)$, we maintain a table $T$ with $|R|\cdot|B|$ entries $T(i,j)$ where $i\in\{1,\dots, |R|\}$ and $j\in\{1,\dots, |B|\}$. Each entry $T(i,j)$ represents the number of points in a minimum consistent subset of $R_i\cup B_j$ provided that $r_i$ and $b_j$ are in this subset and no point of $R_i\cup B_j$ that lies between $\ell_{r_i}$ and $\ell_{b_j}$ is in this subset. We use dynamic programming and show how to compute $T(i,j)$ in a recursive fashion. By symmetry we may assume that $r_i$ is to the right of $\ell_{b_j}$. In the following description the term ``solution'' refers to an optimal solution associated with $T(i,j)$. Let $r_{i'}$ be the first red point to the left of $\ell_{b_j}$. Observe that if the solution does not contain any red point other than $r_i$, then $\{r_i,b_j\}$ is the solution, i.e., the solution does not contain any blue point other than $b_j$ either. Assume that the solution contains some other red points, and let $r_s$, with $s\in\{1,\dots, i'\}$, be the rightmost such point. Let $b_{j'}$ be the first blue point to the right of $\ell_{r_s}$. Now we consider two cases depending on whether or not the solution contains any blue point (other than $b_j$) to the right of $\ell_{r_s}$.
\begin{itemize}
\item The solution does not contain any other blue point to the right of $\ell_{r_s}$. In this case $T(i,j)=T(s,j) +1$; see Figure~\ref{two-line-fig}(a).
\item The solution contains some other blue points to the right of $\ell_{r_s}$. Let $b_t$, with $t\in\{j',\dots, j-1\}$, be the rightmost such point. In this case the solution does not contain any blue point that is to the left of $b_t$ and to the right of $\ell_{r_s}$ because otherwise we could remove $b_t$ from the solution. Therefore $T(i,j)=T(s,t)+2$; see Figure~\ref{two-line-fig}(b). \end{itemize} Since we do not know $s$ and $t$, we try all possible values and choose one that is {\em valid} and that minimizes $T(i,j)$. Therefore \[ T(i,j) = \min \begin{cases} T(s,j)+1: s\in\{1,\dots, i'\} \text{ and } s \text{ is valid}\\ T(s,t)+2: s\in\{1,\dots, i'\}, t\in\{j',\dots, j-1\} \text{ and } (s,t) \text{ is valid}. \end{cases} \] In the first case, an index $s$ is valid if for every $x\in\{s+1, \dots,i-1\}$ the point $r_x$ is closer to $r_s$ or $r_i$ than to $b_j$. In the second case, a pair $(s,t)$ is valid if for every $x\in\{s+1, \dots,i-1\}$ the point $r_x$ is closer to $r_s$ or $r_i$ than to $b_t$ and $b_j$, and for every $y\in\{t+1, \dots,j-1\}$ the point $b_y$ is closer to $b_t$ or $b_j$ than to $r_s$ and $r_i$.
To compute $T(i,j)$, we perform $O(n^2)$ look-ups into table $T$, and thus, the time to compute $T(i,j)$ is $O(n^2)$. There is a final issue that we need to address, which is checking the validity of $s$ and $t$ within the same time bound. In the first case we have $O(n)$ look-ups for finding $s$. We can verify the validity of each choice of $s$, in $O(n)$ time, by simply checking the distances of all points in $R'=\{r_{s+1},\dots,r_{i'}\}$ from $r_s$, $r_i$ and $b_j$. Now we consider the second case and describe how to verify, for a fixed $t$, the validity of all pairs $(s,t)$ in $O(n)$ time. First of all observe that in this case, any point $b_y$ with $y\in\{t+1, \dots,j-1\}$, is closer to $b_t$ or $b_j$ than to $r_s$ and $r_i$. Therefore, to check the validity of $(s,t)$ it suffices to consider the points in $R'$. Let $r_{t_1}$ be the first point of $R'$ that is to the left of $\ell_{b_t}$, and let $r_{t_2}$ be the first point of $R'$ that is to the right of $\ell_{b_t}$. Define $r_{j_1}$ and $r_{j_2}$ accordingly but with respect to $\ell_{b_j}$. If there is a point in $R'$ that is closer to $b_t$ than to $r_s$ and $r_i$, then $r_{t_1}$ or $r_{t_2}$ is closer to $b_t$ than to $r_s$ and $r_i$. A similar claim holds for $r_{j_1}$, $r_{j_2}$, and $b_j$. Therefore, to check the validity of $(s,t)$ it suffices to check the distances of $r_{t_1}$, $r_{t_2}$, $r_{j_1}$ and $r_{j_2}$ from the points $r_s$, $r_i$, $b_t$ and $b_j$. This can be done in $O(n)$ time for all $s$ and a fixed $t$. (If any of the points $r_{t_1}$, $r_{t_2}$, $r_{j_1}$ and $r_{j_2}$ is undefined then we do not need to check that point.) The following theorem wraps up this section.
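As a small illustration, the following hedged sketch performs the validity test for one pair $(s,t)$ using only the four witness points just described; the witness points are assumed to be precomputed (with \texttt{None} standing for an undefined witness) and \texttt{dist} is the Euclidean distance.
\begin{verbatim}
def pair_is_valid(r_s, r_i, b_t, b_j, witnesses, dist):
    # witnesses: the points r_{t_1}, r_{t_2}, r_{j_1}, r_{j_2} (or None).
    # (s, t) is valid iff every witness is closer to r_s or r_i
    # than to both b_t and b_j.
    for w in witnesses:
        if w is None:
            continue
        d_red = min(dist(w, r_s), dist(w, r_i))
        if d_red >= dist(w, b_t) or d_red >= dist(w, b_j):
            return False
    return True
\end{verbatim}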
\begin{theorem}
Let $P$ be a set of $n$ bichromatic points on two parallel lines, such that all points on the same line have the same color. Then, a minimum consistent subset of $P$ can be computed in $O(n^4)$ time. \end{theorem}
\section{Point-Cone Incidence} \label{point-cone-section} In this section we will prove the following theorem. \begin{theorem}
\label{point-cone-thr}
Let $\mathcal{C}$ be a cone in $\mathbb{R}^3$ with non-empty interior that is given as the intersection of $n$ halfspaces. Given $n$ translations of $\mathcal{C}$ and a set of $n$ points in $\mathbb{R}^3$, we can decide in $O(n\log n)$ time whether or not there is a point-cone incidence. \end{theorem}
We first provide an overview of the approach and its key ingredients. Let $\mathcal{C}_1,\dots ,\mathcal{C}_n$ be the $n$ cones that are translations of $\mathcal{C}$, and let $P$ denote the set of $n$ input points whose incidences with these cones we want to check. Consider a direction $d$ such that $\mathcal{C}$ contains an infinite ray from its apex in direction $d$. After a transformation, we may assume that $d$ is vertically upward. Consider the lower envelope of the $n$ cones; we want to decide whether there is a point of $P$ above this lower envelope; see Figure~\ref{fig:domain}(a). To that end, we first find, for every point $p\in P$, the cone $\mathcal{C}_i$ at which the vertical line through $p$ intersects the lower envelope. Then, we check whether or not $p$ lies above $\mathcal{C}_i$.
Since all the cones are translations of a common cone, their lower envelope can be interpreted as a Voronoi diagram with respect to a distance function defined by a convex polygon obtained by intersecting $\mathcal{C}$ with a horizontal plane. Furthermore, in this interpretation, the sites have additive weights that correspond to the vertical shifts in the translations of the cones. Therefore, the lower envelope of the cones can be interpreted as an additively-weighted Voronoi diagram with respect to a convex distance function; the {\em sites} of such a diagram are the projections of the apices of the cones onto the plane. In order to find the cone (on the lower envelope) that is intersected by the vertical line through $p$, it suffices to locate $p$ in such a Voronoi diagram, i.e., to find $p$'s closest site.
We adapt the sweep-line approach used by McAllister, Kirkpatrick and Snoeyink~\cite{McAllister1996} for computing compact Voronoi diagrams for disjoint convex regions with respect to a convex metric. In a compact Voronoi diagram, one has a linear-size partition of the plane into cells, where each cell has two possible candidates to be the closest site. Such a structure is enough to find the closest site to every point $p$: first we locate $p$ in this partition to identify the cell that contains $p$, and then we compute the distance of $p$ to the two candidate sites of that cell to find the one that is closer to $p$. The complexity of such a compact Voronoi diagram, in the worst case, is smaller than the complexity of the traditional Voronoi diagram. Now, we describe our adaptation, which involves some modifications of the approach of McAllister {et~al. } \cite{McAllister1996}. Here are the key differences encountered in our adaptation: \begin{itemize}
\item The additive weights on the sites can be interpreted as regions defined by
convex polygons, but then they are not necessarily disjoint (as required in \cite{McAllister1996}).
\item In our case, the Voronoi vertices can be computed faster because the metric and the site regions (encoding the weights) are defined by the same polygon.
\item By splitting $\mathcal{C}$ into two cones that have direction $d$ on their boundaries,
we can assume that the sweep line and the front line (also referred to as the beach line) coincide; this makes the computation of the Voronoi diagram easier.
\item Since the query points (the points of $P$) are already known, we do not need to make a data structure
for point location or to construct the compact Voronoi diagram
explicitly. It suffices to make point location on the
front line (which is the sweep line in our case) when it passes over a point of $P$. \end{itemize}
Notice that some of the cones can be contained in other cones, and thus do not appear on the lower envelope of the cones. Bhattacharya et al. \cite{Bhattacharya2010} claimed a randomized algorithm to find, in $O(n\log n)$ expected time, the apices of the cones that appear on the lower envelope of the cones. They discussed a randomized incremental construction, which is an adaptation of another algorithm presented by McAllister, Kirkpatrick and Snoeyink~\cite{McAllister1996}. Nevertheless, a number of aspects in the construction of \cite{Bhattacharya2010} are not clear. Our approach is deterministic, and also solves their problem in $O(n \log n)$ worst-case time.
Now we provide the details of our adapted approach. Consider the cone $\mathcal{C}$ and let $a$ be its apex. Let $r$ be a ray emanating from $a$ in the interior of $\mathcal{C}$ such that the plane $\pi$, that is orthogonal to $r$ at $a$, intersects $\mathcal{C}$ only in $a$. We make a rigid motion where the apex of $\mathcal{C}$ becomes the origin, $r$ becomes vertical, and $\pi$ becomes the horizontal plane defined by $z=0$. The ray $r$, the plane $\pi$, and the geometric transformation can be computed in $O(n)$ time using linear programming in fixed dimension~\cite{Megiddo1984}. From now on, we will assume that the input is actually given after the transformation. Let $\mathcal{C}'$ be the intersection of $\mathcal{C}$ with the halfspace $x\geqslant 0$, and let $\mathcal{C}''$ be the intersection of $\mathcal{C}$ with the halfspace $x\leqslant 0$. See Figure~\ref{fig:domain}(a). Since we took $r$ in the interior of $\mathcal{C}$, both $\mathcal{C}'$ and $\mathcal{C}''$ have nonempty interiors.
Let $\mathcal{C}_1,\dots ,\mathcal{C}_n$ be the cones after the above transformation. For each $i$, let $(a_i,b_i,c_i)$ be the apex of $\mathcal{C}_i$. Recall that, by assumption, each cone $\mathcal{C}_i$ is the translation of $\mathcal{C}$ that brings $(0,0,0)$ to $(a_i,b_i,c_i)$. We split each cone $\mathcal{C}_i$ into two cones, denoted $\mathcal{C}'_i$ and $\mathcal{C}''_i$, using the plane $x=a_i$. Notice that $\mathcal{C}'_1,\dots,\mathcal{C}'_n$ are translations of $\mathcal{C}'$, and $\mathcal{C}''_1,\dots,\mathcal{C}''_n$ are translations of $\mathcal{C}''$. We split the problem into two subproblems: in one of them we want to find a point-cone incidence between $P$ and $\mathcal{C}'_1,\dots,\mathcal{C}'_n$, and in the other we want to find a point-cone incidence between $P$ and $\mathcal{C}''_1,\dots,\mathcal{C}''_n$. Any point-cone incidence $(p,\mathcal{C}'_i)$ or $(p,\mathcal{C}''_i)$ corresponds to a point-cone incidence $(p,\mathcal{C}_i)$, and vice versa. We explain how to solve the point-cone incidence problem for $P$ and the cones $\mathcal{C}'_i$; the incidence for the cones $\mathcal{C}''_i$ is similar.
Recall that the origin is the apex of $\mathcal{C}'$. We define $M$ to be the polygon obtained by intersecting $\mathcal{C}'$ with the horizontal plane $z=1$. Note that $M$ lies in the halfspace $x\ge 0$. Our choice of $r$ in the interior of $\mathcal{C}$ implies that $M$ has a nonempty interior and $(0,0,1)$ lies in the (relative) interior of a boundary edge of $M$.
See Figure~\ref{fig:domain}(a). Since $M$ is a convex polygon that is the intersection of $n$ halfplanes with the plane $z=1$, it can be computed in $O(n\log n)$ time. In the rest of the description, we regard $M$ as a polygon in $\mathbb{R}^2$, obtained by simply dropping the $z$-coordinate, as in Figure~\ref{fig:domain}(b).
\begin{figure}
\caption{(a) The cone $\mathcal{C}$ that is split into $\mathcal{C}'$ and $\mathcal{C}''$, and the polygon $M$ which is the intersection of the plane $z=1$ with $\mathcal{C}'$. (b) The domain $H_i$, and the translation of $M$ that brings $(0,0)$ to $(a_i,b_i)$ followed by a scaling with factor $\lambda$.}
\label{fig:domain}
\end{figure}
Let $H_i$ denote the projection of $\mathcal{C}'_i$ on the $xy$-plane. Note that $H_i$ is the halfplane defined by $x\ge a_i$ because we took $r$ in the interior of $\mathcal{C}$. See Figure~\ref{fig:domain}. Let $H$ be the union of all halfplanes $H_i$, and note that $H$ is defined by $x\geqslant \min \{ a_1,\dots, a_n\}$.
The boundary of every cone $\mathcal{C}'_i$ can be interpreted as a function $f_i\colon H_i\rightarrow \mathbb{R}$ where $f_i(x,y)= \min \{ \lambda \in \mathbb{R}_{\ge 0} \mid (x,y,\lambda)\in \mathcal{C}'_i \}$. Alternatively, for every $(x,y)\in H_i\setminus \{ (a_i,b_i)\}$ we have \[ f_i(x,y)=c_i+\min \{\lambda\geqslant 0\mid (x,y)\in (a_i,b_i)+\lambda M\} = c_i+\min \{\lambda>0\mid \frac{(x,y)-(a_i,b_i)}{\lambda} \in M\}, \] where $\lambda$ is the smallest factor by which $M$ must be scaled, after a translation to $(a_i,b_i)$, to include $(x,y)$; see Figure~\ref{fig:domain}(b). Note that if $\mathcal{C}'_i$ contains a point $(x,y,z)$, it also contains $(x,y,z')$ for all $z'\geqslant z$. Therefore, the surface $\{ (x,y,f_i(x,y))\mid (x,y)\in H_i \}$ precisely defines the boundary of $\mathcal{C}'_i$. Based on this, to decide whether a point $(x,y,z)$ lies in $\mathcal{C}'_i$, it suffices to check whether $(x,y)\in H_i$ and $z\geqslant f_i(x,y)$. Notice that every $f_i$ is a convex function on the domain $H_i$. We extend the domain of each $f_i$ to $H$ by setting $f_i(x,y)=\infty$ for all $(x,y)\in H\setminus H_i$. In this way all the functions $f_i$ are defined on the same domain $H$.
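For comparison, the membership predicate itself can be evaluated naively, without $M$ or $f_i$, directly from the halfspace representation of $\mathcal{C}$: after the rigid motion the apex of $\mathcal{C}$ is at the origin, so a point lies in a translated cone exactly when its offset from that cone's apex satisfies every defining inequality. The sketch below assumes each halfspace is given by an outward normal $n_k$, i.e., $\mathcal{C}=\{x \mid n_k\cdot x\leqslant 0 \text{ for all } k\}$; this naive test takes $O(n)$ time per query, which the lemma below improves to $O(\log n)$ after preprocessing.
\begin{verbatim}
def in_translated_cone(p, apex, normals):
    # p, apex: points of R^3 as 3-tuples; normals: outward normals of the
    # halfspaces defining C (an assumption on the input representation).
    q = (p[0] - apex[0], p[1] - apex[1], p[2] - apex[2])
    return all(n[0]*q[0] + n[1]*q[1] + n[2]*q[2] <= 0 for n in normals)
\end{verbatim}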
Let us denote by $F$ the family of functions $\{ f_1,\dots,f_n \}$, and define the pointwise minimization function $f_{\min}(x,y)=\min \{ f_1(x,y),\dots,f_n(x,y) \}$ for every $(x,y)\in H$. For simplicity, we assume that the surfaces defined by $F$ are in \emph{general position} in the sense that, before extending the domains of the functions $f_i$, the following conditions hold: (i) no apex of a cone lies on the boundary of another cone, that is, $f_j(a_i,b_i)\neq c_i$ for all $i\neq j$, (ii) any three surfaces defined by $F$ have a finite number of points in common, and (iii) no four surfaces defined by $F$ have a common point. Such assumptions can be enforced using infinitesimal perturbations.
For each $i$, let $\mathcal{R}_i$ be the subset of $H$ where $f_i$ gives the minimum among all functions in $F$, that is \[ \mathcal{R}_i = \{ (x,y)\in H\mid f_i(x,y)=f_{\min}(x,y)\}. \] This introduces a partition of $H$ into regions $\mathcal{R}_i$; we refer to this partition as the {\em minimization diagram}. We note that $\mathcal{R}_i$ can be empty; this occurs when the apex $(a_i,b_i,c_i)$ of $\mathcal{C}_i$ is contained in the interior of some cone $\mathcal{C}'_j$, with $j\neq i$. We show some folklore properties of the regions $\{\mathcal{R}_1,\dots,\mathcal{R}_n\}$. In the following description, $[n]$ denotes the set $\{1,2,\dots, n\}$.
\begin{lemma}
\label{le:starshaped}
For any index $i\in [n]$, the region $\mathcal{R}_i$ is star-shaped with respect to the point $(a_i,b_i)$.
For any three distinct indices $i,j,k\in [n]$, the intersection $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$
contains at most two points.
For any two distinct indices $i,j\in [n]$ and every line $\ell$ in $\mathbb{R}^2$ where $(a_i, b_i)$ and $(a_j,b_j)$ lie on the same side of $\ell$, the intersection $\ell\cap \mathcal{R}_i\cap \mathcal{R}_j$ contains at most two points. \end{lemma} \begin{proof}
To verify $\mathcal{R}_i$ being star-shaped with respect to $(a_i,b_i)$,
the proof of~\cite[Corollary~2.5]{McAllister1996} applies.
To verify the second claim, notice that if $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$ contains three or more points, then by connecting those points to $(a_i,b_i)$, $(a_j,b_j)$ and $(a_k,b_k)$ with line segments, we would get a planar drawing of the graph $K_{3,3}$, which is impossible. To verify the third claim, note that if for some line $\ell $ the intersection $\ell\cap \mathcal{R}_i\cap \mathcal{R}_j$ contains three or more points, then again we would get an impossible planar drawing of $K_{3,3}$ as follows: we connect those points to $(a_i,b_i)$, $(a_j,b_j)$, and to an arbitrary point on the side of $\ell$ that does not contain $(a_i,b_i)$ and $(a_j,b_j)$. \end{proof}
\begin{lemma}
\label{le:basic-operations}
After $O(n \log n)$ preprocessing time on $M$ we can solve the following problems in $O(\log n)$ time:
\begin{itemize}
\item Given a point $p$ and an index $i\in [n]$, decide whether or not $p\in \mathcal{C}'_i$.
\item Given three distinct indices $i,j,k\in [n]$, compute $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$.
\item Given two distinct indices $i,j\in [n]$ and a vertical line $\ell$ in $\mathbb{R}^2$,
compute $\ell\cap \mathcal{R}_i\cap \mathcal{R}_j$.
\end{itemize} \end{lemma} \begin{proof}
We compute $M$ explicitly in $O(n \log n)$ time
and store its vertices and edges cyclically ordered in an array. Let $-M$ denote $\{(-x,-y)\mid (x,y)\in M\}$.
For each vertex $v$ of $M$ we choose an outer normal vector $\overrightarrow{n_v}$.
We also store for each vertex $v$ of $M$
the vertex of $-M$ that is extremal in the direction $\overrightarrow{n_v}$.
Having $M$, this can be done in linear time by walking
through the boundaries of $M$ and $-M$ simultaneously.
This finishes the preprocessing.
For the first claim, we are given an index $i$ and a point $p=(p_x,p_y,p_z)$. By performing binary search on the edges of $M$ we can find, in $O(\log n)$ time, the edge that intersects the ray emanating from the origin with direction $(p_x,p_y)-(a_i,b_i)$. This edge determines the value $\min \{\lambda\ge 0\mid (p_x,p_y)\in (a_i,b_i)+\lambda M\}$, which in turn gives $f_i(p_x,p_y)$. By comparing $f_i(p_x,p_y)$ with $p_z$ we can decide whether or not $p\in \mathcal{C}'_i$.
Now we prove the second claim. By Lemma~\ref{le:starshaped}, $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$
contains at most two points. Assume, without loss of generality,
that $i=1$, $j=2$, $k=3$ and $c_1=\max \{ c_1,c_2,c_3\}$.
Let $P_1$ be the (degenerate) polygon with a single vertex $(a_1,b_1)$.
Let $P_2$ and $P_3$ be the convex polygons $(a_2,b_2)+(c_1-c_2)M$ and $(a_3,b_3)+(c_1-c_3)M$, respectively. The polygons $P_2$ and $P_3$ might also degenerate to single points, namely when $c_2=c_1$ or $c_3=c_1$, respectively.
A point $(x,y)$ belongs to $\mathcal{R}_1\cap \mathcal{R}_2\cap \mathcal{R}_3$
if and only if for some $\lambda$ the polygon $(x,y)+\lambda(-M)$ is tangent
to $P_1$, $P_2$ and $P_3$.
If there is some containment between the polygons $P_1$, $P_2$ and $P_3$, i.e., one polygon is totally contained in another polygon,
then $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$ is empty. Assume that there is no containment between these polygons. Now we are going to find two convex polygons $P'_2$ and $P'_3$ such that $P_1$, $P'_2$, $P'_3$ are pairwise interior disjoint.
If $P_2$ and $P_3$ are interior disjoint, we take $P'_2=P_2$ and $P'_3=P_3$.
Otherwise, the boundaries of $P_2$ and $P_3$ intersect at most twice because they are homothetic copies of the same convex polygon $M$. We compute the two intersection points $q_1,q_2$ between the boundaries of $P_2$ and $P_3$ in $O(\log n)$ time.
We use the segment $q_1q_2$ to cut $P_2\cap P_3$ so that we obtain
two interior disjoint convex polygons $P'_2\subset P_2$ and $P'_3\subset P_3$
with $P'_2\cup P'_3= P_2\cup P_3$.
The polygon $P'_2$ is described implicitly by the segment $q_1q_2$
and the interval of indices of $M$ that describe the portion of $P_2$ between $q_1$ and $q_2$.
The description of $P'_3$ is similar. With this description, we can perform
binary search on the boundaries of $P'_2$ and $P'_3$.
Now we want to find the (at most two) scaled copies of $-M$ that can be translated
to touch $P_1$, $P'_2$ and $P'_3$. Since the polygons are disjoint, we can use
the tentative prune-and-search technique of
Kirkpatrick and Snoeyink~\cite{Kirkpatrick1995}
as used in~\cite[Lemma~3.15]{McAllister1996}. The procedure makes $O(\log n)$ steps,
where in each step we locate the extreme point of $-M$ in the direction $\overrightarrow{n_v}$ for some vertex $v$ of $P'_i$. Since such
vertices are precomputed, we spend $O(1)$ time in each of
the $O(\log n)$ steps used by the tentative prune-and-search.\footnote{
The running time in~\cite[Lemma~3.15]{McAllister1996} has an extra logarithmic factor
because they spend $O(\log m)$ time to find the extremal vertex in a polygon $M$
with $m$ vertices.}
The proof of the third claim is similar to that of previous claim, where we treat $\ell$ as a degenerate polygon. \end{proof}
Let $A$ be the set of points $\{ (a_1,b_1),\dots (a_n,b_n)\}$ defined by the apices of the cones. We use the sweep-line algorithm of~\cite{McAllister1996} to compute a representation of the minimization diagram. More precisely, we sweep $H$ with a vertical line $\ell\equiv \{ (x,y)\mid x=t\}$, where $t$ goes from $-\infty$ to $+\infty$. In our case, the sweep line and the sweep front (the beach line) are the same because future points of $A$ do not affect the current minimization diagram. During the sweep, we maintain (in a binary search tree) the intersection of $\ell$ with the regions $\mathcal{R}_i$, sorted as they occur along the line $\ell$, possibly with repetitions.
There are two types of events. A {\em vertex event} (or {\em circle event}) occurs when the sweep front goes over a point of $\mathcal{R}_i\cap \mathcal{R}_j\cap \mathcal{R}_k$. In our case, this is when $\ell$ goes over such a point. A \emph{site event} occurs when the sweep line $\ell$ (and thus the sweep front) goes over a point of $A$. The total number of these events is linear.
Vertex events will be handled in the same way as in McAllister {et~al. } \cite{McAllister1996}. We describe how to handle site events. At a site event, we locate the point $(a_i,b_i)$ in the current region $\mathcal{R}_j$ that contains it, as $\mathcal{R}_i$ could be empty. As shown in~\cite[Section~3.2]{McAllister1996}, this location can be done in $O(\log n)$ time using auxiliary information that is carried over during the sweep. Once we have located $(a_i,b_i)$ in $\mathcal{R}_j$, we compare $f_j(a_i,b_i)$ with $c_i$ to decide whether or not $(a_i,b_i,c_i)$ is contained in $\mathcal{C}'_j$. If $(a_i,b_i,c_i)$ belongs to $\mathcal{C}'_j$, with $j\neq i$, then the region $\mathcal{R}_i$ is empty, and we can just ignore the existence of $\mathcal{C}_i$. Otherwise, $\mathcal{R}_i$ is not empty, and we have to insert it into the minimization diagram and update the information associated to $\ell$. Overall, we spend $O(\log n)$ time per site event $(a_i,b_i)$, plus the time needed to find future vertex events triggered by the current site event.
Whenever the line $\ell$ passes through a point $(p_x,p_y)$, where $(p_x,p_y,p_z)\in P$, we can apply the same binary search on the sweep line as for site events. This means that in time $O(\log n)$ we locate the region $\mathcal{R}_j$ that contains $(p_x,p_y)$. Then we check whether or not $(p_x,p_y,p_z)$ belongs to $\mathcal{C}'_j$; this would take an additional $O(\log n)$ time by the first claim in Lemma~\ref{le:basic-operations}. Therefore, we can decide in $O(\log n)$ time whether or not the point $(p_x,p_y,p_z)\in P$ belongs to any of the cones $\mathcal{C}'_1,\dots,\mathcal{C}'_n$.
At any event (site or vertex event) that changes the sequence of regions $\mathcal{R}_i$ intersected by $\ell$, we have to compute possible new vertex events. By Lemma~\ref{le:basic-operations}, this computation takes $O(\log n)$ time; the third claim in this lemma takes care of so-called vertices at infinity in~\cite{McAllister1996}. We note that in~\cite{McAllister1996} this step takes $O(\log n \log m)$ time because of their general setting.
To summarize, we have a linear number of events, each taking $O(\log n)$ time. Therefore, we can decide the existence of a point-cone incidence in $O(n\log n)$ time. This finishes the proof of Theorem~\ref{point-cone-thr}.
\paragraph{Acknowledgement.} This work was initiated at the {\em Sixth Annual Workshop on Geometry and Graphs}, March 11-16, 2018, at the Bellairs Research Institute of McGill University, Barbados. The authors are grateful to the organizers and to the participants of this workshop.
Ahmad Biniaz was supported by NSERC Postdoctoral Fellowship. Sergio Cabello was supported by the Slovenian Research Agency, program P1-0297 and projects J1-8130, J1-8155. Paz Carmi was supported by grant 2016116 from the United States – Israel Binational Science Foundation. Jean-Lou De Carufel, Anil Maheshwari, and Michiel Smid were supported by NSERC. Saeed Mehrabi was supported by NSERC and by Carleton-Fields Postdoctoral Fellowship.
\end{document} | arXiv |
Chen, Zhangxin ; Espedal, Magne ; Ewing, Richard E.
Continuous-time finite element analysis of multiphase flow in groundwater hydrology. (English). Applications of Mathematics, vol. 40 (1995), issue 3, pp. 203-226
MSC: 65M60, 65N30, 76M10, 76S05 | MR 1332314 | Zbl 0847.76030 | DOI: 10.21136/AM.1995.134291
mixed method; finite element; compressible flow; porous media; error estimate; air-water system
A nonlinear differential system for describing an air-water system in groundwater hydrology is given. The system is written in a fractional flow formulation, i.e., in terms of a saturation and a global pressure. A continuous-time version of the finite element method is developed and analyzed for the approximation of the saturation and pressure. The saturation equation is treated by a Galerkin finite element method, while the pressure equation is treated by a mixed finite element method. The analysis is carried out first for the case where the capillary diffusion coefficient is assumed to be uniformly positive, and is then extended to a degenerate case where the diffusion coefficient can be zero. It is shown that error estimates of optimal order in the $L^2$-norm and almost optimal order in the $L^\infty $-norm can be obtained in the nondegenerate case. In the degenerate case we consider a regularization of the saturation equation by perturbing the diffusion coefficient. The norm of error estimates depends on the severity of the degeneracy in diffusivity, with almost optimal order convergence for non-severe degeneracy. Existence and uniqueness of the approximate solution is also proven.
[1] J. Bear: Dynamics of Fluids in Porous Media. Dover, New York, 1972.
[2] F. Brezzi, J. Douglas, Jr., R. Durán, and M. Fortin: Mixed finite elements for second order elliptic problems in three variables. Numer. Math. 51 (1987), 237–250. DOI 10.1007/BF01396752 | MR 0890035
[3] F. Brezzi, J. Douglas, Jr., M. Fortin, and L. Marini: Efficient rectangular mixed finite elements in two and three space variables. RAIRO Modèl. Math. Anal. Numér 21 (1987), 581–604. DOI 10.1051/m2an/1987210405811 | MR 0921828
[4] F. Brezzi, J. Douglas, Jr., and L. Marini: Two families of mixed finite elements for second order elliptic problems. Numer. Math. 47 (1985), 217–235. DOI 10.1007/BF01389710 | MR 0799685
Define a positive integer $n$ to be a factorial tail if there is some positive integer $m$ such that the decimal representation of $m!$ ends with exactly $n$ zeroes. How many positive integers less than $1992$ are not factorial tails?
Let the number of zeros at the end of $m!$ be $f(m)$. We have $f(m) = \left\lfloor \frac{m}{5} \right\rfloor + \left\lfloor \frac{m}{25} \right\rfloor + \left\lfloor \frac{m}{125} \right\rfloor + \left\lfloor \frac{m}{625} \right\rfloor + \left\lfloor \frac{m}{3125} \right\rfloor + \cdots$.
Note that if $m$ is a multiple of $5$, $f(m) = f(m+1) = f(m+2) = f(m+3) = f(m+4)$.
Since $f(m) \le \frac{m}{5} + \frac{m}{25} + \frac{m}{125} + \cdots = \frac{m}{4}$, a value of $m$ such that $f(m) = 1991$ is greater than $7964$. Testing values greater than this yields $f(7975)=1991$.
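To double-check the arithmetic, here is a minimal Python sketch (an illustrative verification only, not part of the original solution):

def trailing_zeros(m):
    # number of trailing zeros of m! = sum of floor(m / 5^k) for k >= 1
    count, power = 0, 5
    while power <= m:
        count += m // power
        power *= 5
    return count

assert trailing_zeros(7975) == 1991
attained = {trailing_zeros(m) for m in range(1, 7976)} - {0}   # positive values of f(m) for m <= 7975
print(len(attained), 1991 - len(attained))                     # prints: 1595 396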
Since $f$ is constant on each block of five consecutive integers $\{5k, 5k+1, \ldots, 5k+4\}$ and increases by at least $1$ from one multiple of $5$ to the next, the positive values attained by $f(m)$ for $m \le 7975$ are exactly $f(5), f(10), \ldots, f(7975)$, giving $\frac{7975}{5} = 1595$ distinct positive integers less than $1992$ that are factorial tails. Thus, there are $1991-1595 = \boxed{396}$ positive integers less than $1992$ that are not factorial tails. | Math Dataset
How many natural-number factors does $\textit{N}$ have if $\textit{N} = 2^3 \cdot 3^2 \cdot 5^1$?
Any positive integer divisor of $N$ must take the form $2^a \cdot 3^b \cdot 5^c$ where $0 \le a \le 3$, $0 \le b \le 2$ and $0 \le c \le 1$. In other words, there are 4 choices for $a$, 3 choices for $b$ and 2 choices for $c$. So there are $4 \cdot 3 \cdot 2 = \boxed{24}$ natural-number factors of $N$. | Math Dataset |
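As a sanity check (a hypothetical brute-force sketch, not part of the original solution), the same count can be obtained by direct enumeration in Python:

N = 2**3 * 3**2 * 5**1                                   # N = 360
print(sum(1 for d in range(1, N + 1) if N % d == 0))     # prints 24 = (3+1)*(2+1)*(1+1)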
Mathematical Models (Fischer)
Mathematical Models: From the Collections of Universities and Museums – Photograph Volume and Commentary is a book on the physical models of concepts in mathematics that were constructed in the 19th century and early 20th century and kept as instructional aids at universities. It credits Gerd Fischer as editor, but its photographs of models are also by Fischer.[1] It was originally published by Vieweg+Teubner Verlag for their bicentennial in 1986, both in German (titled Mathematische Modelle. Aus den Sammlungen von Universitäten und Museen. Mit 132 Fotografien. Bildband und Kommentarband) [2] and (separately) in English translation,[3][4] in each case as a two-volume set with one volume of photographs and a second volume of mathematical commentary.[2][3][4] Springer Spektrum reprinted it in a second edition in 2017, as a single dual-language volume.[1]
Not to be confused with Mathematical Models (Cundy and Rollett).
Topics
The work consists of 132 full-page photographs of mathematical models,[4] divided into seven categories, and seven chapters of mathematical commentary written by experts in the topic area of each category.[1]
These categories are:
• Wire and thread models, of hypercubes of various dimensions, and of hyperboloids, cylinders, and related ruled surfaces, described as "elementary analytic geometry" and explained by Fischer himself.[1][3]
• Plaster and wood models of cubic and quartic algebraic surfaces, including Cayley's ruled cubic surface, the Clebsch surface, Fresnel's wave surface, the Kummer surface, and the Roman surface, with commentary by W. Barth and H. Knörrer.[1][2][3]
• Wire and plaster models illustrating the differential geometry and curvature of curves and surfaces, including surfaces of revolution, Dupin cyclides, helicoids, and minimal surfaces including the Enneper surface, with commentary by M. P. do Carmo, G. Fischer, U. Pinkall, and H. Reckziegel.[1][3]
• Surfaces of constant width including the surface of rotation of the Reuleaux triangle and the Meissner bodies, described by J. Böhm.[1][2][3]
• Uniform star polyhedra, described by E. Quaisser.
• Models of the projective plane, including the Roman surface (again), the cross-cap, and Boy's surface, with commentary by U. Pinkall that includes its realization by Roger Apéry as a quartic surface (disproving a conjecture of Heinz Hopf).[1][3]
• Graphs of functions, both with real and complex variables, including the Peano surface, Riemann surfaces, exponential function and Weierstrass's elliptic functions, with commentary by J. Leiterer.[1][2][3]
Audience and reception
This book can be viewed as a supplement to Mathematical Models by Martyn Cundy and A. P. Rollett (1950), on instructions for making mathematical models, which according to reviewer Tony Gardiner "should be in every classroom and on every lecturer's shelf" but in fact sold very slowly. Gardiner writes that the photographs may be useful in undergraduate mathematics lectures, while the commentary is best aimed at mathematics professionals in giving them an understanding of what each model depicts. Gardiner also suggests using the book as a source of inspiration for undergraduate research projects that use its models as starting points and build on the mathematics they depict. Although Gardiner finds the commentary at times overly telegraphic and difficult to understand,[4] reviewer O. Giering, writing about the German-language version of the same commentary, calls it detailed, easy-to-read, and stimulating.[2]
By the time of the publication of the second edition, in 2017, reviewer Hans-Peter Schröcker evaluates the visualizations in the book as "anachronistic", superseded by the ability to visualize the same phenomena more easily with modern computer graphics, and he writes that some of the commentary is also "slightly outdated". Nevertheless, he writes that the photos are "beautiful and aesthetically pleasing", writing approvingly that they use color sparingly and aim to let the models speak for themselves rather than dazzling with many color images. And despite the fading strength of its original purpose, he finds the book valuable both for its historical interest and for what it still has to say about visualizing mathematics in a way that is both beautiful and informative.[1]
References
1. Schröcker, Hans-Peter, "Review of Mathematical Models (1st edition)", zbMATH, Zbl 1386.00007
2. Giering, O., "Review of Mathematische Modelle", zbMATH, Zbl 0585.51001
3. Banchoff, T. (1988), "Review of Mathematical Models (1st edition)", Mathematical Reviews, MR 0851009
4. Gardiner, Tony (March 1987), "Review of Mathematical Models (1st edition)", The Mathematical Gazette, 71 (455): 94, doi:10.2307/3616334, JSTOR 3616334, S2CID 165554250
| Wikipedia |
Solidification structure evolution of immiscible Al–Bi–Sn alloys at different cooling rates
Jinchuan Jie, Zhilin Zheng, Shichao Liu, Shipeng Yue, Tingju Li
Journal: Journal of Materials Research , First View
Under conventional solidification conditions, immiscible alloy melt would undergo large-scale composition segregation after liquid–liquid phase separation, resulting in the loss of properties and application value. In the present study, the ternary immiscible Al70Bi10Sn20 alloy was chosen to study the effect of cooling rate on its resultant microstructure by casting the melt under different cooling conditions. The results indicated that the Al–Bi–Sn alloy with a slow cooling rate exhibits a strong spatial phase separation trend during solidification. However, as the cooling rate increases, the decreasing volume fraction of the segregated Bi–Sn-rich regions indicates the efficient suppression of spatial phase separation. The relatively dispersed distribution of Bi–Sn phase in the Al-rich matrix can be obtained by quenching the melt into water. The influence mechanism of cooling rate on the microstructure of the alloy is also discussed. The present study is beneficial to further tailoring the microstructure of immiscible alloys.
Hierarchical mesoporous Zn–Ni–Co–S microspheres grown on reduced graphene oxide/nickel foam for asymmetric supercapacitors
Uwamahoro Evariste, Guohua Jiang, Bo Yu, Yongkun Liu, Zheng Huang, Qiuling Lu, Pianpian Ma
Published online by Cambridge University Press: 02 July 2019, pp. 1-11
In this work, hierarchical mesoporous Zn–Ni–Co–S–rGO/NF microspheres have been prepared by hydrothermal, sulfurization, and subsequent calcination process. The effect of different sulfurization time on the morphology and capacitance of composites was tested. The high electrochemical performance of (Zn–Ni–Co–S–rGO/NF) composite was obtained when the sulfurization time was 3 h (Zn–Ni–Co–S–rGO/NF-3h), where a specific capacitance of 627.7 F/g at 0.25 A/g and excellent rate capability of about 97.8% capacitance retention at 2 A/g after 4000 cycles were achieved. Moreover, an asymmetric supercapacitor fabricated by (Zn–Ni–Co–S–rGO/NF-3h) composite and activated carbon (AC) as the positive and the negative electrodes, respectively, showed a high energy density of 75.96 W h/kg at a power density of 362.49 W/kg with a remarkable cycle stability performance of 91.2% capacitance retention over 5000 cycles. This incredible electrochemical behavior illustrates that the hierarchical mesoporous Zn–Ni–Co–S–rGO/NF-3h microsphere electrodes are promising electrode materials for application in high-performance supercapacitors.
A COMPREHENSIVE STUDY OF THE 14C SOURCE TERM IN THE 10 MW HIGH-TEMPERATURE GAS-COOLED REACTOR
X Liu, W Peng, L Wei, M Lou, F Xie, J Cao, J Tong, F Li, G Zheng
Journal: Radiocarbon , First View
Published online by Cambridge University Press: 24 June 2019, pp. 1-15
While assessing the environmental impact of nuclear power plants, researchers have focused their attention on radiocarbon (14C) owing to its high mobility in the environment and important radiological impact on human beings. The 10 MW high-temperature gas-cooled reactor (HTR-10) is the first pebble-bed gas-cooled test reactor in China that adopted helium as primary coolant and graphite spheres containing tristructural-isotropic (TRISO) coated particles as fuel elements. A series of experiments on the 14C source terms in HTR-10 was conducted: (1) measurement of the specific activity and distribution of typical nuclides in the irradiated graphite spheres from the core, (2) measurement of the activity concentration of 14C in the primary coolant, and (3) measurement of the amount of 14C discharged in the effluent from the stack. All experimental data on 14C available for HTR-10 were summarized and analyzed using theoretical calculations. A sensitivity study on the total porosity, open porosity, and percentage of closed pores that became open after irradiating the matrix graphite was performed to illustrate their effects on the activity concentration of 14C in the primary coolant and activity amount of 14C in various deduction routes.
Visualizing the History and Perspectives of Disaster Medicine: A Bibliometric Analysis
Xinxin Hao, Yunling Liu, Xiaoxue Li, Jingchen Zheng
Journal: Disaster Medicine and Public Health Preparedness , First View
To analyze the development of disaster medicine and to identify the main obstacles to improving disaster medicine research and application.
A topic search strategy was used to search the Web of Science Core Collection database. The 100 articles with the highest local citation scores were selected for bibliometric analysis; summarizing informetric indicators; and preparing a historiography, themes network, and key word co-occurrence map.
The 100 articles with the highest local citation scores were published from 1983 to 2013 in 9 countries, mainly in the United States. The most productive authors were Koenig and Rubinson. The lead research institution was Columbia University. The most commonly cited journal was the Annals of Emergency Medicine. The development of disaster medicine could be separated into 3 consecutive periods. All results indicate that the development of disaster medicine faces some obstacles that need to be addressed.
Research works have provided a solid foundation for disaster medicine, but its development has been in a slow growth period for a long time. Obstacles to the development of disaster medicine include the lack of scientist communities, transdisciplinary research, innovative research perspectives, and continuous research. Future research should overcome these obstacles so as to make further advances in this field.
Annexin A2 Enhances the Progression of Colorectal Cancer and Hepatocarcinoma via Cytoskeleton Structural Rearrangements
Huimin He, Li Xiao, Sinan Cheng, Qian Yang, Jinmei Li, Yifan Hou, Fengying Song, Xiaorong Su, Huijuan Jin, Zheng Liu, Jing Dong, Ruiye Zuo, Xigui Song, Yanyan Wang, Kun Zhang, Wei Duan, Yingchun Hou
Journal: Microscopy and Microanalysis / Volume 25 / Issue 4 / August 2019
Annexin A2 (ANXA2) is reported to be associated with cancer development. To investigate the roles ANXA2 plays during the development of cancer, the RNAi method was used to inhibit the ANXA2 expression in caco2 (human colorectal cancer cell line) and SMMC7721 (human hepatocarcinoma cell line) cells. The results showed that when the expression of ANXA2 was efficiently inhibited, the growth and motility of both cell lines were significantly decreased, and the development of the motility relevant microstructures, such as pseudopodia, filopodia, and the polymerization of microfilaments and microtubules were obviously inhibited. The cancer cell apoptosis was enhanced without obvious significance. The possible regulating pathway in the process was also predicted and discussed. Our results suggested that ANXA2 plays important roles in maintaining the malignancy of colorectal and hepatic cancer by enhancing the cell proliferation, motility, and development of the motility associated microstructures of cancer cells based on a possible complicated signal pathway.
Improvement of ion acceleration in radiation pressure acceleration regime by using an external strong magnetic field
H. Cheng, L. H. Cao, J. X. Gong, R. Xie, C. Y. Zheng, Z. J. Liu
Journal: Laser and Particle Beams / Volume 37 / Issue 2 / June 2019
Print publication: June 2019
Two-dimensional particle-in-cell (PIC) simulations have been used to investigate the interaction between a laser pulse and a foil exposed to an external strong longitudinal magnetic field. Compared with that in the absence of the external magnetic field, the divergence of proton with the magnetic field in radiation pressure acceleration (RPA) regimes has improved remarkably due to the restriction of the electron transverse expansion. During the RPA process, the foil develops into a typical bubble-like shape resulting from the combined action of transversal ponderomotive force and instabilities. However, the foil prefers to be in a cone-like shape by using the magnetic field. The dependence of proton divergence on the strength of magnetic field has been studied, and an optimal magnetic field of nearly 60 kT is achieved in these simulations.
Altered brain functional networks in Internet gaming disorder: independent component and graph theoretical analysis under a probability discounting task
Ziliang Wang, Xiaoyue Liu, Yanbo Hu, Hui Zheng, Xiaoxia Du, Guangheng Dong
Journal: CNS Spectrums , First View
Published online by Cambridge University Press: 10 April 2019, pp. 1-13
Internet gaming disorder (IGD) is becoming a matter of concern around the world. However, the neural mechanism underlying IGD remains unclear. The purpose of this paper is to explore the differences between the neuronal network of IGD participants and that of recreational Internet game users (RGU).
Imaging and behavioral data were collected from 18 IGD participants and 20 RGU under a probability discounting task. The independent component analysis (ICA) and graph theoretical analysis (GTA) were used to analyze the data.
Behavioral results showed the IGD participants, compared to RGU, prefer risky options to the fixed ones and spent less time in making risky decisions. In imaging results, the ICA analysis revealed that the IGD participants showed stronger functional connectivity (FC) in reward circuits and executive control network, as well as lower FC in anterior salience network (ASN) than RGU; for the GTA results, the IGD participants showed impaired FC in reward circuits and ASN when compared with RGU.
These results suggest that IGD participants were more sensitive to rewards, and they were more impulsive in decision-making as they could not control their impulsivity effectively. This might explain why IGD participants cannot stop their gaming behaviors even when facing severe negative consequences.
Subaerial sulfate mineral formation related to acid aerosols at the Zhenzhu Spring, Tengchong, China
Lianchao Luo, Huaguo Wen, Rongcai Zheng, Ran Liu, Yi Li, Xiaotong Luo, Yaxian You
Journal: Mineralogical Magazine / Volume 83 / Issue 3 / June 2019
The Zhenzhu Spring, located in the Tengchong volcanic field, Yunnan, China, is an acid hot spring with high SO42− concentrations and intense acid aerosol generation. In order to understand the formation mechanism of sulfate minerals at the Zhenzhu Spring and provide a better insight into the sulfur isotope geochemistry of the associated Rehai hydrothermal system, we investigated the spring water hydrochemistry, mineralogy and major-element geochemistry of sulfate minerals at the Zhenzhu Spring together with the sulfur-oxygen isotope geochemistry of sulfur-containing materials at the Rehai geothermal field and compared the isotope results with those in other steam-heated environments. Subaerial minerals include a wide variety of sulfate minerals (gypsum, alunogen, pickeringite, tamarugite, magnesiovoltaite and a minor Mg–S–O phase) and amorphous SiO2. The δ34S values of the subaerial sulfate minerals at the Zhenzhu Spring varied subtly from –0.33 to 1.88‰ and were almost consistent with the δ34S values of local H2S (–2.6 to 0.6‰) and dissolved SO42− (–0.2 to 5.8‰), while the δ18O values (–8.94 to 20.1‰) were between that of the spring waters (–10.19 to –6.7‰) and atmospheric O2 (~23.88‰). The results suggest that most of the sulfate minerals are derived from the oxidation of H2S, similar to many sulfate minerals from modern steam-heated environments. However, the rapid environmental change (different ratio of atmospheric and water oxygen) at the Zhenzhu Spring accounts for the large variation of δ18O. The formation of subaerial sulfate minerals around the Zhenzhu Spring is related to acid aerosols (vapour and acid water droplets). The intense activity of spring water around vents supply the aerosol with H2SO4 (H2S oxidation and acid water droplets formed by bubble bursting) and few cations. Deposition of the acid sulfate aerosol forms the acid condensate, which attacks the underlying rocks and releases many cations and anions to form subaerial sulfate minerals at the Zhenzhu Spring.
$p$-ADIC $L$-FUNCTIONS FOR ORDINARY FAMILIES ON SYMPLECTIC GROUPS
Zheng Liu
Journal: Journal of the Institute of Mathematics of Jussieu , First View
Published online by Cambridge University Press: 14 January 2019, pp. 1-61
We construct the $p$-adic standard $L$-functions for ordinary families of Hecke eigensystems of the symplectic group $\operatorname{Sp}(2n)_{/\mathbb{Q}}$ using the doubling method. We explain a clear and simple strategy of choosing the local sections for the Siegel Eisenstein series on the doubling group $\operatorname{Sp}(4n)_{/\mathbb{Q}}$, which guarantees the nonvanishing of local zeta integrals and allows us to $p$-adically interpolate the restrictions of the Siegel Eisenstein series to $\operatorname{Sp}(2n)_{/\mathbb{Q}}\times \operatorname{Sp}(2n)_{/\mathbb{Q}}$.
Petrogenesis and tectonic implications of Late Mesoproterozoic A1- and A2-type felsic lavas from the Huili Group, southwestern Yangtze Block
Dong-Bing Wang, Bao-Di Wang, Fu-Guang Yin, Zhi-Ming Sun, Shi-Yong Liao, Yuan Tang, Liang Luo, Zheng Liu
Journal: Geological Magazine , First View
This paper presents new LA-ICP-MS zircon U–Pb chronology, whole-rock geochemical and zircon Hf isotopic data for the felsic lavas of the Huili Group from the southwestern Yangtze Block. LA-ICP-MS zircon U–Pb dating shows that these rocks were emplaced in Late Mesoproterozoic time (∼1028 to 1019 Ma). Relative to typical I-type and S-type granitoids, all the samples are characterized by low Sr and Eu, and high high-field-strength element contents, high TFeO/MgO, enriched rare earth element compositions and negative Eu anomalies, indicating that they share the geochemical signatures of A-type granitoid. They can be further divided into two groups: Group I and Group II. Group I are A1-type felsic rocks and were produced by fractional crystallization of alkaline basaltic magmas. The Group II felsic lavas belong to the A2-type and were derived by partial melting of a crustal source with mixing of mantle-derived magmas. Both Group I and Group II felsic lavas may erupt in a continental back-arc setting. The coexistence of A1- and A2-type rocks in the southwestern Yangtze Block suggests that they can occur in the same tectonic setting.
Leucine regulates α-amylase and trypsin synthesis in dairy calf pancreatic tissue in vitro via the mammalian target of rapamycin signalling pathway
L. Guo, J. H. Yao, C. Zheng, H. B. Tian, Y. L. Liu, S. M. Liu, C. J. Cai, X. R. Xu, Y. C. Cao
Journal: animal , First View
Published online by Cambridge University Press: 08 January 2019, pp. 1-8
Starch digestion in the small intestines of the dairy cow is low, to a large extent, due to a shortage of syntheses of α-amylase. One strategy to improve the situation is to enhance the synthesis of α-amylase. The mammalian target of rapamycin (mTOR) signalling pathway, which acts as a central regulator of protein synthesis, can be activated by leucine. Our objectives were to investigate the effects of leucine on the mTOR signalling pathway and to define the associations between these signalling activities and the synthesis of pancreatic enzymes using an in vitro model of cultured Holstein dairy calf pancreatic tissue. The pancreatic tissue was incubated in culture medium containing l-leucine for 3 h, and samples were collected hourly, with the control being included but not containing l-leucine. The leucine supplementation increased α-amylase and trypsin activities and the messenger RNA expression of their coding genes (P <0.05), and it enhanced the mTOR synthesis and the phosphorylation of mTOR, ribosomal protein S6 kinase 1 and eukaryotic initiation factor 4E-binding protein 1 (P <0.05). In addition, rapamycin inhibited the mTOR signal pathway factors during leucine treatment. In sum, the leucine regulates α-amylase and trypsin synthesis in dairy calves through the regulation of the mTOR signal pathways.
Amplitude modulation between multi-scale turbulent motions in high-Reynolds-number atmospheric surface layers
Hongyou Liu, Guohua Wang, Xiaojing Zheng
Journal: Journal of Fluid Mechanics / Volume 861 / 25 February 2019
Published online by Cambridge University Press: 27 December 2018, pp. 585-607
Long-term measurements were performed at the Qingtu Lake Observation Array site to obtain high-Reynolds-number atmospheric surface layer flow data ($Re_{\tau}\sim O(10^{6})$). Based on the selected high-quality data in the near-neutral surface layer, the amplitude modulation between multi-scale turbulent motions is investigated under various Reynolds number conditions. The results show that the amplitude modulation effect may exist in specific motions rather than at all length scales of motion. The most energetic motions with scales larger than the wavelength of the lower wavenumber peak in the energy spectra play a vital role in the amplitude modulation effect; the small scales shorter than the wavelength of the higher wavenumber peak are strongly modulated, whereas the motions with scales ranging between these two peaks neither contribute significantly to the amplitude modulation effect nor are strongly modulated. Based on these results, a method of decomposing the fluctuating velocity is proposed to accurately estimate the degree of amplitude modulation. The corresponding amplitude modulation coefficient is much larger than that estimated by establishing a nominal cutoff wavelength; moreover, it increases log-linearly with the Reynolds number. An empirical model is proposed to parametrize the variation of the amplitude modulation coefficient with the Reynolds number and the wall-normal distance. This study contributes to a better understanding of the interaction between multi-scale turbulent motions and the results may be used to validate and improve existing numerical models of high-Reynolds-number wall turbulence.
Non-destructive detection and classification of in-shell insect-infested almonds based on multispectral imaging technology
J. Yu, S. Ren, C. Liu, B. Wei, L. Zhang, S. Younas, L. Zheng
Journal: The Journal of Agricultural Science / Volume 156 / Issue 9 / November 2018
Published online by Cambridge University Press: 11 February 2019, pp. 1103-1110
The feasibility of non-destructive detection and classification of in-shell insect-infested almonds was examined by using multispectral imaging (MSI) technology combined with chemometrics. Differentiation of reflectance spectral data between intact and insect-infested almonds was attempted by using analytical approaches based on principal component analysis and support vector machines, classification accuracy rates as high as 99.1% in the calibration set and 97.5% in the prediction set were achieved. Meanwhile, the in-shell almonds were categorized into three classes (intact, slightly infested and severely infested) based on the degree of damage caused by insect infestation and were characterized quantitatively by the analysis of shell/kernel weight ratio. A three-class model for the identification of intact, slightly infested and severely infested almonds yielded acceptable classification performance (95.6% accuracy in the calibration set and 93.3% in the prediction set). These results revealed that MSI technology combined with chemometrics may be a promising approach for the non-destructive detection of hidden insect damage in almonds and could be used for industrial applications.
Layer pullet preferences for light colors of light-emitting diodes
G. Li, B. Li, Y. Zhao, Z. Shi, Y. Liu, W. Zheng
Journal: animal / Volume 13 / Issue 6 / June 2019
Published online by Cambridge University Press: 12 October 2018, pp. 1245-1251
Light colors may affect poultry behaviors, well-being and performance. However, preferences of layer pullets for light colors are not fully understood. This study was conducted to investigate the pullet preferences for four light-emitting diode colors, including white, red, green and blue, in a lighting preference test system. The system contained four identical compartments each provided with a respective light color. The pullets were able to move freely between the adjacent compartments. A total of three groups of 20 Chinese domestic Jingfen layer pullets (54 to 82 days of age) were used for the test. Pullet behaviors were continuously recorded and summarized for each light color/compartment into daily time spent (DTS), daily percentage of time spent (DPTS), daily times of visit (DTV), duration per visit, daily feed intake (DFI), daily feeding time (DFT), feeding rate (FR), distribution of pullet occupancy and hourly time spent. The results showed that the DTS (h/pullet·per day) were 3.9±0.4 under white, 1.4±0.3 under red, 2.2±0.3 under green and 4.5±0.4 under blue light, respectively. The DTS corresponded to 11.7% to 37.6% DPTS in 12-h lighting periods. The DTV (times/pullet·per day) were 84±5 under white, 48±10 under red, 88±10 under green and 94±8 under blue light. Each visit lasted 1.5 to 3.2 min. The DFI (g/pullet·per day) were 27.6±1.7 under white, 7.1±1.6 under red, 15.1±1.1 under green and 23.1±2.0 under blue light. The DFT was 0.18 to 0.65 h/pullet·per day and the FR was 0.57 to 0.75 g/min. For most of the time during the lighting periods, six to 10 birds stayed under white, and one to five birds stayed under red, green and blue light. Pullets preferred to stay under blue light when the light was on and under white light 4 h before the light off. Overall, pullets preferred blue light the most and red light the least. These findings substantiate the preferences of layer pullets for light colors, providing insights for use in the management of light-emitting diode colors to meet pullet needs.
An electronic antimicrobial stewardship intervention reduces inappropriate parenteral antibiotic therapy
Sean T. H. Liu, Mark J. Bailey, Allen Zheng, Patricia Saunders-Hao, Adel Bassily-Marcus, Maureen Harding, Meenakshi Rana, Roopa Kohli-Seth, Gopi Patel, Shirish Huprikar, Talia H. Swartz
Journal: Infection Control & Hospital Epidemiology / Volume 39 / Issue 11 / November 2018
Published online by Cambridge University Press: 20 September 2018, pp. 1396-1397
Development and characterization of Triticum turgidum–Aegilops umbellulata amphidiploids
Zhongping Song, Shoufen Dai, Yanni Jia, Li Zhao, Liangzhu Kang, Dengcai Liu, Yuming Wei, Youliang Zheng, Zehong Yan
Journal: Plant Genetic Resources / Volume 17 / Issue 1 / February 2019
Published online by Cambridge University Press: 18 September 2018, pp. 24-32
Print publication: February 2019
The U genome of Aegilops umbellulata is an important basic genome of genus Aegilops. Direct gene transfer from Ae. umbellulata into wheat is feasible but not easy. Triticum turgidum–Ae. umbellulata amphidiploids can act as bridges to circumvent obstacles involving direct gene transfer. Seven T. turgidum–Ae. umbellulata amphidiploids were produced via unreduced gametes for spontaneous doubling of chromosomes of triploid T. turgidum–Ae. umbellulata F1 hybrid plants. Seven pairs of U chromosomes of Ae. umbellulata were distinguished by fluorescence in situ hybridization (FISH) probes pSc119.2/(AAC)5 and pTa71. Polymorphic FISH signals were detected in three (1U, 6U and 7U) of seven U chromosomes of four Ae. umbellulata accessions. The chromosomes of the tetraploid wheat parents could be differentiated by probes pSc119.2 and pTa535, and identical FISH signals were observed among the three accessions. All the parental chromosomes of the amphidiploids could be precisely identified by probe combinations pSc119.2/pTa535 and pTa71/(AAC)5. The T. turgidum–Ae. umbellulata amphidiploids possess valuable traits for wheat improvement, such as strong tillering ability, stripe rust resistance and seed size-related traits. These materials can be used as media in gene transfers from Ae. umbellulata into wheat.
Nanoscale magnetization reversal by electric field-induced ion migration
Qilai Chen, Gang Liu, Shuang Gao, Xiaohui Yi, Wuhong Xue, Minghua Tang, Xuejun Zheng, Run-Wei Li
Journal: MRS Communications / Volume 9 / Issue 1 / March 2019
Nanoscale magnetization modulation by electric field enables the construction of low-power spintronic devices for information storage and related applications. Electric field-induced ion migration can introduce desired changes in the material's stoichiometry, defect profile, and lattice structure, which in turn provides a versatile and convenient means to modify the materials' chemical-physical properties at the nanoscale and in situ. In this review, we provide a brief overview of the recent study on nanoscale magnetization modulation driven by electric field-induced migration of ionic species either within the switching material or from external sources. The formation of magnetic conductive filaments that exhibit magnetoresistance behaviors in resistive switching memory via foreign metal ion migration and redox activities is also discussed. Combining the magnetoresistance and quantized conductance switching of the magnetic nanopoint contact structure may provide a future high-performance device for non-von Neumann computing architectures.
Prediction of psychosis in prodrome: development and validation of a simple, personalized risk calculator
TianHong Zhang, LiHua Xu, YingYing Tang, HuiJun Li, XiaoChen Tang, HuiRu Cui, YanYan Wei, Yan Wang, Qiang Hu, XiaoHua Liu, ChunBo Li, Zheng Lu, Robert W. McCarley, Larry J. Seidman, JiJun Wang, on behalf of the SHARP (ShangHai At Risk for Psychosis) Study Group
Published online by Cambridge University Press: 14 September 2018, pp. 1-9
This study aim to derive and validate a simple and well-performing risk calculator (RC) for predicting psychosis in individual patients at clinical high risk (CHR).
From the ongoing ShangHai-At-Risk-for-Psychosis (SHARP) program, 417 CHR cases were identified based on the Structured Interview for Prodromal Symptoms (SIPS), of whom 349 had at least 1-year follow-up assessment. Of these 349 cases, 83 converted to psychosis. Logistic regression was used to build a multivariate model to predict conversion. The area under the receiver operating characteristic (ROC) curve (AUC) was used to test the effectiveness of the SIPS-RC. Second, an independent sample of 100 CHR subjects was recruited based on an identical baseline and follow-up procedures to validate the performance of the SIPS-RC.
Four predictors (each based on a subset of SIPS-based items) were used to construct the SIPS-RC: (1) functional decline; (2) positive symptoms (unusual thoughts, suspiciousness); (3) negative symptoms (social anhedonia, expression of emotion, ideational richness); and (4) general symptoms (dysphoric mood). The SIPS-RC showed moderate discrimination of subsequent transition to psychosis with an AUC of 0.744 (p < 0.001). A risk estimate of 25% or higher had around 75% accuracy for predicting psychosis. The personalized risk generated by the SIPS-RC provided a solid estimate of conversion outcomes in the independent validation sample, with an AUC of 0.804 [95% confidence interval (CI) 0.662–0.951].
The SIPS-RC, which is simple and easy to use, can perform in the same manner as the NAPLS-2 RC in the Chinese clinical population. Such a tool may be used by clinicians to appropriately counsel their patients about clinical monitoring versus potential treatment options.
β-Casomorphin increases fat deposition in broiler chickens by modulating expression of lipid metabolism genes
W. H. Chang, A. J. Zheng, Z. M. Chen, S. Zhang, H. Y. Cai, G. H. Liu
Journal: animal / Volume 13 / Issue 4 / April 2019
β-Casomorphin is an opioid-like bioactive peptide derived from β-casein of milk that plays a crucial role in modulating animal's feed intake, growth, nutrient utilization and immunity. However, the effect of β-casomorphin on lipid metabolism in chickens and its mechanism remain unclear. The aim of this study was to investigate the effects of β-casomorphin on fat deposition in broiler chickens and explore its mechanism of action. A total of 120 21-day-old Arbor Acres male broilers (747.94±8.85 g) was chosen and randomly divided into four groups with six replicates of five birds per replicate. Three groups of broilers were injected with 0.1, 0.5 or 1.0 mg/kg BW of β-casomorphin in 1 ml saline for 7 days, whereas the control group received 1 ml saline only. The results showed that subcutaneous administration of β-casomorphin to broiler chickens increased average daily gain, average daily feed intake and fat deposition, and decreased feed : gain ratio (P<0.05). The activity of malate dehydrogenase in the pectoral muscle, liver and abdominal adipose tissue was also increased along with the concentrations of insulin, very-low-density lipoprotein and triglyceride in the plasma (P<0.05). The activity of hormone-sensitive lipase in the liver and abdominal adipose tissue and the concentration of glucagon in the plasma were decreased by injection with β-casomorphin (P<0.05). Affymetrix gene chip analysis revealed that administering 1.0 mg/kg BW β-casomorphin caused differential expression of 168 genes in the liver with a minimum of fourfold difference. Of those, 37 genes are directly involved in lipid metabolism with 18 up-regulated genes such as very low density lipoprotein receptor gene and fatty acid synthase gene, and 19 down-regulated genes such as lipoprotein lipase gene and low density lipoprotein receptor gene. In conclusion, β-casomorphin increased growth performance and fat deposition of broilers. Regulation of fat deposition by β-casomorphin appears to take place through changes in hormone secretion and enzyme activities by controlling the gene expression of lipid metabolism and feed intake, increasing fat synthesis and deposition.
Ultrastructure of Female Antennal Sensilla of an Endoparasitoid Wasp, Quadrastichus mendeli Kim & La Salle (Hymenoptera: Eulophidae: Tetrastichinae)
Zong-You Huang, Yu-Jing Zhang, Jun-Yan Liu, Zhen-De Yang, Wen Lu, Xia-Lin Zheng
The antennal sensilla of female Quadrastichus mendeli Kim & La Salle (Hymenoptera: Eulophidae: Tetrastichinae) were observed with scanning and transmission electron microscopy in this study. The antenna of Q. mendeli was geniculate, and the flagellum was composed of seven subsegments. Six distinct types of sensory receptors were observed, including sensilla basiconic capitate peg, sensilla böhm, sensilla chaetica, sensilla campaniformia, sensilla placodea and sensilla trichodea. Sensilla basiconic capitate pegs were found on the flagellomeres, and Böhm sensilla were found on the basal part of scape and the pedicel. Two morphological subtypes of sensilla chaetica were found on the antennae, and sensilla campaniformia were only found on the pedicel. Sensilla placodea were divided into two morphological subtypes that were found on the flagellomeres. Sensilla trichodea were found on the 2nd–6th flagellomere. By comparison to existing antennal sensilla, it was found that sensilla basiconic capitate peg, sensilla chaetica, sensilla placodea and sensilla trichodea were the most common sensilla of the parasitoids of Eulophidae. The external and internal morphology, types, number, distribution, length, and width of these sensilla were described, and their possible functions are discussed in conjunction with the host-detection behavior. Future studies on the host location mechanisms in Q. mendeli will be facilitated by these observations. | CommonCrawl |
Title:Search for Sterile Neutrinos Mixing with Muon Neutrinos in MINOS
Authors:P. Adamson, I. Anghel, A. Aurisano, G. Barr, M. Bishai, A. Blake, G. J. Bock, D. Bogert, S. V. Cao, T. J. Carroll, C. M. Castromonte, R. Chen, S. Childress, J. A. B. Coelho, L. Corwin, D. Cronin-Hennessy, J. K. de Jong, S. De Rijck, A. V. Devan, N. E. Devenish, M. V. Diwan, C. O. Escobar, J. J. Evans, E. Falk, G. J. Feldman, W. Flanagan, M. V. Frohne, M. Gabrielyan, H. R. Gallagher, S. Germani, R. A. Gomes, M. C. Goodman, P. Gouffon, N. Graf, R. Gran, K. Grzelak, A. Habig, S. R. Hahn, J. Hartnell, R. Hatcher, A. Holin, J. Huang, J. Hylen, G. M. Irwin, Z. Isvan, C. James, D. Jensen, T. Kafka, S. M. S. Kasahara, G. Koizumi, M. Kordosky, A. Kreymer, K. Lang, J. Ling, P. J. Litchfield, P. Lucas, W. A. Mann, M. L. Marshak, N. Mayer, C. McGivern, M. M. Medeiros, R. Mehdiyev, J. R. Meier, M. D. Messier, W. H. Miller, S. R. Mishra, S. Moed Sher, C. D. Moore, L. Mualem, J. Musser, D. Naples, J. K. Nelson, H. B. Newman, R. J. Nichol, J. A. Nowak, J. O'Connor, M. Orchanian, R. B. Pahlka, J. Paley, R. B. Patterson, G. Pawloski, A. Perch, M. M. Pfutzner, D. D. Phan, S. Phan-Budd, R. K. Plunkett, N. Poonthottathil, X. Qiu, A. Radovic, B. Rebel, C. Rosenfeld, H. A. Rubin, P. Sail, M. C. Sanchez, J. Schneps, A. Schreckenberger, P. Schreiner, R. Sharma, A. Sousa, N. Tagg
, R. L. Talaga, J. Thomas, M. A. Thomson, X. Tian, A. Timmons, J. Todd, S. C. Tognini, R. Toner, D. Torretta, G. Tzanakos, J. Urheim, P. Vahle, B. Viren, A. Weber, R. C. Webb, C. White, L. Whitehead, L. H. Whitehead, S. G. Wojcicki, R. Zwaska
et al. (20 additional authors not shown)
(Submitted on 5 Jul 2016 (v1), last revised 10 Oct 2016 (this version, v4))
Abstract: We report results of a search for oscillations involving a light sterile neutrino over distances of 1.04 and $735\,\mathrm{km}$ in a $\nu_{\mu}$-dominated beam with a peak energy of $3\,\mathrm{GeV}$. The data, from an exposure of $10.56\times 10^{20}\,\textrm{protons on target}$, are analyzed using a phenomenological model with one sterile neutrino. We constrain the mixing parameters $\theta_{24}$ and $\Delta m^{2}_{41}$ and set limits on parameters of the four-dimensional Pontecorvo-Maki-Nakagawa-Sakata matrix, $|U_{\mu 4}|^{2}$ and $|U_{\tau 4}|^{2}$, under the assumption that mixing between $\nu_{e}$ and $\nu_{s}$ is negligible ($|U_{e4}|^{2}=0$). No evidence for $\nu_{\mu} \to \nu_{s}$ transitions is found and we set a world-leading limit on $\theta_{24}$ for values of $\Delta m^{2}_{41} \lesssim 1\,\mathrm{eV}^{2}$.
Comments: 7 pages, 4 figures
Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Phys. Rev. Lett. 117, 151803 (2016)
DOI: 10.1103/PhysRevLett.117.151803
Report number: FERMILAB-PUB-16-233-ND
Cite as: arXiv:1607.01176 [hep-ex]
(or arXiv:1607.01176v4 [hep-ex] for this version)
From: Justin Evans [view email]
[v1] Tue, 5 Jul 2016 10:17:34 UTC (176 KB)
[v2] Wed, 6 Jul 2016 17:06:46 UTC (179 KB)
[v3] Fri, 8 Jul 2016 10:42:06 UTC (180 KB)
[v4] Mon, 10 Oct 2016 13:19:03 UTC (178 KB) | CommonCrawl |
Structure of pauses in speech in the context of speaker verification and classification of speech type
Magdalena Igras-Cybulska, Bartosz Ziółko, Piotr Żelasko & Marcin Witkowski
Statistics of pauses appearing in Polish are described as a potential source of biometric information for automatic speaker recognition. The usage of three main types of acoustic pauses (silent, filled and breath pauses) and syntactic pauses (punctuation marks in speech transcripts) was investigated quantitatively in three types of spontaneous speech (presentations, simultaneous interpretation and radio interviews) and in read speech (audio books). Selected parameters of pauses, extracted for each speaker separately or for speaker groups, were examined statistically to verify the usefulness of information on pauses for speaker recognition and speaker profile estimation. The quantity and duration of filled pauses and audible breaths, and the correlation between the temporal structure of speech and the syntactic structure of the spoken language, were the features that characterized speakers most. An experiment on using pauses in a speaker biometry system (based on a Universal Background Model and i-vectors) resulted in a 30 % equal error rate. Adding pause-related features to the baseline Mel-frequency cepstral coefficient system did not significantly improve its performance. In an experiment on automatic recognition of three types of spontaneous speech, we achieved 78 % accuracy using a GMM classifier. Silent pause-related features allowed distinguishing between read and spontaneous speech by extreme gradient boosting with 75 % accuracy.
A set of common disfluencies interferes with discourse consistency in spontaneous speech. The most important ones are filled pauses, restarts, changes of syntax during the utterance, and inclusions of intervening sentences. Within words, the most frequent disfluencies are repetitions, repairs and prolongations of conjunctions, prepositions, and final syllables. While human listeners can focus on the meaning of an utterance and extract the desired information, an automatic speech recognition system literally recognizes the whole acoustic content of the speech signal. As a result, the transcription is cluttered with notation of disfluencies and slips of the tongue, yet lacks other types of information present in the signal, such as punctuation. Pause detection is usually used only to extract voice activity regions for further processing or to remove undesirable disfluencies. On the other hand, information on pause patterns can significantly enrich the high-level description of the speech signal. In recent years, the analysis of multi-layered linguistic and paralinguistic metadata of recordings has received focused attention [1].

We assume that pause properties in the speech signal are strongly individualized between speakers and influenced by the situational context and the cognitive task. This study aims to verify whether information on pauses can be useful for speaker biometry systems (experiment 1), for recognition of different types of spontaneous speech (experiment 2), and for distinguishing between read and spontaneous speech (experiment 3).
This information is meaningful for creating a psycho-social profile of the speaker. Additionally, it helps in discourse analysis for different kinds of situational context or linguistic task [27]. The types of speech differ in situational context, in the cognitive load of the task, and in the level of spontaneity, and these factors have a direct impact on speech fluency. The first type, presentations prepared on a given subject, represents typical informative speech in a formal situation. The second type, oral translations performed by professional interpreters, is partly imitative of the original speech; the simultaneity of listening and speaking engages complex cognitive functions. The third type, radio interviews, represents spontaneous speech extracted from dialogues with slow turn-taking, mainly storytelling (indicating a more informal situation). Although the pauses in each type of speech have been characterized in numerous analyses (e.g., [33–35]), there has been no automatic classification of speech type based on silent pause-related cues only. In a similar study dealing with three classes of spontaneous speech [32], a comparable accuracy was obtained, though many more features were utilized.

Research on the recognition of read and spontaneous speech can have an impact in the field of automatic assessment of a speaker's preparation for the task and elocution abilities. In this application, feedback on the similarity to fluent read speech would help the speaker improve their oratorical skills.

Modeling of pauses in spoken language can also be applied to more natural-sounding speech synthesis systems. The impact of pause analysis on speech technology is particularly important for spontaneous speech recognition, which remains a challenging task [28]. Some results of the presented work have already been used to build pause models for the automatic speech recognition system developed at AGH University and Techmo [29].

The paper is organized as follows: in the rest of Section 1, the background on the appearance and role of pauses in speech is presented and the state of the art of speaker recognition systems is briefly discussed. Next, the collected database is described in Section 2. In Section 3, we summarize the adopted method of database processing, feature extraction and statistical tools. Section 4 contains the results of our experiments, which are discussed in Section 5. The paper is concluded in Section 6.
Research distinguishes three types of acoustic pauses in spoken language. The most intuitive are silent pauses (s_p): regions of the signal where no voice activity is recorded.

The second type is filled pauses (f_p): pseudo-words that do not affect sentence meaning, like yyy, eee, hmm, mmm, ym, yh (in SAMPA notation: III, eee, xmm, mmm, Im, Ix), but perturb utterance fluency. The sounds of filled pauses are specific to the language (in Polish, the most common are yyy/yh and mmm, while for English it is um) and to the speaker's habits. They can appear as often as 10–20 times per minute in the case of inexperienced speakers.

The third type of pause that we consider is the breath pause (b_p). Under normal physiological conditions, the breathing rate is 12–20 breaths per minute at rest and, as prior work showed, 10–12 during speech production [2].

Considering the origin of pause usage, we distinguish: (1) regular natural pauses caused by respiratory activity (breath pauses), (2) irregular intentional pauses, purposely used as a stylistic device, especially by professional speakers (silent pauses), and (3) irregular, unintentional disfluencies resulting from uncertainty, hesitations or short reflections (acoustic events such as silent pauses or filled pauses).
Pauses vs. paralinguistic information
Depending on the speaker and the situational/social context, pauses may exhibit different properties. One influencing factor is the speaker's personality and speaking habits. Another important factor is the speaker's preparation for the task, level of oratorical skill, and elocution abilities. Durations of pauses also depend on the kind of linguistic task. One can easily assume that stress during speaking is an important factor affecting the frequency and length of pauses. Pauses can also be considered in terms of the performative aspects of speech. Filled pauses, among other disfluencies, were successfully used for recognition of three levels of spontaneity and applied to speaker role recognition with over 70 % precision [32]. Pauses have also been described as traces of cognitive activity or a mirror of cognitive processes. In the situation of simultaneous interpretation, they were studied in [33] and [34].
Pause duration was reported to correlate with social attributes of speaker, even ones such as region, ethnicity, age, and gender [3]. Cross-cultural study of silent pauses in selected European languages (Polish was not included) revealed differences in pause durations between languages [4], but their distribution is usually similar and can be well estimated by bi-Gaussian model [5].
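For reference, the "bi-Gaussian" model mentioned above is, in our reading (an assumption of this note, not a formula quoted from [5]), a two-component Gaussian mixture, typically fitted to log-durations $x = \log t$ of silent pauses:

$p(x) = w\,\mathcal{N}(x;\mu_1,\sigma_1^2) + (1-w)\,\mathcal{N}(x;\mu_2,\sigma_2^2), \qquad 0 < w < 1,$

where the two components correspond roughly to short and long pauses, and the weight $w$, means $\mu_i$ and variances $\sigma_i^2$ are estimated per language or per speaker.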
Some medical aspects of different types of pauses were investigated in context of affective state [6] and physical [7] or mental [8] condition of the speaker, e.g. schizophrenics make pauses around 10 % more often, which are also around 10 % longer [8].
In speech technology, information on pauses is used in majority of algorithms of automatic punctuation detection [9, 10]. It has been shown that 95 % of silent pauses longer than 350 ms are the sentence boundaries [11].
Pausing behavior in speech, although conditioned by articulatory processes, was shown to be partly related to cognitive processes [12]. This implies that it can be changed by learning. Nevertheless, this was demonstrated only for grammatical pauses, while for ungrammatical ones the opposite was observed.

Speaker recognition is the process of analyzing speaker identity based on voice characteristics. The main tasks of a speaker recognition system include verification and/or identification. The aim of identification is to choose one of many speakers based on a speech signal, whereas verification is the process of determining whether the claimed speaker identity is correct. Depending on the particular usage scenario, these systems may be divided into text-dependent and text-independent ones. A text-dependent system assumes that the recognition process is based on a specific fixed phrase, i.e., each analyzed recording contains the same sentence. In the text-independent scenario, speakers may be identified or verified from a random utterance [13]. The second type of system is more challenging, since it is much more complicated due to phonetically mismatched voice samples in the training and recognition phases.

Automatic speaker recognition systems consist of two main functionalities: enrollment and verification. During the enrollment phase, a voiceprint or model of a speaker is calculated from features extracted from voice samples. Verification is based on comparing the processed input speech signal against the previously enrolled speaker model. There are many discriminative features that may be used to distinguish a speaker. Low-level features, like formants or energy, contain information connected with voice generation. Mel-frequency cepstral coefficients (MFCCs) are used most frequently to parameterize voice signals [14, 21, 24].
Pause-related features in speaker recognition task
In the past several years, there has been an observable tendency to include prosodic features in the speaker recognition problem. High-level features are associated with the linguistic and behavioral characteristics of each speaker [15]. In the majority of approaches, features related to pitch, energy, and segmental duration were investigated [16–18], and by including these parameters the system accuracy increased by about 10 %. Peskin et al. [16] also experimented with pause duration and frequency and found that the pause-related feature set was the least significant compared to other groups of prosodic features. In the experiment of Sönmez et al. [17], pause duration was modeled with a shifted exponential distribution and, together with voiced segment duration, gave a 3.5 % improvement in the speaker recognition task. It was also proved that patterns of pauses in network traffic introduced when encoding an audio signal are speaker-specific, and that they are sufficient to weaken the anonymity of the speaker in encrypted voice communication [19].
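As a side note (our formulation, not one quoted from [17]), a shifted exponential model for pause duration $t$ has the density

$p(t) = \lambda\, e^{-\lambda (t - t_0)}, \qquad t \ge t_0,$

where $t_0$ is the minimum pause duration admitted by the detector and $\lambda$ is a rate parameter that can be estimated per speaker.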
Therefore, the goal of this research is to check whether pauses may be used as one of the high-level factors and potentially improve existing systems. To the best of the authors' knowledge, pause features have not yet been directly used in any speaker recognition system, in particular in any Polish one.
The prepared corpus of spontaneous Polish speech consisted of different types of monologues in formal or semi-formal situations. Total duration of recordings is 120 min, including utterances of 30 speakers (16 male, 14 female). Among them, there are both experienced or professional speakers (politicians, professors, professional translators, radio interviewees) and inexperienced speakers (students) [38].
The first group of recordings (30 min) is formed by utterances from orations or public presentations: speeches and reports from European Parliament [20], sessions of a faculty council, students' lectures, and reviews. All the speeches, although preceded by preparation of the speakers or supported by slides, were not read and are characterized by all the typical features of spontaneous speech.
The second part of the corpus (30 min) consisted of recordings of real-time translation of orations during European Parliament sessions [20]. This sort of utterance is a specific kind of spontaneous speech, in which the speech rate of the translator is determined by the style of the speaker being translated. Nevertheless, the translators formulate their own utterances, which gives the speech a spontaneous character and induces imperfections specific to spontaneous speech.

The third type of recordings (60 min) is radio broadcasts, which were prepared by removing the voice of the interviewer, leaving only the utterances of the interviewees. The length of the recordings after preparation was 10 min for each speaker (three females and three males).
Another corpus of read speech was prepared, for comparison with spontaneous speech and for evaluation tests. It consisted of recordings from audio books and AGH Audio-Visual Speech Database (50 speakers, 15 min of continuous speech for each speaker).
Since the recordings originate from different sources, they vary in quality, in the type and level of background noise, and in SNR. The diversified recording conditions (equipment, environment, transmission channel variability, and distance from the speaker to the microphone) determined whether the signal contained the events of interest (e.g., recordings with low SNR or too large a mouth-to-microphone distance do not contain information on breath pauses).
The recordings were labeled with P for presentations/orations, T for translations, R for radio dialogues, and A for audiobooks and other sources of read speech, and described with the number of the speaker and the duration of the utterance (in minutes).
Pauses tagging and annotation
First, we transcribed the content of the recordings orthographically into clean (skipping disfluencies, filled pauses, or repairs) and syntactically correct texts. Based on observation of this process, we collected the factors affecting the imprecision and ambiguity of inserting punctuation in the transcripts. One impediment was ambiguous intonation, especially in the case of inexperienced speakers. It manifested as an "enumerating" tone of voice, in which the speaker kept the same tone at commas and full stops. Another symptom was the construction of long complex sentences, with every clause starting with a conjunction pronounced with extended phonation. In such cases, the decision to insert a comma or a full stop remained subjective. When a speaker did not signal phrase and sentence boundaries by pronunciation, intonation, or pauses, the punctuation was based on the meaning of the utterance. The last word of the preceding sentence was often bonded with the first word of the next one. In the translators group, we usually observed a specific disorder of phonotactics involving artificial prolongations of whole words. Transpositions of functional elements of sentences and reorganization of the sentence were also frequent. It is common for inexperienced speakers to insert parenthetical sentences during the speech or to overuse certain words like let's say, just, simply (language-specific conversational fillers/discourse markers).
For each transcription, the numbers of words, full stops, and commas were counted. Then, the statistics of sentence and phrase lengths were computed: the mean duration of a sentence and of a phrase, as well as the mean number of words in sentences and phrases. Next, at the places of punctuation signs, occurrences of pauses were verified. When a full stop was signaled by a silent pause, the time was tagged as s_p. (similarly for commas, s_p,); a filled pause was tagged as f_p. (commas, f_p,), and a breath pause as b_p. (b_p, for commas). When no type of pause appeared, the place was tagged as n_p. (n_p,). The parameters were included in the feature vectors (Table 1). Time annotation of breaths and filled pauses was prepared manually with the semi-automated Annotator software. As a result, Master Label Files (mlf, HTK standard) were attached to each recording.
Table 1 Explanation of adopted abbreviations
In order to find silent pauses in all recordings, an ITU-T G.729b-compliant voice activity detector (VAD) was used, which relies on full-band energy, low-band energy, zero-crossing rate, and a spectral measure to decide whether a 10-ms segment contains voice. Silent pauses were detected with different lower duration thresholds: 100, 150, and 200 ms.
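To make the extraction step concrete, the following minimal Python sketch (our own illustration, not the G.729b reference code) groups consecutive non-speech VAD decisions into silent pauses no shorter than a chosen threshold; the function and variable names are assumptions.

```python
import numpy as np

def silent_pauses(vad_frames, frame_ms=10, min_pause_ms=150):
    """Group consecutive non-speech VAD frames into silent pauses.

    vad_frames : 1-D boolean array, True = frame contains voice (10-ms frames).
    Returns a list of (start_s, duration_s) tuples for pauses >= min_pause_ms.
    """
    pauses = []
    run_start = None
    for i, voiced in enumerate(np.append(vad_frames, True)):  # sentinel closes the last run
        if not voiced and run_start is None:
            run_start = i
        elif voiced and run_start is not None:
            dur_ms = (i - run_start) * frame_ms
            if dur_ms >= min_pause_ms:
                pauses.append((run_start * frame_ms / 1000.0, dur_ms / 1000.0))
            run_start = None
    return pauses

# Example: pause statistics of the kind used later as features f1 and f2
# vad = np.array([...], dtype=bool)                       # hypothetical VAD output
# p = silent_pauses(vad, min_pause_ms=150)
# f1 = 60.0 * len(p) / (len(vad) * 0.010)                 # pauses per minute
# f2 = 100.0 * sum(d for _, d in p) / (len(vad) * 0.010)  # % of time spent in pauses
```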
For each speaker, the number and duration of each type of pause were used to calculate the number of pauses per minute, the percentage of pause duration in the recording, and the mean pause duration with its standard deviation, as well as its median and quartiles. The parameters are listed and explained in Table 1.
Values of the extracted features f_n (where n = 1,…,P) were standardized according to the equation
$$ \overline{f_{n,s}}=\frac{f_{n,s}-\mu\left(f_n\right)}{\sigma\left(f_n\right)}, $$
where \( \overline{f_{n,s}} \) is the normalized value of the feature f_n for the s-th speaker, and μ and σ are the mean and standard deviation of the variable f_n in the examined population.
The properties of speaker s are specified by the vector p_s = (f_{1,s}, f_{2,s}, …, f_{P,s}) of length P, and its distance to the mean-value vector p is computed as
$$ \gamma_s=\frac{1}{P}\sum_{n=1}^P\left|\overline{f_{n,s}}\right|. $$
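A compact NumPy sketch of the two formulas above, assuming the raw features are stored in an N × P matrix with speakers in rows (the file name and layout are our assumptions):

```python
import numpy as np

def standardize(F):
    """Column-wise z-score, per formula (1): (f - mean) / std.
    F is an (N_speakers x P_features) matrix."""
    return (F - F.mean(axis=0)) / F.std(axis=0)

def gamma(F_std):
    """Distance of each speaker to the population mean, per formula (2):
    mean absolute standardized feature value."""
    return np.abs(F_std).mean(axis=1)

# F = np.loadtxt("pause_features.csv", delimiter=",")   # hypothetical 30 x 57 matrix
# gammas = gamma(standardize(F))                        # one value per speaker
```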
In order to investigate potential correlations between speakers and the parameters describing pauses in their speech, we computed the correlation matrix
$$ \mathrm{Corr}=\sum_{s=1}^N\left(p-p_s\right)\left(p-p_s\right)^T, $$
where p is the average vector over all speakers, and p_s is the vector that characterizes the s-th speaker. In the experiment, the number of speakers N is equal to 30. We performed the operation for the P = 57 parameters listed in Table 1.
For the parameters which could be obtained for every speaker in our corpus (i.e., b_p and f_p(y) durations), we conducted an analysis of variance (ANOVA) in order to check the statistical significance of differences between speakers based on only one of these parameters at a time. For the clustering experiment, we used the dendrogram method based on the Euclidean metric.
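The statistical tests can be reproduced with SciPy along the following lines; the data here are synthetic stand-ins, so only the calling pattern, not the numbers, reflects the study:

```python
import numpy as np
from scipy.stats import f_oneway
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Synthetic stand-in data: 30 speakers, each with ~50 breath durations (seconds).
breath_durations = [rng.normal(0.30 + 0.01 * s, 0.10, size=50) for s in range(30)]

# One-way ANOVA: are the mean breath durations equal across speakers?
F_stat, p_value = f_oneway(*breath_durations)
print(f"ANOVA: F = {F_stat:.1f}, p = {p_value:.2e}")

# Hierarchical clustering of speakers from a standardized feature matrix
# (random here; in the paper it would be the 30 x 57 pause-feature matrix).
F_std = rng.standard_normal((30, 57))
Z = linkage(pdist(F_std, metric="euclidean"), method="average")
tree = dendrogram(Z, no_plot=True)   # leaf order shows which speakers cluster together
print("leaf order:", tree["leaves"][:10], "...")
```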
The scope of the first experiment was to verify whether information on pauses can enhance a biometric verification system; therefore, an i-vector-based system was set up as a baseline to perform the evaluation process. The i-vector approach assumes the creation of a Universal Background Model (UBM) from a vast amount of data during the setup phase. This is done by maximum likelihood estimation of a Gaussian Mixture Model (GMM), using the Expectation Maximization (EM) algorithm with k-means initialization. The next stage of i-vector modeling is the transition from the GMM supervector space into a low-dimensional subspace, which is able to represent a whole utterance as a vector of coordinates, called the i-vector. To that end, a transformation matrix, called the Total Variability (TV) matrix, is estimated, also using a maximum likelihood algorithm [30].
The aim of the UBM is to represent the common characteristics of all possible speakers, and the role of the dimensionality-reducing transformation is to select only those relevant for a given speaker. In the enrollment process of the baseline system, the recording is first segmented into 20-ms frames and parameterized by Mel-frequency cepstral coefficients (MFCC), followed by feature warping [22]. Then, a model of each system user is acquired by calculating frame posteriors using the GMM-UBM and extracting i-vectors with a variational Bayes algorithm [30]. The enrolled model represents unique biometric features of a given speaker. The Pauses system models each user by extracting prosodic features from a larger recording segment, and thus requires at least 1 min of audio for enrollment and verification.
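The full i-vector pipeline (Total Variability training and variational-Bayes extraction) is too long for a snippet, but its first stage, fitting a GMM-UBM to pooled MFCC frames by EM with k-means initialization, can be sketched with scikit-learn as below. The component count is deliberately small and all names are ours; the paper's baseline uses 1024 components.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(mfcc_frames, n_components=64, seed=0):
    """Fit a diagonal-covariance GMM-UBM to pooled MFCC frames.

    mfcc_frames : (n_frames x n_cepstra) array pooled over all background speakers.
    """
    ubm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          max_iter=200,
                          init_params="kmeans",
                          random_state=seed)
    ubm.fit(mfcc_frames)
    return ubm

# Frame posteriors (responsibilities) for one utterance -- the zeroth/first-order
# statistics derived from them feed the total-variability (i-vector) training.
# posteriors = ubm.predict_proba(utterance_mfcc)   # (n_frames x n_components)
```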
The final system is obtained by combining the final scores of the i-vector/MFCC and Pauses systems in the evaluation process. For this purpose, the parameterization based on pauses was added in parallel to the i-vector system. The general verification method is presented in Fig. 1. MFCC features extracted from an input signal are forwarded to the i-vector extractor. Cosine distance scoring,
Fig. 1 Verification process in the pauses and MFCC biometric speaker verification evaluation system
$$ \mathrm{CDS}\left(w_1,w_2\right)=\frac{w_1\cdot w_2}{\left\Vert w_1\right\Vert \left\Vert w_2\right\Vert }, $$
where w_1 and w_2 are i-vectors, was used to obtain a likelihood measure between the enrolled and the tested i-vector.
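Cosine distance scoring itself reduces to a few lines of NumPy; the example i-vectors below are random stand-ins of the baseline's 400-dimensional size:

```python
import numpy as np

def cds(w1, w2):
    """Cosine distance score between an enrolled and a test i-vector."""
    return float(np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2)))

# Toy example with 400-dimensional random vectors (unrelated vectors score near 0):
rng = np.random.default_rng(1)
w_enrolled, w_test = rng.standard_normal(400), rng.standard_normal(400)
print(f"CDS = {cds(w_enrolled, w_test):+.3f}")
```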
The final score of combined systems is computed with the Bosaris Toolkit [31].
A speaker verification system is a binary classifier, since it determines whether the analyzed signal is or is not produced by the user associated with a model stored in the system database. The output of the system is the information that the user is a target (the analyzed model is the user's model) or an impostor (the model was created in another user's enrollment process). Evaluation is therefore based on the analysis of target and impostor likelihood distributions. The more separated these distributions are, the better the system works, since it is easier to choose a threshold that divides them. In general, the expected value of the target distribution is greater than that of the impostor distribution. Based on the likelihood distributions, it is possible to calculate cumulative distribution functions for targets and impostors. These functions may be used to determine the false positive ratio (FPR) and false negative ratio (FNR), which give, respectively, the probability that an impostor is classified as a target and that a target is classified as an impostor at a particular likelihood threshold. The decision about a particular threshold, i.e., the choice of operating point, depends on the use case of the system. Increasing the threshold results in a lower FPR, but also means that more target verifications will fail. The value where FPR = FNR is called the equal error rate (EER) and is widely used as a single parameter describing the performance of a verification system. Other operating points of a system, where FNR is not equal to FPR, are commonly evaluated with detection error trade-off (DET) plots [39], which show miss probability (FNR) on the vertical axis and false alarm probability (FPR) on the horizontal one.
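A generic way to estimate the EER from the two score sets (a simple threshold sweep, not the Bosaris implementation) is sketched below:

```python
import numpy as np

def eer(target_scores, impostor_scores):
    """Equal error rate from target and impostor scores (higher = more target-like)."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    fnr = np.array([(target_scores < t).mean() for t in thresholds])     # missed targets
    fpr = np.array([(impostor_scores >= t).mean() for t in thresholds])  # false alarms
    i = np.argmin(np.abs(fnr - fpr))
    return 0.5 * (fnr[i] + fpr[i])

# Toy check: well-separated score distributions give a low EER (~0.02 here).
rng = np.random.default_rng(2)
print(eer(rng.normal(2.0, 1.0, 1000), rng.normal(-2.0, 1.0, 1000)))
```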
In the second experiment, we verified whether pause-related features are useful for automatic classification of the three types of spontaneous speech: types P, T, and R. Classification is performed with 3-component GMMs. We applied leave-one-out cross-validation, where each time one speaker's recording was used for testing and all the others formed the training set. The goal of the third experiment was to automatically distinguish two classes: read speech (50 audio books) and spontaneous speech (27 recordings of types P, T, and R). Several classifiers were tested: decision tree, logistic regression, support vector machine (SVM), random forest, and Extreme Gradient Boosting (XGBoost) [36], using the Scikit-learn toolkit [37]. Again, leave-one-out cross-validation was applied.
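The leave-one-out protocol of the third experiment can be sketched with scikit-learn as follows; the data are synthetic stand-ins, and sklearn's GradientBoostingClassifier is used here as a dependency-free stand-in for XGBoost:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# X: per-recording pause features (e.g. log-transformed f1..f7), y: class labels.
rng = np.random.default_rng(3)
X, y = rng.standard_normal((77, 7)), rng.integers(0, 2, 77)   # synthetic stand-in

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(n_estimators=200),
    "gradient boosting": GradientBoostingClassifier(),  # stand-in for XGBoost
}
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y,
                          cv=LeaveOneOut()).mean()
    print(f"{name:20s} LOO accuracy = {acc:.2f}")
```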
Overall analysis of pauses appearance in speech
The speech rate in spontaneous monologues is about 117 words per minute (with a standard deviation between speakers of about 24 words/min). The mean duration of a sentence (containing 18 words on average) was about 10 s, while the mean duration of a speech unit delimited by punctuation (5 words on average) was about 3.5 s. The results were similar for orations/presentations, real-time translations, and interviews (more results are presented in Table 2).
Table 2 Frequency of punctuation in transcripts: mean (standard deviation)
The most commonly used types of filled pauses are: prolonged "yyy" (50 %), short "yh" (41 %) and "mmm" (7 % of counts) (see Fig. 2). For the purpose of this research, we grouped together the "yyy" and "yh" categories and skipped other fillers, which are very rare (2 %).
Fig. 2 Different types of filled pauses and the frequency of their occurrence in the 1-h corpus [23]
As for acoustically registered breath pauses, the average for a speaker was about 11 breaths per minute. The number of filled pauses per minute of recording was often surprisingly high, especially for inexperienced speakers (even above 10 per minute). The mean frequencies of the different types of pauses are compared in Table 3.
Table 3 Frequency of silent, breath, and filled pauses in recordings: mean (standard deviation)
Analysis of correlation of pauses and punctuation marks
The information on the frequency of punctuation in spoken language was obtained by analyzing the number of full stops and commas in the transcriptions. Figure 3 shows the role of pauses in determining punctuation in speech. Among all full stops in the transcriptions, 39 % are correlated with occurrences of a breath pause, 27 % with a silent pause, and 20 % with a filled pause (Fig. 3a). Among all commas, 28 % are indicated by a silent pause, 20 % by a breath pause, and 6 % by a filled pause (Fig. 3b). The lack of any kind of pause (words bonded in pronunciation) was registered for 20 % of full stops and 46 % of commas in spontaneous speech, and for only 1.3 % of full stops and 42 % of commas in read speech (Fig. 4). Among all occurrences of filled pauses, 8 % indicate full stops and 6 % indicate commas; among breath pauses the proportions are, respectively, 10 and 11 % (Fig. 5).
Fig. 3 Different types of pauses determining (a) full stops and (b) commas, with respect to the types of filled pauses signalizing punctuation [23]
Fig. 4 Different types of pauses determining full stops and commas [23]
Fig. 5 Proportions of filled pause and breath pause occurrences correlated with full stops or commas [23]
However, the usage of different types of pauses to signal punctuation is strongly individualized between speakers, as presented in Table 4. To facilitate the observation of inter-speaker differences, the intensity of the connection between pauses and punctuation was graded in grayscale. Although the general tendency was to signal full stops with breaths and to leave commas without any kind of pause, the variation between speakers is considerable.
Table 4 Percent of pauses events denoting full stops and commas for each speaker (P presentations/orations, T translation, R radio interviews)
Differences between speakers in quality and quantity of pauses
Using the feature vectors p_s specific to each speaker s, we investigated the correlation of each pair of speakers (formula 3). The obtained correlation matrix is presented in Fig. 6. The distribution of the correlations of a given speaker with the others is illustrated in Fig. 7.
Fig. 6 (a) Correlation matrix for 30 speakers. (b) Cumulative distribution function and histogram of correlation coefficients for pairs of speakers
Fig. 7 Boxplot of the correlation coefficient distribution for each speaker
As presented in Figs. 6 and 7, the speaker vectors were usually correlated to a small extent or not correlated at all. Each speaker's distance from the mean vector was calculated according to formula (2). The distribution of the results is presented in Fig. 8.
Fig. 8 Histogram of the coefficients γ_s for all speakers
Having annotated a large number of breaths and filled "yyy" pauses for every speaker (the latter missing for only 2 out of 30 speakers in our corpus), we decided to analyze inter-speaker differences. We observed that the durations of breaths in our corpus have a mean of 392 ms, a standard deviation of 118 ms, and a median of 368 ms; the 0.25 and 0.75 quantiles are 312 ms and 455 ms, respectively. For the filled "yyy" pauses, the mean is 398 ms, the standard deviation is 183 ms, the median is 362 ms, and the 0.25 and 0.75 quantiles are 278 ms and 484 ms. Analysis of variance showed that both for breaths (p = 7.6E−50) and for filled "yyy" pauses (p = 7.62E−22) the mean duration differences between speakers are statistically significant.
Figure 9a shows the results of ANOVA of breath duration of different speakers, and Fig. 9b shows the same for filled "yyy" pause duration. Analysis of those plots leads to the conclusion that although the differences are not statistically significant for every speaker pair, it makes sense to group the speakers into 2 or 3 categories, as in: speakers taking short breaths, speakers taking breaths of average length, speakers taking long breaths, and the same for "yyy" fillers. If more recordings of a single speaker were available, similar analysis could be made for the frequency of pause occurrence.
Fig. 9 Results of ANOVA analysis for 30 speakers: (a) breath duration; (b) filler "yyy" duration
For the breath and filler duration parameters, Gaussian models were created (Fig. 10 presents an example for nine speakers). The preliminary analysis suggests that the features can be used for classification with Gaussian Mixture Models (GMM).
Fig. 10 Gaussian models fitted for nine speakers for (a) breath duration and (b) filler "yyy" duration
An interesting observation is that 8 (out of 30) of the speakers in our corpus do not use "mmm" filled pauses at all, while most of them do so rarely (on average about 1 per minute, compared to almost 7 filled "yyy" pauses per minute). The most frequent use is made by speaker T9_2, at about 3.5 "mmm" pauses per minute. This leads to the conclusion that frequent usage of the "mmm" filled pause is a characteristic feature of a speaker.
In the next experiment, the standardized values of each feature were quantized into three levels: low, medium, and high, with reference to the distribution of each feature. For the quantized feature matrix (Fig. 11a), clustering was performed using the Euclidean distance measure. The obtained dendrograms with a heatmap representation make it easy to observe in which features the most similar speakers were alike.
Fig. 11 (a) Feature matrix with quantized values. (b) Heatmap with dendrograms for speakers and features
Group analysis
To investigate the influence of experience and oratorical abilities on pauses and speech rate, we divided a corpus of spontaneous monologues into recordings of experienced speakers (professors and politicians) and inexperienced speakers (mainly students). Average values of selected temporal features of each group are compared in Table 5.
Table 5 Comparison of selected features for experienced and inexperienced speakers: average values and standard deviation (in brackets)
As expected intuitively, professionals speak more slowly, with fewer disfluencies, and formulate shorter sentences, which makes their speech better suited to efficient listening and understanding by recipients. Their breathing rhythm is also much more concordant with sentence boundaries (half of the full stops were correlated with breath pauses). Such conscious dynamic breathing (taking a breath before the beginning of a sentence or phrase) is one of the basic voice emission principles, often emphasized by authors of handbooks on speaking skills and techniques.
The comparison of presentations, translations, and radio dialogues can be observed in Fig. 12. Some of the differences are significant and can be interpreted in accordance with intuitive situational context conditions. In radio interviews, speakers tend to speak much faster, which is conditioned by the limited time allotted for the conversation. Surprisingly, their filled pauses are much longer than in the rest of the analyzed recordings.
Fig. 12 Differences in the distribution of each feature between the 3 groups of speakers (P presentations/orations, T translation, R radio transmissions)
Using pauses for speaker recognition—evaluation results
For the experiment, recordings from audiobooks were used (50 speakers, 15 min per speaker). The choice was made in order to obtain a regular set of sufficiently long recordings in the same situational context (reading a story) from a similar group of speakers (professional lectors). The entire set was used to train the UBM and the Total Variability matrix in the i-vector system.
The recordings were split into a training part of 5 min and four test parts of 2.5 min each. This allowed performing 1200 cross-validation tests: 200 verifications of authorized users (target trials) and 1000 simulations of impostor attacks (10 impostors with 2 random recordings were chosen randomly for each speaker). Parameters of silent pauses (f1–f21) were automatically extracted with the VAD algorithm, and the feature sets were processed as an independent stream (as presented in Fig. 1) according to the methodology described in section 3.4.
The results of the speaker verification task for the baseline i-vector system (using only MFCC features) are presented in Fig. 13, where the red and blue histograms are the normalized distributions of target and impostor scores, respectively. The vertical axis refers to FNR and FPR values. This result was obtained for a system based on a 1024-component GMM-UBM and a 400-dimensional total variability subspace. In this case, the EER reached its smallest value, 3 %. The result is considered sound given the number of impostor and target speakers used in the test. The performance of the system might be enhanced by a gender-dependent approach or by incorporating score processing such as PLDA or score normalization techniques, but the goal of this experiment was to measure the efficacy of a simple baseline speaker verification system on the dataset.
Fig. 13 Performance of the baseline system
The performance of the system based on pause features is illustrated in Fig. 14. For the evaluation, three features were used: the durations of the silent pauses, the number of silent pauses per minute (f1), and the ratio of the total pause duration to the entire signal length (f2). Due to the limited number of data points that could be extracted from the test samples, it was necessary to use a minimal number of components in the GMM to prevent overfitting. The best results were obtained with a 4-component GMM. Such a configuration resulted in an EER equal to 40 %. The overlapping distributions in Fig. 14 suggest that the used features are weakly discriminative in speaker comparison. This meager result implies that these features, without any further processing, should not be used as a standalone input to a GMM classifier in a speaker discrimination task.
Fig. 14 Performance of the system based on pause-related features only
Figure 15 presents DET plots for the Pauses and i-vector systems and for the fusion of the two systems performed with the Bosaris Toolkit, where 20 % of the resulting scores were used to train the fusion algorithm. The value of 20 % was chosen empirically as an optimal point for the DET plot. Modifying this value by enlarging the training dataset did not change the position of the curve but reduced its resolution due to the lower number of test points. The fusion of the scores of the two systems caused no gain in overall performance and revealed a reduction of efficacy by 1 % in terms of EER.
Fig. 15 Detection error trade-off plot for the pause-based and MFCC-based systems and their fusion
Automatic recognition of type of spontaneous speech
For this task, the features were a time series of silent pause information extracted in an online manner, where each point indicates the appearance of a silent pause. Three features were used in the experiment: the duration of the silent pause instance, s_p per minute (f1), and the percentage of s_p time in the recording (f2), where f1 and f2 were calculated online, based on the silent pause instances gathered up to that point. To perform classification, we adopted a diagonal-covariance GMM classifier with three mixtures (one for each class of spontaneous speech). The best result was achieved with three Gaussian components in each mixture. The experiment was carried out using a leave-one-out cross-validation scheme.
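A minimal sketch of the per-class GMM decision rule used here (one diagonal-covariance GMM per speech type, maximum log-likelihood wins) could look as follows; the function names and data layout are our assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def classify_gmm(train_sets, test_points, n_components=3, seed=0):
    """Fit one diagonal-covariance GMM per class and label test points
    by the class whose GMM gives the highest log-likelihood.

    train_sets  : dict class_label -> (n_samples x n_features) array
    test_points : (m x n_features) array of per-pause feature points
    """
    models = {c: GaussianMixture(n_components, covariance_type="diag",
                                 random_state=seed).fit(Xc)
              for c, Xc in train_sets.items()}
    labels = list(models)
    ll = np.column_stack([models[c].score_samples(test_points) for c in labels])
    return [labels[i] for i in ll.argmax(axis=1)]

# In the experiment, a whole held-out recording would be scored by summing the
# per-point log-likelihoods for each class ("P", "T", "R") and taking the maximum.
```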
The classifier achieved 78 % accuracy. Table 6 shows the precision, recall, and F1 score achieved by the classifier in this task. As illustrated by the confusion matrix in Fig. 16, the worst-performing class is the T type (translations), as it tends to be mistaken for the P type (presentation/oration). We suspect this is because speakers make long silent pauses in both scenarios: for rhetorical effect in the case of P, and because of the necessity to wait for more context before translating an utterance in the case of T.
Table 6 Results of automatic recognition of types of spontaneous speech
Fig. 16 Confusion matrix for automatic recognition of the type of spontaneous speech
Automatic classification of read and spontaneous speech
In this task, we confronted our spontaneous speech recordings (P, T, and R classes) with 50 audiobook recordings (A class). For the distinction between read and spontaneous speech, seven features (f1–f7) describing silent pauses were used. This choice was due to the presence of this type of pause in every recording (in contrast to filled pauses, which are absent in read speech, and breath pauses, which are present only in good-quality recordings) and the ease of detecting them automatically. The silent pauses were found automatically using VAD and then, for each recording, a vector of seven features was calculated. Logarithmic transformation and normalization of the parameters improved the results.
The best accuracy in this task was obtained using the XGBoost classifier (Table 7). However, it should be noted that the dataset is imbalanced in terms of classes, and this classifier exhibits a bias toward the read speech class, which manifests as higher read-speech recall and lower spontaneous-speech recall compared with less complex classifiers such as the decision tree. Nonetheless, all classifiers perform better than classification by chance (50 % accuracy) or by always indicating the class with the higher count (65 % accuracy).
Table 7 Results of automatic recognition of read and spontaneous speech - comparison of classifiers
We observed that read speech was better recognizable than spontaneous speech (see Fig. 17), which we believe is partially a result of classification bias, but also a result of the higher diversity of spontaneous speech class examples.
Fig. 17 Confusion matrix for automatic recognition of read/spontaneous speech
The majority of speaker recognition systems do not include suprasegmentals. High-level features of the speech signal, such as pauses, although statistically proved to be speaker-specific, also depend on other factors, such as situational context, stress level, and the kind of linguistic task. This weakens their possible usage for speaker recognition. It should also be remembered that obtaining data on pauses requires a much longer segment of continuous speech than, e.g., standard MFCC analysis, which is not desirable in text-dependent systems that operate on short utterances (e.g., one sentence). The relatively long observation period (at least one minute) needed to acquire information on pausing style constrains the approach to certain kinds of biometric systems (text-independent systems or forensic applications).
The results obtained from testing a system based on pauses only (40 % EER) are similar to the EER achieved in the Peskin et al. test (36.1 to 43.3 % EER) using different pause-related features alone [16]. However, more types of models and classifiers should be tested in future work. Pauses would probably be better incorporated into a model using an HMM chain, as applied in [25], or as a part of n-gram models [26].
These data are not enough in themselves to perform biometric verification or identification of a speaker; however, they can be used to enhance speech technology applications by including additional information in the speaker's profile, such as: the speaker breathes frequently, takes short breaths, makes filled pauses infrequently, etc.
In the second experiment, it was shown that the distribution and structure of pauses in speech, represented by three parameters, are specific to the type of speech and sufficient to classify it automatically with 78 % accuracy. We showed that parameters such as the number of occurrences of each pause type per minute or the statistics of pause duration carry important information about a speaker's habits. The advantages of the approach are its simplicity, low computational complexity, and robust feature extraction. Breath events [2] and filled pauses [23] can be detected automatically in a speech signal. This allows the features to be easily included in speech technology systems.
The obtained knowledge about the meaning of pauses can be merged with the analysis of other temporal features (phoneme length, energy, fundamental frequency) in order to build algorithms for punctuation detection in speech. Since the lack of punctuation and the occurrence of disfluencies in spontaneous speech transcripts are factors that disturb their processing by natural language processing systems, parsers, or information extraction systems, automatic analysis of pauses can help make spontaneous speech transcripts more readable for both humans and NLP systems.
Finally, in the biomedical field, research on pauses is meaningful for affect detection. All analyzed kinds of pauses carry information on the speaker's current emotional state. The frequency and regularity of pausing behavior, based on the obtained models, is currently being tested in the task of automatic emotion recognition. This can lead directly to its inclusion in systems for monitoring mental illnesses, since the quantity and duration of silent pauses can be indicators of the emotional state of the speaker or a measurable symptom of psychiatric disorders such as schizophrenia or bipolar affective disorder. Measuring breath frequency in the acoustic signal can be a cheap and easily available method for estimating physical effort level, measuring physical fitness, or diagnosing potential respiratory dysfunctions (e.g., sleep apnea).
In this paper, we deliver a numerical description of pauses in Polish speech. Three types of acoustic pauses (silence, breaths, and fillers), two types of punctuation marks (full stops and commas), and co-occurrences of acoustic and syntactic pauses were shown to be speaker dependent. Pausing behavior was investigated in several contexts (spontaneous speech during presentations, simultaneous interpretation, interviews, and read speech, i.e., reading a novel).
The relations between pauses and punctuation, as well as the frequency and types of pauses, vary between individuals and depend on the speaking style of each person, speech quality, culture, experience, and preparation for oral presentations. Thereby, the temporal features can possibly be used as a valuable source of paralinguistic information. However, even though our results were better than those of similar previous studies, the differences were not sufficient to differentiate speakers. Verification of the hypothesis that they improve a speaker recognition system was negative for the scenario of modeling pauses with UBM and GMM models. Other modeling methods will be evaluated in future work.
The attempt to automatically recognize three types of spontaneous speech resulted in 78 % accuracy, and distinguishing read from spontaneous speech reached 75 % accuracy, using pause-related features only. This result shows the usefulness of pauses for distinguishing between different situational contexts and cognitive tasks, and therefore it could find application in automatic discourse analysis and conversation modeling. The presented statistical models of pauses will be a foundation for studying the usefulness of this information in different applications, such as ASR or emotion recognition systems. Further research will also cover other sources of variability in pause frequency and duration (the personality type of the speaker and emotional arousal). The feature vector dimensionality will be reduced. Analyses will also be conducted on more regular sets of recordings, e.g., the same speaker in different situational contexts.
F Batista, H Moniz, I Trancoso, N Mamede, A Mata, Extending automatic transcripts in a unified data representation towards a prosodic-based metadata annotation and evaluation. Journal of Speech Sciences 2(2), 115–138 (2012)
M Igras, B Ziółko: Wavelet method for breath detection in audio signals. In: IEEE International Conference on Multimedia and Expo (ICME 2013), San Jose (2013). doi:10.1109/ICME.2013.6607428
T Kendall, Speech rate, pause and linguistic variation: an examination through the sociolinguistic archive and analysis project. Doctoral dissertation (Duke University, Durham, 2009)
E Campione, J Véronis.(2002). A large-scale multilingual study of silent pause duration. In: Proceedings of the Speech Prosody Conference, 199–202
M Demol, W Verhelst, P Verhoeve. (2006). A study of speech pauses for multilingual time-scaling applications. In: Proc. ISCA-ITRW Multiling, (Stellenbosch, South Africa).
I Homma, Y Masaoka, Breathing rhythms and emotions. Experimental physiology 93(9), 1011–1021 (2008)
American Thoracic Society and American College of Chest Physicians, ATS/ACCP Statement on cardiopulmonary exercise testing. American Journal of Respiratory and Critical Care Medicine 167(2), 211–277 (2003)
V Rapcan, S D'Arcy, S Yeap, N Afzal, J Thakore, RB Reilly, Acoustic and temporal analysis of speech: a potential biomarker for schizophrenia. Medical Engineering & Physics 32, 1074–1079 (2010)
D Baron, E Shriberg, A Stolcke. (2002). Automatic punctuation and disfluency detection in multi-party meetings using prosodic and lexical cues. In: Proceedings of the International Conference on Spoken Language Processing, 949–952
E Shriberg, A Stolcke, D Hakkani- Tür, G Tür, Prosody-based automatic segmentation of speech into sentences and topics. Journal Speech Communication - Special issue on accessing information in spoken audio archive 32(1–2), 127–154 (2000)
WA Lea, Trends in speech recognition (Academic Press, New York, 1980)
V Ramanarayanan, E Bresch, D Byrd, L Goldstein, SS Narayanan, Analysis of pausing behavior in spontaneous speech using real-time magnetic resonance imaging of articulation. The Journal of the Acoustical Society of America 126, 160–165 (2009)
T Kinnunen, H Li, An overview of text-independent speaker recognition: from features to supervectors. Speech communication 52(1), 12–40 (2010)
B Ziółko, W Kozłowski, M Ziółko, R Samborski, D Sierra, J Gałka, Hybrid wavelet-Fourier-HMM speaker recognition. International Journal of Hybrid Information Technology 4(4), 25–41 (2011)
E Shriberg, Higher-level features in speaker recognition. Speaker Classification I. Lecture Notes in Computer Science / Artificial Intelligence (Springer, Berlin/Heidelberg, 2007), pp. 241–259
B Peskin, J Navratil, J Abramson, D Klusacek, DA Reynolds, X Bing: Using prosodic and conversational features for high-performance speaker recognition: report from JHU WS'02. IEEE International Conference on Acoustics, Speech, and Signal Processing (2003). doi: 10.1109/ICASSP.2003.1202762
K Sönmez, E Shriberg, L Heck, M Weintraub. (1998). Modeling dynamic prosodic variation for speaker verification. In: Proc. ICSLP, 3189–3192
G Adami, Modeling prosodic differences for speaker recognition. Speech Communication 49(4), 277–291 (2007)
M Backes, G Doychev, M Dürmuth, B Köpf. (2010). Speaker recognition in encrypted voice streams. In: Proceedings of the 15th European Conference on Research in Computer Security, 508–523
J Lööf, C Gollan, H Ney. (2009). Cross-language bootstrapping for unsupervised acoustic model training: rapid development of a Polish speech recognition system. In: Proceedings of Interspeech, Brighton, 88–91
DA Reynolds, TF Quatieri, RB Dunn, Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10(1–3), 19–41 (2000)
J Pelecanos, S Sridharan: Feature warping for robust speaker verification. In: Proc. Speaker Odyssey: the Speaker Recognition Workshop (Odyssey 2001), Crete, Greece, 213–218 (2001)
M Igras, B Ziółko, Different types of pauses as a source of information for biometry. Models and analysis of vocal emissions for biomedical applications: 8th international workshop (Firenze University Press, Firenze, 2013), pp. 197–200
K Barczewska, M Igras, Detection of disfluencies in speech signal. Challenges of modern technology 32(1–2), 3–10 (2013)
F Beritelli, A Spadaccini. (2012). Performance evaluation of automatic speaker recognition techniques for forensic applications. New Trends and Developments in Biometrics, 129–148
E Shriberg, L Ferrer, S Kajarekar, A Venkataraman, A Stolcke, Modeling prosodic feature sequences for speaker recognition. Speech Communication 46(3), 455–472 (2005)
B Zellner, Pauses and the temporal structure of speech, in Fundamentals of speech synthesis and speech recognition, ed. by E Keller (Wiley, Chichester, 1994), pp. 41–62
E Shriberg: Spontaneous speech: How people really talk and why engineers should care. Proceedings of European Conference on Speech Communication and Technology, Eurospeech, 1781–1784 (2005)
B Ziółko, T Jadczyk, D Skurzok, P Żelasko, J Gałka, T Pędzimąż, I Gawlik, S Pałka. (2015). SARMATA 2.0 automatic Polish language speech recognition system. In: Proceedings of Interspeech, Dresden
P Kenny. (2012). A small footprint i-vector extractor. Odyssey 2012: 1–6
Bosaris Toolkit. https://sites.google.com/site/bosaristoolkit/ Accessed: 30 May 2016
R Dufour, Y Estève, P Deléglise, Characterizing and detecting spontaneous speech: application to speaker role recognition. Speech Communication 56, 1–18 (2014)
A Tóth, Speech disfluencies in simultaneous interpreting: a mirror on cognitive processes. SKASE Journal of Translation and Interpretation 5(2), 23–31 (2011)
B Tissi, Silent pauses and disfluencies in simultaneous interpretation: a descriptive analysis. The Interpreters' Newsletter 10, 103–127 (2000)
L Ten Bosch, N. Oostdijk, J P De Ruiter. (2004). Turn-taking in social talk dialogues: temporal, formal and functional aspects. In 9th International Conference Speech and Computer (SPECOM'2004). 454–461
J H Friedman, Greedy function approximation: a gradient boosting machine. Annals of statistics, 29(5), 1189–1232 (2001)
Pedregosa et al. (2011). Scikit-learn: Machine Learning in Python, JMLR 12, pp. 2825–2830
P Żelasko, B Ziółko, T Jadczyk, D Skurzok, "AGH Corpus of Polish Speech". Language Resources and Evaluation 50, 585–601 (2016)
A Martin, G Doddington, T Kamm, M Ordowski, M Przybocki, "The DET curve in assessment of detection task performance", in Proceedings of the 5th European Conference on Speech Communication and Technology (Greece, EUROSPEECH, Rhodes, 1997). pp. 1895–1898
The project was supported by The Polish National Centre for Research and Development granted by decision 072/R/ID1/2013/03 and the National Science Centre support allocated on the basis of a decision DEC-2011/03/D/ST6/00914. This work was supported by the AGH University of Science and Technology under Deans Grant 15.11.230.252 and Statutory works 11.11.230.017.
Department of Computer Science, Electronics and Telecommunications, AGH University of Science and Technology, Al. Adama Mickiewicza 30, 30-059, Kraków, Poland
Magdalena Igras-Cybulska, Bartosz Ziółko, Piotr Żelasko & Marcin Witkowski
Techmo, Kraków, Poland
Bartosz Ziółko, Piotr Żelasko & Marcin Witkowski
Magdalena Igras-Cybulska
Bartosz Ziółko
Piotr Żelasko
Marcin Witkowski
Correspondence to Magdalena Igras-Cybulska.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Igras-Cybulska, M., Ziółko, B., Żelasko, P. et al. Structure of pauses in speech in the context of speaker verification and classification of speech type. J AUDIO SPEECH MUSIC PROC. 2016, 18 (2016). https://doi.org/10.1186/s13636-016-0096-7
Biometry
Spontaneous speech
Speech classification | CommonCrawl |
Piezomagnetic switching and complex phase equilibria in uranium dioxide
Daniel J. Antonio1,
Joel T. Weiss2,
Katherine S. Shanks2,
Jacob P. C. Ruff3,
Marcelo Jaime ORCID: orcid.org/0000-0001-5360-52204 nAff9,
Andres Saul ORCID: orcid.org/0000-0003-0540-703X5,
Thomas Swinburne ORCID: orcid.org/0000-0002-3255-42575,
Myron Salamon4,
Keshav Shrestha1 nAff10,
Barbara Lavina6,
Daniel Koury6,
Sol M. Gruner ORCID: orcid.org/0000-0002-1171-44262,3,
David A. Andersson7,
Christopher R. Stanek7,
Tomasz Durakiewicz1,
James L. Smith7,
Zahirul Islam8 &
Krzysztof Gofryk ORCID: orcid.org/0000-0002-8681-68571
Communications Materials volume 2, Article number: 17 (2021)
Characterization and analytical techniques
Magnetic properties and materials
Phase transitions and critical phenomena
Actinide materials exhibit strong spin–lattice coupling and electronic correlations, and are predicted to host new emerging ground states. One example is piezomagnetism and magneto-elastic memory effect in the antiferromagnetic Mott-Hubbard insulator uranium dioxide, though its microscopic nature is under debate. Here, we report X-ray diffraction studies of oriented uranium dioxide crystals under strong pulsed magnetic fields. In the antiferromagnetic state a [888] Bragg diffraction peak follows the bulk magnetostriction that expands under magnetic fields. Upon reversal of the field the expansion turns to contraction, before the [888] peak follows the switching effect and piezomagnetic 'butterfly' behaviour, characteristic of two structures connected by time reversal symmetry. An unexpected splitting of the [888] peak is observed, indicating the simultaneous presence of time-reversed domains of the 3-k structure and a complex magnetic-field-induced evolution of the microstructure. These findings open the door for a microscopic understanding of the piezomagnetism and magnetic coupling across strong magneto-elastic interactions.
Strong coupling between magnetism and lattice vibrations can lead to many emerging phenomena, as have been shown in unconventional superconductors, heavy-fermions, multiferroics, and other new functional materials1,2,3,4,5. Due to strong spin–orbit coupling, correlated 5f-electron spin systems represent a perfect platform to scrutinize the phenomena related to spin–phonon interactions, especially when interacting with other degrees of freedom such as multipolar ordering or Jahn–Teller interactions6,7. An excellent example is uranium dioxide (UO2). This antiferromagnetic Mott–Hubbard insulator8 is the main nuclear fuel and the most studied actinide material to date9. Its correlated ground state is characterized by a competition among non-collinear magnetic dipoles, electric quadrupoles, and dynamic Jahn–Teller distortions10,11,12. Recently, it has been shown that, due to the magnetic symmetry of the non-collinear 3-k antiferromagnetic order (shown in the inset of Fig. 1a) and strong magneto-elastic coupling, UO2 undergoes a trigonal distortion under magnetic field and becomes a piezomagnet with exceptionally large coercive characteristics13. In piezomagnetic crystals, a magnetic moment can be induced by the application of physical stress14,15. This phenomenon has captured attention in recent years as a mechanism that could be used, in combination with multiferroics and piezoelectrics (especially at the nanoscale), to achieve control of magnetism by electric fields16. Piezomagnetism is also utilized in geology where the so-called volcano-magnetic effect is used for monitoring volcanic activities17,18,19. Despite intensive work, the microscopic nature and crystallographic evidence of piezomagnetism are still elusive20,21.
Fig. 1: Thermal expansion and magnetostriction in the paramagnetic state of UO2.
a The relative change of the lattice constant in the <111> direction of the UO2 single crystal as a function of temperature, measured by dilatometry of the bulk crystal (dashed line) and X-ray diffraction of the [888] peak (red squares). The X-ray data were obtained by single crystal diffraction in high-angle back-reflection geometry using 15.85 keV X-rays. An abrupt collapse in unit cell volume can be seen at TN = 30.8 K, marked by an arrow. Insets show the structure of the UO2 cubic unit cell above TN (paramagnetic phase) and below TN (ordered phase). b The magnetic response of the [888] peak in the paramagnetic phase (T = 40 K) to an applied pulsed magnetic field in the <111> direction. The initial state at 0 T (black line) shifts to higher 2θ at maximum applied field of 21.2 T (red line), then back to the same initial value after the pulse (blue line). The shift in 2θ of the peak corresponds to a small ~50 p.p.m. strain contraction in the <111> direction, consistent with bulk single crystal magnetostriction measurements2.
Here we show a direct micro-structural probe of piezomagnetism using single-crystal X-ray diffraction. By using a high-resolution back-reflection geometry setup (see Supplementary Fig. 4 for details), we were able to resolve small changes in the [888] Bragg peak of UO2 when the magnetic field is applied along the [111] direction, which corresponds to the longitudinal lattice response along the parallel cube diagonal of its crystal lattice. In the paramagnetic state (above 30.5 K), the [888] Bragg peak moves toward larger 2Θ values (see "Methods") with the applied magnetic field, exhibiting negative magnetostriction. When the UO2 crystal is cooled below the magnetic transition temperature (below 30.5 K), the application of a magnetic field causes positive magnetostriction in agreement with the measurements of the macroscopic variation of the sample length using a fiber Bragg grating (FBG) technique13. When the magnetic field direction is reversed, the sample initially compresses until a critical field is reached and then rapidly expands. The overall behavior resembles the magneto-elastic "butterfly" previously seen with the FBG magnetostriction measurements13 and represents piezomagnetic switching between two magnetic structures connected by the time-reversal symmetry. X-rays offer additional insight not available via bulk FBG methods—specifically, we observe a splitting of the [888] peak under the magnetic field. This arises from time-reversed (TR) domains of 3-k magnetic structures that have different responses to the applied magnetic fields. To the best of our knowledge, this study represents the first crystallographic observation of piezomagnetism and the switching effect in general, and in a 5f-electron spin system in particular.
Results and discussion
The thermal expansion and magnetostriction in the paramagnetic state of UO2
The presence of a sudden volume collapse in the unit cell of UO2 at low temperatures has been known for some time22,23. The small discontinuity corresponds to the rapid and simultaneous magnetic, electrical, and structural transition at TN = 30.5 K. Comparing the relative change in the d-spacing of the [888] peak with temperature, measured using back-reflection-geometry X-ray diffraction, to previously reported dilatometry on single-crystal UO2 in the <111> direction13 (Fig. 1a), one can see that the diffraction reproduces the expected behavior. The conversion from the Bragg peak angle to micro-strain (or p.p.m.) is made using ΔL/L = sin θ0/sin θ′ − 1, where θ′ is the Bragg angle of the [888] peak at a given temperature or applied magnetic field and θ0 is its zero-field value. The precision of both techniques is comparable, showing approximately a −30 p.p.m. strain change in the length along the <111> direction at 30.5 K. This shows that the volume collapse at TN can be observed as the material goes from the paramagnetic state to the ordered 3-k antiferromagnetic state. When subjected to a pulsed magnetic field in the paramagnetic state (T = 40 K), the diffraction shows a small contraction of the unit cell along the <111> direction of about 50 p.p.m. at the maximum field of 21.2 T, as seen in Fig. 1b, which also matches the FBG magnetostriction measurements13. The structure then reversibly changes back after returning to zero field.
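For illustration, the angle-to-strain conversion can be evaluated with a few lines of Python; the angles below are illustrative values chosen by us, not the measured ones:

```python
import numpy as np

def strain_ppm(two_theta_0_deg, two_theta_deg):
    """Strain (in p.p.m.) along <111> from the [888] Bragg angle shift:
    dL/L = sin(theta_0) / sin(theta') - 1."""
    th0, th = np.radians(two_theta_0_deg) / 2, np.radians(two_theta_deg) / 2
    return 1e6 * (np.sin(th0) / np.sin(th) - 1.0)

# Illustrative numbers only: a back-reflection peak shifting from
# 2*theta = 164.300 deg to 164.305 deg gives roughly -6 p.p.m. (a small contraction).
print(f"{strain_ppm(164.300, 164.305):+.1f} ppm")
```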
The magnetostriction in the antiferromagnetic state of UO2
When repeating the measurements in the magnetically ordered state, a different and unexpected response occurs. Represented schematically in Fig. 2a (not to scale), FBG magnetostriction measurements in a pulsed magnetic field below TN revealed that the bulk crystal expanded in the <111> direction, the reverse of the non-magnetic state. While measuring diffraction in the back-reflection geometry, the sample was cooled to 15 K, then subjected to a pulsed field in the <111> direction. Under the application of the magnetic field, the [888] peak splits and re-converges as the applied field rises and falls, reversibly returning to its original state after the pulse (see Supplementary Fig. 7 for more details). This surprising result contrasts with the single peak seen in the non-magnetic state. Moreover, the split peaks shift in opposite directions, i.e. higher and lower scattering angles (2Θ), indicating simultaneous contraction and expansion of the same crystal lattice. This can only occur if X-ray diffraction originates in two types of physically distinct, three-dimensionally ordered regions of the sample with a robust local symmetry producing fully coherent Bragg peaks. Furthermore, one of the domains (red peak) is much more sensitive to the applied magnetic field, i.e. reaching 700 p.p.m. strain in 20 T, than all the other peaks measured. Remarkably, the two regions respond with opposite sign just as one would expect from two magnetic domains related by time reversal in a magnetic material. This is a very unusual observation, hard to explain in the context of the fcc crystal structure of UO2 yet a natural consequence in a piezomagnetic system. We, hence, believe that the splitting arises from TR domains of the 3-k magnetic structure that have opposite responses to the applied magnetic fields. The integrated intensities of the peaks along 2θ shown in Fig. 2b reveal that there is a larger volume fraction of the sample that corresponds to expansion (blue peak), and a smaller component that corresponds to contraction (red peak). The peak positions of these two separate components plotted in Fig. 2c show that, though the contracting component has a lower intensity, the absolute value of its corresponding strain is larger. The peak positions in the rising and falling field overlap symmetrically as well, showing no hysteresis. Repeatedly applying the pulsed field in the same field direction produces the same result. As can be seen from the figure, the magnetostriction obtained here, governed by the blue peak, agrees well with the FBG pulsed field magnetostriction measurements (black line)13.
Fig. 2: The magnetostriction in the antiferromagnetic state of UO2—one field direction.
a A schematic diagram showing the bulk sample magnetostriction along the <111> direction to a magnetic field applied repeatedly to it in the same direction (bold blue line), showing the sample reversibly expanding. A pale blue pattern shows field dependence of the magnetostriction for positive and negative fields directions. b A waterfall plot of fits to the integrated intensity along 2θ of selected fields and at 15 K. The peak at lower 2θ (blue) corresponding to expansion is much larger than the peak at higher 2θ (red). c The peak positions of the two peaks at 15 K, with filled symbols corresponding to rising field and open symbols to falling field. Red circles represent the smaller peak and blue squares the larger one. The blue peak and its positive magnetostriction can be seen to match previous magnetostriction measurements using the fiber Bragg grating technique (black line), with about a 150 p.p.m. expansion at the maximum 21.1 T applied field. The red dash line is guide to the eye.
When the sample is subsequently exposed to a pulsed magnetic field applied in the direction opposite that of the previously applied field, startlingly different behavior is seen. Represented schematically in Fig. 3a, the FBG magnetostriction measurements show that in this reversed field direction in the magnetic state, the response along the <111> direction is a linear contraction, displaying broken time-reversal symmetry, a characteristic of the piezomagnetic effect13. After reaching a certain temperature-dependent critical applied field strength24, the sample rapidly returns to the previous expansion response seen in the single direction applied field used in the first part of the experiment, then follows that path back to the same initial state at zero field. Looking at the integrated intensities of the diffracted peaks in Fig. 3b, the behavior of the peaks is very different from the single field direction. As seen there, the overall response is compressive at approximately −10 T (see point b in Fig. 3). But then, at increasingly negative field values, the data return abruptly to behavior similar to that seen in Fig. 2b, albeit with different relative intensities. The key point is that upon returning from −20 T to zero, the overall compressive strain is not recovered near −10 T (point f in Fig. 3). Immediately switching the field direction again shows this same new behavior, resulting in a butterfly-like loop13. The peak is clearly seen to split again (first as a broadening of the peak and then as a separate peak), this time in a positive field following immediately after negative pulses at 25 K (a positive magnetic field part is shown in the Supplementary Fig. 9). The FBG experiments have shown that critical field of about 11 T is expected at this temperature, as marked by the solid line in Fig. 3b (see Supplementary Fig. 11 for more details and ref. 24). This behavior provides direct evidence that the "butterfly" hysteresis observed in the FBG experiment can therefore be related to piezomagnetic domain evolution revealed by X-rays. Figure 3c shows the relative positions of the peaks (blue squares and red circles). The lines are visual guides. The higher angle and intensity "blue peak" (point b in Fig. 3) undergoes a switching from contraction to expansion at the critical field of ~−11 T. This demonstrates that the sign of the strain in the dominating peak is rapidly reversed in this region so that the blue peak matches the butterfly pattern observed in the magnetostriction taken at the same temperature. If so, then only magnetic domains corresponding to the blue peaks have sufficiently low critical fields to exhibit the rapid reversal of strain behavior expected from the piezomagnetic effect. It has been proposed that, in UO2 crystals subject to pulsed magnetic fields of reversed polarity, the magnetic subsystem switches between two states connected by time reversal13. In Fig. 3c, this switching phenomenon between the two states is directly observed. It has to be noted that while the coloring of the peaks is not unique, the key point is that upon an initial field reversal only a compressive strain is observed while on the subsequent return to zero field, two peaks are seen. This behavior is the same whether the sample was first trained in positive or negative fields. The two types of behavior apparently cross on initial field reversal, and therefore, that the most sensible labeling of the peaks is as indicated in the text above (see Supplementary Fig. 12 for more details).
Fig. 3: The magnetostriction in the antiferromagnetic state of UO2—alternate field directions.
a A schematic diagram showing the bulk sample magnetostriction along the <111> direction in a magnetic field applied immediately in the direction opposite to that of the previously applied field (bold blue line). As opposed to Fig. 2a, the sample shows irreversible behavior by instead initially contracting in the <111> direction, but then rapidly switching back to the previous expanding behavior once a threshold field has been surpassed. b The integrated intensity in 2θ for selected fields and at temperature, T = 25 K. As opposed to 2c, the higher 2θ peak is initially larger, but then a change in relative intensities and peak position can be seen. c A plot of the relative strain calculated from the peak positions in the reversed field state at 25 K. The peak corresponding to the dominant strain behavior as determined by the bulk magnetostriction is the blue squares and the secondary peak corresponding to negative strain is the red circles. Note that the curves for positive fields are similar to the data in Fig. 3b. Open symbols are used for rising fields and closed for falling fields. The blue solid line is guide to the eye. The solid black line is the reversed field measurements using the FBG technique taken in positive and negative fields at 25 K.
In general, the piezomagnetic response, related to UO2's fcc, \(Pa\bar 3\) structure, can be explained using a model Hamiltonian that includes a strong magnetic anisotropy, elastic, Zeeman, Heisenberg exchange, and magnetoelastic contributions to the total energy. This simple model, where the degrees of freedom are the orientation of the magnetic moments of the four U atoms in the \(Pa\bar 3\) unit cell and the shear components of the strain tensor, successfully reproduces the intriguing experimental observations (see Supplementary materials in ref. 13 for more details regarding the model used). Minimizing the total energy with respect to the elastic shear components allows obtaining their dependence on the applied magnetic field:
$$\varepsilon_{xy} = \frac{E}{c_{44}a^{3}}\,M_{\mathrm{st}}\,H_z$$
with similar expressions for the other components. Here \(E = 0.280\,\mathrm{meV\,T^{-1}}\) is the strength of the magnetoelastic interaction13, \(c_{44} = 60\,\mathrm{GPa}\) and a = 5.47 Å are the experimental shear elastic constant and lattice constant, respectively, and \(M_{\mathrm{st}}\) is the staggered magnetization, which is nonzero below the Néel temperature. The change of sign of the staggered magnetization, which can be positive or negative depending on which of the two AFM structures connected by time-reversal symmetry is stabilized, explains the observed "butterfly" behavior of the magnetostriction (Fig. 3c).
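As a rough consistency check, the expression above can be evaluated numerically with the constants just quoted. The short Python sketch below is not part of the original analysis; in particular, the value used for the staggered magnetization (taken here as roughly the ordered moment of UO2, ~1.74 Bohr magnetons per uranium ion) and the unit handling are assumptions made only for this order-of-magnitude estimate.

# Order-of-magnitude evaluation of eps_xy = E/(c44*a^3) * M_st * H_z
E_me = 0.280                # magnetoelastic coupling E, in meV/T
c44_GPa = 60.0              # shear elastic constant c44, in GPa
a_A = 5.47                  # lattice constant a, in Angstrom
M_st = 1.74                 # assumed staggered magnetization, in Bohr magnetons per U ion
H = 20.0                    # applied field, in T

GPa_A3_in_meV = 1e9 * 1e-30 / 1.602177e-22     # 1 GPa*Angstrom^3 expressed in meV (about 6.24)
elastic_energy_meV = c44_GPa * a_A**3 * GPa_A3_in_meV

eps_xy = E_me * M_st * H / elastic_energy_meV
print(f"eps_xy at {H} T is about {eps_xy:.1e}")   # about 1.6e-4

Under these assumptions the shear strain at 20 T comes out at the level of 10^-4.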
The phenomenology of peak splitting under an applied magnetic field in UO2 is contrary to the more common case of magnetic detwinning. More typically, the application of magnetic fields to materials containing symmetry-related twin domains will result in a free energy difference between the domains, favoring one over the other and driving the material into a single-domain state. Conversely, in the case of UO2, at least in the low-field regime, it is clear that the applied magnetic field drives opposing magnetostrictive responses in different TR domains without converting one domain entirely into the other. As seen in Fig. 1, only a single peak, shifted by magnetostriction, appears above the Néel temperature. Also, a single peak is observed above TN under high pressure (see Supplementary Fig. 2 for more details) and below TN in the absence of magnetic fields. Because the two-peak structure seen in Figs. 2 and 3 is only observed in the magnetic-field-polarized antiferromagnetic state, it is likely that the two magnetic domains consist of distinct TR versions of the 3-k structure. The piezomagnetism of UO2 (ref. 13) occurs as the 3-k structure of the ordered phase switches between TR versions under the action of applied magnetic fields. The red circles represent the negative magnetostriction of the −TR state in positive fields. That this state has not fully converted to the +TR state by a field of +20 T indicates that either this TR state is pinned, or that the switching field in this domain is larger than 20 T. The robustness of minority domains to field conversion may be closely related to the exceptionally hard piezomagnetic response of UO2. Interestingly, the presence of two 180° antiferromagnetic domains has been observed before in another piezomagnet, MnF2, via polarized neutron topography25,26. Furthermore, the domain configuration was determined to be sensitive to the strain condition of the specimen. It was suggested that, for a given local stress, the domain configuration (type A and/or B) depends on the direction of the applied magnetic field, with the domain type being reversed when the magnetic field points in the opposite direction25. A single-domain state is possible if the stress is uniform over the entire crystal. This condition is difficult to achieve, however, since an increase in the number of domains to accommodate small-scale variations of stress is prevented by the large wall anisotropy energy. In our experimental configurations, the strain effects in UO2 might play an even bigger role. A potential domain pinning and strain effect imposed by the Stycast epoxy (see Fig. 3 in the Supplementary Information) could increase the piezomagnetic switching field so that the preparation of a single TR state becomes increasingly difficult. It has been shown that in other antiferromagnets, such as NiO27, the domain structure is very sensitive to mechanical stress and magnetic fields, with the twin-wall dynamics appearing to be limited by a spin-rotation energy loss. We speculate that the sensitivity of the switching field to mechanical stress might be related to the proximity of the mixed antiferromagnetic/antiferroquadrupolar state of UO2 to the non-magnetic, purely quadrupolar phase28. A virtual transition into the quadrupolar state allows for the conversion from one TR state to the other without the need to overcome an anisotropy barrier.
In this scenario, internal constraints modify the local distortions that give rise to electric-field gradients and hence reduce the presence of the purely quadrupolar phase. Experiments are underway to explore the sensitivity of the switching field to uniaxial strain in this material. We would like to point out that the 888 peak splitting effect (the presence of two magnetic domains) observed in UO2 is real and does not originate in any way from the superposition of many field–pulse combinations.
We report low-temperature single-crystal X-ray diffraction studies of UO2 in pulsed magnetic fields. The high-resolution single-crystal diffraction allows us to study details of subtle unit-cell distortions below and above the structural and magnetic phase transition. We present direct microstructural observations of the piezomagnetic and switching effects, which are a direct consequence of the non-collinear 3-k magnetic order that breaks time-reversal symmetry in a non-trivial way. These results will help in better understanding the strong coupling between magnetic and structural degrees of freedom in this important nuclear material. We also observe the presence of magnetic domains with distinct magnetic-field evolution in the magnetically ordered state of UO2, both when the field is applied repeatedly in a single direction and when the field direction is alternated. We argue that the origin of this behavior is related to the presence of the distinct TR versions of the 3-k structure. This behavior, especially the role of quadrupoles in the piezomagnetic properties, should be investigated in the future to further explore the details of the field-induced broken symmetries.
It is worth noting that the quality of the data obtained in the experiment was only possible by bringing together a unique combination of factors. Through the use of an exceptional-quality single-crystal sample (evidenced by very narrow diffraction peaks, see Supplementary Fig. 1), high-intensity synchrotron radiation, high pulsed magnetic fields, and a fast prototype compound area detector, we were able to achieve high precision in our measurements during the short time frame of a pulsed field (using one of the two choices of ~7 and 10 ms total [start-to-finish] duration currently available). The single crystal of UO2 that was used in these studies is from the same batch as used in previous measurements (refs. 5 and 13). The crystal was aligned and cut into a plate approximately 500 × 500 × 200 µm, with the [111] crystal face normal to the plane. This allows us to study the longitudinal magnetostriction of the UO2 crystal along the desired <111> crystallographic direction. The experiment was carried out in a dual-cryostat, single-solenoid pulsed-magnet system described in ref. 29, at beamline 6-ID-C of the Advanced Photon Source at Argonne National Laboratory.
The detector used was a compound-type Mixed-Mode Pixel Array Detector (MM-PAD) with a silicon sensor30,31. This allowed for high dynamic range (up to 2 × 107 x-rays/pixel/frame at 15.85 keV), low background single photon counting, and a fast frame rate so that multiple images at different rising and falling field strengths during pulses could be obtained. The angular 2Θ range of the beam, where 2Θ is the angle between the incident and scattered rays, was calibrated on the area detector using the [888] peak of a silicon reference crystal, mounted at the center of sample theta rotation of the cryostat. The collection window for each frame was about 140 µs at a frame rate of 1 kHz, and the detector array was ca. 4 cm × 6 cm. For temperature scans in zero field, the sample was stabilized at various temperatures and then rotated through the Bragg condition at several hundred discrete sample theta positions as frames were taken at each position, then the frame intensities were added to obtain the full [888] peak profile. The center of mass in sample θ and Bragg peak position 2θ of the profile were found and converted to the change in d-spacing to find the change in unit cell length along the <111> direction. A similar process was used for measurements in the applied field, with fewer discrete sample θ rotation positions, limited by the long duty cycle of the magnet for each individual pulse, as elaborated on in the supplementary material (Supplementary Figs. 4–8).
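For readers who wish to reproduce the conversion from peak position to strain, the following minimal Python sketch (not part of the analysis pipeline; the peak positions in the last line are invented purely for illustration) applies Bragg's law. Since d = λ/(2 sin θ) at fixed wavelength, the wavelength cancels in the ratio of d-spacings, so only the two 2θ values are needed.

import math

def strain_from_two_theta(twotheta_zero_deg, twotheta_field_deg):
    # Relative change in d-spacing (delta d / d) deduced from the shift of a Bragg peak,
    # using d = lambda / (2 sin(theta)) at a fixed wavelength.
    t0 = math.radians(twotheta_zero_deg) / 2.0
    tH = math.radians(twotheta_field_deg) / 2.0
    return math.sin(t0) / math.sin(tH) - 1.0

# Hypothetical peak positions (degrees), for illustration only:
print(f"{strain_from_two_theta(60.000, 59.995):.2e}")   # about +7.6e-5, an expansion along <111>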
The capacitor bank used in these experiments provided a maximum field of about 21 T and a pulse width of ~7 ms, as shown in Supplementary Fig. 5, with the ability to internally reverse the direction of the supplied current to the magnet coil. The reversed field part of the experiment was carried out in a similar manner, but after a first pulse in the positive direction (which was then discarded), successive pulses were taken in alternating directions along the <111> direction of the sample crystal. The resulting reversed-field direction frames were then analyzed the same way as the single direction pulses (Fig. 3b and c).
The data that support the findings of this study are available from the corresponding author upon request.
Pines, D. Emergent behavior in strongly correlated electron systems. Rep. Prog. Phys. 79, 9 (2016).
Sarrao, J. L. et al. Plutonium-based superconductivity with a transition temperature above 18K. Nature 420, 297–299 (2002).
Willers, T. et al. Correlation between ground state and orbital anisotropy in heavy fermion materials. Proc. Natl Acad. Sci. USA 112, 2384–2388 (2015).
Fleet, L. Multiferroics: in rare form. Nat. Phys. 13, 926 (2017).
Gofryk, K. et al. Anisotropic thermal conductivity in uranium dioxide. Nat. Commun. 5, 4551 (2014).
Santini, P. et al. Multipolar interactions in f-electron systems: the paradigm of actinide dioxides. Rev. Mod. Phys. 81, 807 (2009).
Moore, K. T. & van der Laan, G. Nature of the 5f-states in actinide metals. Rev. Mod. Phys. 81, 235 (2009).
Gilbertson, S. M. et al. Ultrafast photoemission spectroscopy of the uranium dioxide UO2 Mott insulator: evidence for a robust energy gap structure. Phys. Rev. Lett. 112, 087402 (2014).
Murty, K. L. & Charit, I. An Introduction to Nuclear Materials: Fundamentals and Applications (Wiley, 2013).
Prokhorov, A. S. & Rudashevskii, E. G. Magnetoelastic interactions and the single-domain antiferromagnetic state in cobalt fluoride. Kratk. Soobshch. Fiz. 11, 3–6 (1975).
Carretta, S., Santini, P., Caciuffo, R. & Amoretti, G. Quadrupolar waves in uranium dioxide. Phys. Rev. Lett. 105, 167201 (2010).
Caciuffo, R. et al. Multipolar, magnetic, and vibrational lattice dynamics in the low-temperature phase of uranium dioxide. Phys. Rev. B. 84, 104409 (2011).
Jaime, M. et al. Piezomagnetism and magnetoelastic memory in uranium dioxide. Nat. Commun. 8, 99 (2017).
Dzialoshinskii, I. E. The problem of piezomagnetism. Sov. Phys. JETP 6, 621 (1958).
Borovik-Romanov, A. S. Piezomagnetism in the antiferromagnetic fluorides of cobalt and manganese. Sov. Phys. JETP 11, 786–793 (1960).
He, Q. et al. Electrically controllable spontaneous magnetism in nanoscale mixed phase multiferroics. Nat. Commun. 2, 225 (2011).
Carmichael, R. S. Depth calculation of piezomagnetic effect for earthquake prediction. Earth Planet. Sci. Lett. 36, 309–316 (1977).
Okubo, Ayako & Kanda, Wataru Numerical simulation of piezomagnetic changes associated with hydrothermal pressurization. Geophys. J. Int. 181, 1343–1361 (2010).
Henyey, T. L., Pike, S. J. & Palmer, D. F. On the Measurement of Stress Sensitivity of NRM Using a Cryogenic Magnetometer. In: (eds Fuller, M., Johnston, M. J. S. & Yukutake, T.). Tectonomagnetics and Local Geomagnetic Field Variations. Advances in Earth and Planetary Sciences, vol 5. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-9825-0_15 (1979).
Boldrin, D. et al. Giant piezomagnetism in Mn3NiN. ACS Appl. Mater. Interfaces 10, 18863–18868 (2018).
Tarakanov, V. V. Some remarks on the problem of piezomagnetism. Phys. B 284-288, 1213–1214 (2000).
Brandt, O. G. & Walker, C. T. Temperature dependence of elastic constants and thermal expansion for UO2. Phys. Rev. Lett. 18, 1 (1967).
White, G. K. & Sheard, F. W. The thermal expansion at low temperatures of UO2 and UO2/ThO2. J. Low Temp. Phys. 14, 445 (1974).
Jaime, M., Gofryk, K. & Bauer, E. D. Magnetoelastics of high field phenomena in antiferromagnets UO2 and CeRhIn5. IEEE Xplore https://doi.org/10.1109/MEGAGAUSS.2018.8722665 (2019).
Baruchel, J., Schlenker, M. & Barbara, B. 180° antiferromagnetic domains in MnF2 by neutron topography. J. Magn. Magn. Mater. 15, 1520 (1980).
Baruchel, J. et al. Piezomagnetism and domains in MnF2. J. Phys. Colloq. 49, C8-1895–C8-1896 (1988).
Slack, G. A. Crystallography and domain walls in antiferromagnetic NiO crystals. J. Appl. Phys. 31, 1571 (1960).
Giannozzi, P. & Erdös, P. Theoretical analysis of the 3-k magnetic structure and distortion of uranium dioxide. J. Magn. Magn. Mater. 67, 75–87 (1987).
Islam, Z. et al. A single-solenoid pulsed-magnet system for single-crystal scattering studies. Rev. Sci. Instrum. 83, 035101 (2012).
Tate, M. W. et al. A medium-format, mixed-mode pixel array detector for kilohertz x-ray imaging. J. Phys. Conf. Ser. 425, 062004 (2013).
Schuette, D. R. "A Mixed Analog and Digital Pixel Array Detector for Synchrotron X-ray Imaging." PhD thesis, Cornell Univ. (2008).
Work by D.A., K.S. and K.G. was supported by the DOE's Early Career Research Program under the project "Actinide materials under extreme conditions". This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. High-field pulsed magnet and a choke coil were installed at the Advanced Photon Source through a partnership with International Collaboration Center at the Institute for Materials Research (ICC-IMR) and Global Institute for Materials Research Tohoku (GIMRT) at Tohoku University. Detector research in S.G.'s laboratory is supported by DOE grant DE-SC0017631. J.P.C.R.'s research at CHESS/CHEXS was supported by the National Science Foundation under awards DMR-1332208 and DMR-1829070. Work at the NHMFL was supported by the National Science Foundation through Cooperative Agreement No. DMR-1644779, the State of Florida, and the U.S. DOE Office of Basic Energy Science Project No. "Science at 100 T."
Marcelo Jaime
Present address: Electrical Quantum Metrology Dep., National Metrology Institute, Braunschweig, Germany
Keshav Shrestha
Present address: Department of Chemistry and Physics, West Texas A&M University, Canyon, TX, USA
Idaho National Laboratory, Idaho Falls, ID, USA
Daniel J. Antonio, Keshav Shrestha, Tomasz Durakiewicz & Krzysztof Gofryk
Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY, USA
Joel T. Weiss, Katherine S. Shanks & Sol M. Gruner
CHESS, Cornell University, Ithaca, NY, USA
Jacob P. C. Ruff & Sol M. Gruner
National High Magnetic Field Laboratory, Los Alamos, NM, USA
Marcelo Jaime & Myron Salamon
Aix-Marseille University, CINaM-CNRS UMR 7325 Campus de Luminy, Marseille, France
Andres Saul & Thomas Swinburne
University of Nevada, Las Vegas, Las Vegas, NV, USA
Barbara Lavina & Daniel Koury
Los Alamos National Laboratory, Los Alamos, NM, USA
David A. Andersson, Christopher R. Stanek & James L. Smith
Advanced Photon Source, Argonne National Laboratory, Lemont, IL, USA
Zahirul Islam
K.G. proposed the experimental studies and D.J.A., Z.I. and K.G. designed the research. D.J.A., Z.I., J.T.W., K.S.S., J.P.C.R., M.J., B.L., K.S. and K.G. performed the experiments. T.D., J.L.S., D.A.A. and C.R.S. provided the UO2 single-crystal samples and S.M.G. provided the MM-PAD detector. D.K. helped in the sample preparation. A.S., T.S., M.J., K.G. and M.S. discussed and developed the theoretical description. All authors helped prepare various pieces of the experimental apparatus, discussed and interpreted the results, and contributed to the preparation and writing of the manuscript.
Correspondence to Krzysztof Gofryk.
Peer review information Primary handling editor: Aldo Isidori.
Antonio, D.J., Weiss, J.T., Shanks, K.S. et al. Piezomagnetic switching and complex phase equilibria in uranium dioxide. Commun Mater 2, 17 (2021). https://doi.org/10.1038/s43246-021-00121-6
Statistic
A statistic (singular) or sample statistic is any quantity computed from values in a sample which is considered for a statistical purpose. Statistical purposes include estimating a population parameter, describing a sample, or evaluating a hypothesis. The average (or mean) of sample values is a statistic. The term statistic is used both for the function and for the value of the function on a given sample. When a statistic is being used for a specific purpose, it may be referred to by a name indicating its purpose.
When a statistic is used for estimating a population parameter, the statistic is called an estimator. A population parameter is any characteristic of a population under study, but when it is not feasible to directly measure the value of a population parameter, statistical methods are used to infer the likely value of the parameter on the basis of a statistic computed from a sample taken from the population. For example, the sample mean is an unbiased estimator of the population mean. This means that the expected value of the sample mean equals the true population mean.[1]
A descriptive statistic is used to summarize the sample data. A test statistic is used in statistical hypothesis testing. A single statistic can be used for multiple purposes – for example, the sample mean can be used to estimate the population mean, to describe a sample data set, or to test a hypothesis.
Examples
Some examples of statistics are:
• "In a recent survey of Americans, 52% of Republicans say global warming is happening."
In this case, "52%" is a statistic, namely the percentage of Republicans in the survey sample who believe in global warming. The population is the set of all Republicans in the United States, and the population parameter being estimated is the percentage of all Republicans in the United States, not just those surveyed, who believe in global warming.
• "The manager of a large hotel located near Disney World indicated that 20 selected guests had a mean length of stay equal to 5.6 days."
In this example, "5.6 days" is a statistic, namely the mean length of stay for our sample of 20 hotel guests. The population is the set of all guests of this hotel, and the population parameter being estimated is the mean length of stay for all guests.[2] Whether the estimator is unbiased in this case depends upon the sample selection process; see the inspection paradox.
There are a variety of functions that are used to calculate statistics. Some include:
• Sample mean, sample median, and sample mode
• Sample variance and sample standard deviation
• Sample quantiles besides the median, e.g., quartiles and percentiles
• Test statistics, such as t-statistic, chi-squared statistic, f statistic
• Order statistics, including sample maximum and minimum
• Sample moments and functions thereof, including kurtosis and skewness
• Various functionals of the empirical distribution function
Properties
Observability
Statisticians often contemplate a parameterized family of probability distributions, any member of which could be the distribution of some measurable aspect of each member of a population, from which a sample is drawn randomly. For example, the parameter may be the average height of 25-year-old men in North America. The height of the members of a sample of 100 such men are measured; the average of those 100 numbers is a statistic. The average of the heights of all members of the population is not a statistic unless that has somehow also been ascertained (such as by measuring every member of the population). The average height that would be calculated using all of the individual heights of all 25-year-old North American men is a parameter, and not a statistic.
Statistical properties
Important potential properties of statistics include completeness, consistency, sufficiency, unbiasedness, minimum mean square error, low variance, robustness, and computational convenience.
Information of a statistic
Information of a statistic on model parameters can be defined in several ways. The most common is the Fisher information, which is defined on the statistical model induced by the statistic. The Kullback information measure can also be used.
See also
• Statistics
• Statistical theory
• Descriptive statistics
• Statistical hypothesis testing
• Summary statistic
• Well-behaved statistic
References
1. Kokoska 2015, p. 296-308.
2. Kokoska 2015, p. 296-297.
• Kokoska, Stephen (2015). Introductory Statistics: A Problem-Solving Approach (2nd ed.). New York: W. H. Freeman and Company. ISBN 978-1-4641-1169-3.
• Parker, Sybil P (editor in chief). "Statistic". McGraw-Hill Dictionary of Scientific and Technical Terms. Fifth Edition. McGraw-Hill, Inc. 1994. ISBN 0-07-042333-4. Page 1912.
• DeGroot and Schervish. "Definition of a Statistic". Probability and Statistics. International Edition. Third Edition. Addison Wesley. 2002. ISBN 0-321-20473-5. Pages 370 to 371.
\begin{document}
\iffalse \setpagewiselinenumbers \modulolinenumbers[1] \linenumbers \fi \begin{titlepage} \title{Sandpile groups of generalized de Bruijn and Kautz graphs and circulant matrices over finite fields} \date{\today} \author{ Swee Hong Chan\thanks{ Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore, email: \{sweehong, henk.hollmann\}@ntu.edu.sg} \and Henk D.L.~ Hollmann\footnotemark[1] \and Dmitrii V.~ Pasechnik\thanks{ Department of Computer Science, University of Oxford, UK, email: [email protected] (corresponding author)} } \maketitle \begin{abstract} A maximal minor $M$ of the Laplacian of an $n$-vertex Eulerian digraph $\Gamma$ gives rise to a finite group $\mathbb{Z}^{n-1}/\mathbb{Z}^{n-1}M$ known as the
{\em sandpile} (or {\em critical}) {\em group} $S(\Gamma)$ of $\Gamma$. We determine $S(\Gamma)$ of the {\em generalized de Bruijn graphs} $\Gamma=\mathrm{DB}(n,d)$ with vertices $0,\dots,n-1$ and arcs $(i,di+k)$ for $0\leq i\leq n-1$ and $0\leq k\leq d-1$, and closely related {\em generalized Kautz graphs}, extending and completing earlier results for the classical de Bruijn and Kautz graphs.
Moreover, for a prime $p$ and an $n$-cycle permutation matrix $X\in\mathrm{GL}_n(p)$ we show that $S(\mathrm{DB}(n,p))$ is isomorphic to the quotient by $\langle X\rangle$ of the centraliser of $X$ in $\mathrm{PGL}_n(p)$. This offers an explanation for the coincidence of numerical data in sequences A027362 and A003473 of the OEIS, and allows one to speculate upon a possibility to construct normal bases in the finite field $\mathbb{F}_{p^n}$ from spanning trees in~$\mathrm{DB}(n,p)$. \end{abstract}
\begin{center} {\it Dedicated to the memory of \'{A}kos Seress.} \end{center}
\end{titlepage}
\section{\label{sect:int}Introduction}
The {\em critical group\/} $S(G)$ of a directed graph~$G$ is an abelian group obtained from the Laplacian matrix $\Delta$ of~$G$. It carries the same information as the Smith Normal Form (SNF) of~$\Delta$. (For precise definitions of these and other terms, we refer to the next section.) The {\em sandpile group\/} $S(G,v)$ of a
directed graph or {\em digraph\/}~$G$ at a vertex $v$ is an abelian group obtained from the reduced Laplacian $\Delta_v$ of~$G$; by the Matrix Tree Theorem \cite{AaDB-tree}, its order is equal to
the number of directed
trees rooted at~$v$, see for example \cite{LeLI-sand}.
If the graph~$G$ is Eulerian, then the sandpile group does not depend on the vertex~$v$ and is equal to the critical group $S(G)$ of~$G$ \cite{HLMPPW-chip}. Most of the literature on sandpile groups is concerned with
{\em undirected\/} graphs,
which can be considered as a special case, namely for directed graphs that are obtained by replacing each undirected edge in a graph by a pair of directed edges oriented in opposite directions.
The critical group has been studied in
other contexts under various other names, such as group of components (in arithmetic geometry),
Jacobian group and Picard group (for algebraic curves), and Smith group (for matrices). For more details and background, see, for example, \cite{Lo-fin, Ru-phd, AlVa-cone} for the undirected case and \cite{ HLMPPW-chip, wag-crit} for the directed case.
Critical groups have been determined for many families of (mostly undirected) graphs. For some examples, see \cite{ChHou-pc, HWC-sqc,Ra-W3, RaWa-W4,CHW-mob,DaFiFr-dihe,SH-bra, Me-eq,Bi-chip}, and the references in \cite{AlVa-cone}.
Here, we determine the
critical group of the generalized de Bruijn graphs $\mathrm{DB}(n,d)$ and generalized Kautz graphs $\mathrm{Ktz}(n,d)$ (which are in fact both directed graphs), thereby extending and completing the results from \cite{LeLI-sand} for the binary de Bruijn graphs $\mathrm{DB}(2^\ell,2)$ and Kautz graphs $\mathrm{Ktz}((p-1)p^{\ell-1},p)$ for primes $p$, and \cite{BiKi-knuth} for the classical de Bruijn graphs $\mathrm{DB}(d^\ell,d)$ and Kautz graphs $\mathrm{Ktz}((d-1)d^{\ell-1},d)$. Unlike the classical case, the generalized versions are not necessarily iterated line graphs, so to obtain their critical groups, techniques different from those in \cite{LeLI-sand} and \cite{BiKi-knuth} have to be applied.
As set out in \cite{DuPa-neck}, our original motivation for studying the sandpile groups of generalized de Bruijn graphs
was to explain their apparent relation to some other algebraic objects, such as the groups $C(n,p)$ of invertible $n\times n$-circulant matrices over $\mathbb{F}_{p}$ (mysterious numerical coincidences of the OEIS entries A027362 and A003473 were noted by the third author \cite{oeis:A027362}, while running extensive computer experiments using Sage~\cite{sage,sagesandpiles}), and {\em normal bases} (cf. e.g. \cite{MR1429394}) of finite fields $\mathbb{F}_{p^n}$, in the case where $(n,p)=1$. The latter were noted to be closely related to circulant matrices and to {\em necklaces\/} by Reutenauer \cite[Sect.~7.6.2]{Reu-lie}, see also \cite{DuPa-neck} and the related numeric data collected in~\cite{Arn-comp}. Here we show that the critical group of $\mathrm{DB}(n,p)$ is isomorphic to $C(n,p)/(\langle Q_n\rangle \times \mathbb{F}_p^*)$, where $Q_n$ denotes the permutation matrix of the $n$-cycle. Although we were not able to construct an explicit bijection between the former and the latter, we could speculate that potentially one might be able to design a new deterministic way to construct normal bases of $\mathbb{F}_{p^n}$ from spanning trees in~$\mathrm{DB}(n,p)$. For more details and background on this connection with aperiodic necklaces, we refer the interested reader to~\cite{DuPa-neck}. Most of the results for the case $d$ prime were first derived in the undergraduate thesis \cite{chan-db} by the first author, supervised by the third author. Results in this text were announced in an extended abstract \cite{sandpiles_eurocomb13}.
\section{Preliminaries}\label{sect:prelims}
In this section, we introduce the necessary terminology and background. First, in Section~\ref{ssect:scs}, we discuss the Smith Normal
Form and the Smith group, and the critical group and sandpile group of a directed graph, as well as the relations between these notions. Then in Section~\ref{ssect:gdbk}, we define the generalized De Bruijn and Kautz graphs. The group of invertible circulants is defined in Section~\ref{ssect:Icm}. Finally, in Section~\ref{ssect:sd}, we derive expressions for the sandpile group of generalized De Bruijn and Kautz graphs as embeddings in a group that we refer to as sand dune group.
\subsection{\label{ssect:scs}Smith group, Critical Group, and Sandpile Group}
Let $M$ be a rank~$r$ integer $m\times n$ matrix. There exist
positive integers $s_1, \ldots, s_r$
with $s_i|s_{i+1}$ for $i=1, \ldots, r-1$
and unimodular
matrices $P$ and $Q$
such that $PMQ=D={\rm diag}(s_1, \ldots, s_r, 0, \ldots, 0)$.
The diagonal matrix $D$ is called the {\em Smith Normal Form\/} (SNF) of $M$, and the numbers $s_1, \ldots, s_r$ are the
nonzero {\em invariant factors\/} of~$M$.
The SNF, and hence the invariant factors, are {\em uniquely\/} determined by the matrix~$M$. For background on the SNF and invariant factors, see, e.g.,~\cite{Ne-im}.
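The nonzero invariant factors can be computed from the {\em determinantal divisors\/}: if $d_k$ denotes the greatest common divisor of all $k\times k$ minors of $M$ (with $d_0=1$), then $s_k=d_k/d_{k-1}$. The following short Python sketch (added purely as an illustration; it is not part of the mathematical argument and is practical only for small matrices) implements this description directly.
\begin{verbatim}
from itertools import combinations
from math import gcd

def det_int(M):
    # exact determinant of a small integer matrix, by Laplace expansion
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    # nonzero invariant factors s_1 | s_2 | ... of an integer matrix M, obtained
    # from the determinantal divisors d_k = gcd of all k x k minors (d_0 = 1)
    m, n = len(M), len(M[0])
    d = [1]
    for k in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                g = gcd(g, abs(det_int([[M[i][j] for j in cols] for i in rows])))
        if g == 0:
            break              # the rank of M has been reached
        d.append(g)
    return [d[k] // d[k - 1] for k in range(1, len(d))]

print(invariant_factors([[4, 0], [0, 6]]))    # [2, 12]: the SNF of diag(4, 6) is diag(2, 12)
\end{verbatim}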
The {\em Smith group\/}~\cite{Ru-phd} of $M$ is
$\Gamma(M)=\mathbb{Z}^n/\mathbb{Z}^m M$;
the submodule
$\overline{\Gamma}(M)=(\mathbb{Q}^mM\cap \mathbb{Z}^n)/\mathbb{Z}^mM$ of~$\Gamma(M)$ is the torsion subgroup of $\Gamma(M)$.
Indeed, if $M$ has rank~$r$, then
$\Gamma(M) = \mathbb{Z}^{n-r} \oplus \overline{\Gamma}(M)$
with
$\overline{\Gamma}(M) = \oplus_{i=1}^r \mathbb{Z}_{s_i}$, where $s_1, \ldots, s_r$ are the nonzero invariant factors of~$M$.
See \cite{Ru-phd} for further details and proofs.
Let $G=(V,E)$ be a finite directed graph with
vertex set $V$ and edge set $E$ (we allow loops and multiple edges). The {\em adjacency matrix\/} of~$G$ is the $|V|\times |V|$ matrix $A=(A_{v,w})$, with rows and columns indexed by $V$, where the entry $A_{v,w}$ in position $(v,w)$ is the number of edges from $v$ to~$w$. The indegree $d^-(v)$ and the outdegree $d^+(v)$ are the numbers of edges ending and starting at vertex~$v$, respectively.
The {\em Laplacian\/} of~$G$ is the matrix $\Delta=D-A$, where $D$ is diagonal with $D_{v,v}=d^+_v$. The {\em critical group\/} $S(G)$ of~$G$ is the torsion subgroup $\overline{\Gamma}(\Delta)$ of the Smith group of the Laplacian~$\Delta$ of~$G$. The {\em sandpile group\/} $S(G,v)$ of~$G$ at a vertex~$v$ is the torsion subgroup of the Smith group of the
{\em reduced Laplacian\/} $\Delta_v$, obtained from $\Delta$ by deleting the row and the column of $\Delta$ indexed by~$v$. Note that by the Matrix Tree Theorem for directed graphs, the order of $S(G,v)$ equals the number of directed spanning trees rooted at~$v$. The directed graph $G$ is called {\em Eulerian\/} if $d^+(v)=d^-(v)$ for every vertex~$v$.
In that case, $S(G,v)$ does not depend on the vertex~$v$ and is equal to the critical group $S(G)$ of~$G$, essentially because in that case, not only the columns, but also the rows of the Laplacian $\Delta$ of~$G$ sum to zero; for a detailed proof, see \cite{HLMPPW-chip} (note that
the proof as given there does not use the assumption that the directed graph is connected). For more details on sandpile groups and the critical group of a directed graph, we refer for example to \cite{HLMPPW-chip} or \cite{wag-crit}.
\subsection{\label{ssect:gdbk}Generalized de Bruijn and Kautz graphs}
Generalized de Bruijn graphs \cite{DuHw-GdB} and generalized Kautz graphs \cite{DCH-dBKgen} are known to have a relatively small diameter and attractive connectivity properties, and have been studied intensively due to their applications in interconnection networks. The generalized Kautz graphs were first investigated in \cite{ImIt-diam-ieee}, \cite{ImIt-diam-net}, and are
also known as {\em Imase-Itoh digraphs\/}. Both classes of directed graphs are Eulerian.
Let $n$ and $d$ be integers with $n\geq1$ and $d\geq0$.
The {\em generalized de Bruijn graph\/} $\mathrm{DB}(n,d)$ has vertex set $\mathbb{Z}_n$, the set of integers modulo~$n$, and $d$ (directed) edges $v\rightarrow dv+i$, for $0\leq i\leq d-1$, for $0\leq v\leq n-1$. The {\em generalized Kautz graph\/}
${\rm Ktz}(n,d)$ has vertex set $\mathbb{Z}_n$ and $d$ directed edges $v\rightarrow -d(v+1) +i$, for $0\leq i\leq d-1$, for $0\leq v\leq n-1$. (For both generalized de Bruijn and Kautz graphs, we allow multiple edges when $d>n$.) These directed graphs are important special cases of the so-called {\em consecutive-$d$ digraphs\/}. Here a consecutive-$d$ digraph $G(d, n, q, r)$, defined for $q\in \mathbb{Z}_n\setminus \{0\}$ and $r\in\mathbb{Z}_n$, has vertex set $\mathbb{Z}_n$ and directed edges $v\rightarrow qv+r +i$ for $0\leq i\leq d-1$ and $0\leq v\leq n-1$. Note that the generalized de Bruijn and Kautz graphs are the cases $q=d, r=0$ and $q=-d, r=-d$, respectively.
It is easily verified that both $\mathrm{DB}(n,d)$ and $\mathrm{Ktz}(n,d)$ are indeed Eulerian for all integers $n\geq1$ and $d\geq0$.
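The following small Python sketch (included only as an illustration of the definitions, not as part of the argument) builds the edge lists of $\mathrm{DB}(n,d)$ and $\mathrm{Ktz}(n,d)$ and confirms the Eulerian property for a range of parameters.
\begin{verbatim}
def de_bruijn_edges(n, d):
    # directed edges v -> d*v + i (mod n), 0 <= i <= d-1, of DB(n, d)
    return [(v, (d * v + i) % n) for v in range(n) for i in range(d)]

def kautz_edges(n, d):
    # directed edges v -> -d*(v+1) + i (mod n), 0 <= i <= d-1, of Ktz(n, d)
    return [(v, (-d * (v + 1) + i) % n) for v in range(n) for i in range(d)]

def is_eulerian(n, edges):
    indeg, outdeg = [0] * n, [0] * n
    for v, w in edges:
        outdeg[v] += 1
        indeg[w] += 1
    return indeg == outdeg

for n in range(1, 30):
    for d in range(2, 8):
        assert is_eulerian(n, de_bruijn_edges(n, d))
        assert is_eulerian(n, kautz_edges(n, d))
\end{verbatim}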
It is easily seen that for $d=0$ or $1$, the critical groups for both the de Bruijn graph and the Kautz graph are trivial.
\subsection{\label{ssect:Icm}The group of invertible circulant matrices}
Let $q=p^r$ be a prime power. An $n\times n$ circulant matrix over a finite field~$\mathbb{F}_q$ is a matrix~$C$ of the form \beql{Lcirc1def} C= \left( \begin{array}{cccc} c_0& c_{n-1} & \cdots & c_1 \\ c_1& c_0 & \ddots & \vdots \\ \vdots & \ddots& \ddots &c_{n-1}\\ c_{n-1}& \cdots &c_1 & c_0 \end{array} \right), \end{equation} where $c_0, \ldots, c_{n-1}\in \mathbb{F}_q$. With $C$ as in (\ref{Lcirc1def}) we associate the polynomial $c_C(x):=c(x)=c_0+c_1x+\cdots +c_{n-1}x^{n-1}\in\mathbb{F}_q[x]$. Let $X$ denote the matrix of the multiplication by $x$ map on the ring ${\cal R}:=\mathbb{F}_q[x]/(x^n-1)$ with respect to the basis $1, x, \ldots, x^{n-1}$. Every $C$ in~(\ref{Lcirc1def}) can be written as $C=\sum_{i=0}^{n-1} c_iX^i$, where $c(x)=c_C(x)$. Note that $c_X(x)=x$, and the map $x\mapsto X$ induces an isomorphism between ${\cal R}$ and the algebra $\mathbb{F}_q[X]$ of circular matrices.
The units of $\mathbb{F}_q[X]$ form a commutative group under multiplication; we denote this group by~$C(n,q)$. Under the isomorphism induced by $x\mapsto X$, the group $C(n,q)$ corresponds to \[{\cal R}^*=\{c(x)\in \mathbb{F}_q[x], \deg c<n \mid (c(x), x^n-1)=1\}.\] Indeed, by the extended Euclidean algorithm, $(c(x), x^n-1)=1$ if and only if there exist $u,v\in\mathbb{F}_q[x]$ such that $cu=1-(x^n-1)v$, i.e. $u=c^{-1}\in {\cal R}$. On the other hand, $c(X)u(X)=I-(X^n-I)v(X)=I$, i.e. $u(X)=c(X)^{-1}\in C(n,q)$.
Note that $C(n,q)$ contains a subgroup isomorphic to $\mathbb{Z}_{q-1}\oplus \mathbb{Z}_n$, namely the direct product of the group of scalar matrices $\mathbb{F}_q^* I:=\{\lambda I\mid \lambda\in\mathbb{F}_q^*\}$ and the cyclic subgroup $\langle X\rangle$ generated by $X$.
Each $C\in\mathbb{F}_q[X]$ has the all-ones vector $\mathbf{1}:=(1,\dots,1)^\top$ as an eigenvector. Thus $C'(n,q):=\{C\in C(n,q)\mid C\mathbf{1}=\mathbf{1}\}\leq C(n,q)$, and we have the following direct product decomposition. \begin{equation}\label{Cnpdec} C(n,q)=C'(n,q)\times \mathbb{F}_q^* I, \end{equation} where, as usual, $\mathbb{F}_q^*=\mathbb{F}_q\setminus\{0\}$. In view of \eqref{Cnpdec} one has $C'(n,q)\leq\mathrm{PGL}_n(q)$. Note that \beql{LECp} C'(n,q) \cong \{a(x)\in \mathbb{F}_q[x], \deg a<n \mid \mbox{$(a(x),x^n-1)=1$ and $a(1)=1$}\}. \end{equation} We also note that although $\langle X\rangle\leq C'(n,q)$, it is not necessarily a {\em direct summand\/} of $C'(n,q)$ (for example, take $n=4$ and $q=3$: then $C'(n,q)\cong\mathbb{Z}_2\oplus\mathbb{Z}_8$ and $\langle X\rangle\cong\mathbb{Z}_4$, and one checks that no subgroup of order~$4$ intersects $\langle X\rangle$ trivially).
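For small parameters, the correspondence between $C(n,q)$ and ${\cal R}^*$ and the decomposition \eqref{Cnpdec} can be checked by brute force. The Python sketch below (purely illustrative, and restricted to prime $q=p$ so that coefficient arithmetic is simply arithmetic modulo $p$) counts the polynomials $c(x)$ of degree less than $n$ with $(c(x),x^n-1)=1$, and those that in addition satisfy $c(1)=1$.
\begin{verbatim}
from itertools import product

def trim(c, p):
    while c and c[-1] % p == 0:
        c.pop()
    return c

def gcd_is_one(a, b, p):
    # True iff gcd(a, b) = 1 in F_p[x]; a, b are coefficient lists, lowest degree first
    a, b = trim([x % p for x in a], p), trim([x % p for x in b], p)
    while b:
        r = a[:]
        while len(r) >= len(b):
            factor = r[-1] * pow(b[-1], -1, p) % p
            shift = len(r) - len(b)
            for i, coef in enumerate(b):
                r[i + shift] = (r[i + shift] - factor * coef) % p
            trim(r, p)
        a, b = b, r
    return len(a) == 1

def count_units(n, p):
    x_n_minus_1 = [p - 1] + [0] * (n - 1) + [1]       # x^n - 1 over F_p
    c_np = c_prime_np = 0
    for coeffs in product(range(p), repeat=n):        # c_0 + c_1 x + ... + c_{n-1} x^{n-1}
        if gcd_is_one(list(coeffs), x_n_minus_1, p):
            c_np += 1
            if sum(coeffs) % p == 1:                  # c(1) = 1
                c_prime_np += 1
    return c_np, c_prime_np

for n, p in [(3, 2), (5, 2), (4, 3)]:
    c_np, c_prime_np = count_units(n, p)
    assert c_np == (p - 1) * c_prime_np               # the decomposition C(n,p) = C'(n,p) x F_p^* I
    print(n, p, c_np, c_prime_np)                     # (4, 3) gives 32 and 16
\end{verbatim}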
\begin{remark} Note that $C(n,q)$ is the centralizer of $X$ in $\GL_n(q)$. It also can be viewed as the group of units of the group algebra $\mathbb{F}_q[\langle X\rangle]$. \end{remark}
\subsection{\label{ssect:sd}The sandpile and sand dune groups}
We determine the sandpile group $S_{\rm DB}(n,d)$ (resp. $S_{\rm Ktz}(n,d)$) of a generalized de Bruijn (resp. Kautz graph) on $n$ vertices by embedding this group as a subgroup of
index~$n$ in a group that, for lack of a better name, we refer to as the {\em sand dune\/} group of the corresponding directed graph. This embedding method can in fact be applied to the much wider class of {\em consecutive-$d$ digraphs\/}~\cite{DHP-cons}, and the idea may have other applications as well.
Let us represent the elements of $\mathbb{Z}_n$ by $1, x, \ldots x^{n-1}$, and think of elements in the group algebra $\mathbb{Q}[{\mathbb{Z}_n}]$ as Laurent polynomials modulo $x^n-1$, that is, identify $\mathbb{Q}[{\mathbb{Z}_n}]$ with $\mathbb{Q}[x,x^{-1}] \bmod x^n-1$, and its subring $\mathbb{Z}[{\mathbb{Z}_n}]$ with $\mathbb{Z}[x,x^{-1}] \bmod x^n-1$. Furthermore, we identify a vector $c=(c_0,\ldots, c_{n-1})$ in $\mathbb{Q}^n$ with its associated polynomial $c(x)=c_0+\cdots +c_{n-1}x^{n-1}$ in $\mathbb{Q}[x,x^{-1}] \bmod x^n-1$; note that this association is in fact an isomorphism between $\mathbb{Q}^n$ and $\mathbb{Q}[x,x^{-1}] \bmod x^n-1$, considered as vector spaces over $\mathbb{Q}$. The advantage of this identification is that we now also have a multiplication available. Given a collection of vectors $V=\{v_i \mid i\in I\}$ in $\mathbb{Z}^n$, we denote the $\mathbb{Z}$-span of the associated polynomials by $\langle v_i(x) \mid i\in I\rangle_{\mathbb{Z}}$.
We now derive a useful description of both $S_{\rm DB}(n,d)$ and $S_{\rm Ktz}(n,d)$ using the Smith group $\Gamma(\Delta)$ of the Laplacian $\Delta$ of these digraphs as defined in Section~\ref{ssect:scs}. To this end, for every integer $d$ and every $v\in\mathbb{Z}_n$ we define the Laurent polynomial $f_v^{(n,d)}(x)$ in $\mathbb{Z}[x,x^{-1}]\bmod x^n-1$ as \beql{LEfv} f_v^{(n,d)}(x)= dx^v -x^{dv} (x^d-1)/(x-1). \end{equation} For $d\geq0$, we have \[ f_v^{(n,d)}(x)= dx^v -x^{dv} \sum_{i=0}^{d-1} x^i = \sum_{w\in \mathbb{Z}_n} \Delta_{v,w}x^w \] with $\Delta=\Delta_{\rm DB}$ the Laplacian of $\mathrm{DB}(n,d)$, while for $d<0$, we have
\[ f_v^{(n,d)}(x)= -|d|x^v -x^{-|d|v}
(x^{-|d|}-1)/(x-1)= -|d|x^v + x^{-|d|(v+1)}
\sum_{i=0}^{|d|-1} x^i =-\sum_{w\in \mathbb{Z}_n} \Delta_{v,w}x^w \] with
$\Delta=\Delta_{\rm Ktz}$ now the Laplacian of $\mathrm{Ktz}(n,|d|)$. In what follows, we simply write $\Delta$ to denote the Laplacian of either $\mathrm{DB}(n,d)$ or $\mathrm{Ktz}(n,d)$.
For later use, we also define \[ g_v^{(n,d)}(x)= (x-1)f^{(n,d)}_v(x) = dx^v (x-1) -x^{dv}(x^d-1)\] for every $0\leq v<n$. For the remainder of this paper, we let \begin{equation}\label{def:e_v} e_v^{(n)}=x^v-1, \qquad \epsilon_v^{(n,d)}=de^{(n)}_v-e^{(n)}_{dv}, \quad \text{for every $0\leq v\leq n-1$}; \end{equation} note that $e_0^{(n)}=\epsilon_0^{(n,d)}=0$. We simply write $f_v(x)$, $g_v(x)$, $e_v$ and $\epsilon_v$ if the intended values for $n$ and $d$ are evident from the context. Finally, we set \[{\cal Z}_n =\langle e_v\mid 1\leq v\leq n-1\rangle_\mathbb{Z}, \qquad {\cal E}_{n,d}=\langle \epsilon_v\mid 1\leq v\leq n-1\rangle_\mathbb{Z}.\]
In the next lemma, we collect some simple facts. \ble{LLbasic}\mbox{}\\
(1) We have that $\sum_{v=0}^{n-1}f_v(x)=0$ and $f_v(1)=0$;\\ (2) ${\cal Z}_n$ consists of all polynomials $c(x)\in\mathbb{Z}[x]$ for which $\deg c\leq n-1$ and $c(1)=0$; \\
(3) We have that $\epsilon_v=g_0(x)+\cdots+g_{v-1}(x)$ and ${\cal E}_{n,d}=\langle g_v(x)\mid 0<v<n \rangle_\mathbb{Z}$.
\end{lem} \begin{proof} (1) Since the columns of the Laplacian $\Delta$ add up to 0, we have that $f_v(1)=0$; moreover, since both $\mathrm{DB}(n,d)$ and $\mathrm{Ktz}(n,d)$ are Eulerian, in both cases the rows of $\Delta$ also add up to 0, so that $\sum_{v\in \mathbb{Z}_n}f_v(x)=0$ (these claims are also easily verified directly). Part (2) is obvious from the observation that $e_{v+1}-e_v=(x-1)x^v$, and to see (3), simply note that from (1), after multiplication by $x-1$, we obtain $g_0(x)+\cdots+g_{n-1}=0$. \end{proof}
We can now derive an expression for the Smith group~$\Gamma(\Delta)$ in terms of polynomials. \ble{LLSmith}
For the Smith group $\Gamma(\Delta)$ we have \[ \Gamma(\Delta)=\mathbb{Z}^n/\mathbb{Z}^n\Delta=(\mathbb{Z}[x] \bmod x^n-1)/\langle f_v(x) \mid 0\leq v\leq n-1\rangle_{\mathbb{Z}} = \mathbb{Z}\oplus {\cal Z}_n /\langle f_v(x) \mid 1\leq v\leq n-1\rangle_{\mathbb{Z}}.\] \end{lem} \begin{proof} First note that the vectors $\mathbb{Z}^n$ correspond to the polynomials in $\mathbb{Z}[x]\bmod x^n-1$ and, since the rows of $\Delta$ correspond to the polynomials $f_v(x)$ or $-f_v(x)$, the vectors in the row space $\mathbb{Z}^n\Delta$ correspond to the elements of $\langle f_v(x) \mid 0\leq v\leq n-1\rangle_{\mathbb{Z}}$. This shows the second equality in the lemma. Then, by Lemma~\ref{LLbasic} (2) and the Chinese Remainder Theorem, we have that $\mathbb{Z}_n[x]\bmod x^n-1 \cong \mathbb{Z}\oplus {\cal Z}_n$. Again by Lemma~\ref{LLbasic}, every $f_v(x)$ is contained in~${\cal Z}_n$ and $f_0(x)$ depends on the other $f_v(x)$, so the lemma follows. \end{proof}
The next result is one of the key points in our approach. \btm{LTspan}
Let $n$ and $d$ be integers, with $n\geq 1$. If $|d|\geq 2$, then the polynomials $\epsilon_v$, for $1\leq v\leq n-1$ (resp., the polynomials $g_v(x)$ ($1\leq v\leq n-1$)) are independent over $\mathbb{Q}$, and $\dim_{\mathbb{Q}} {\cal E}_{n,d}=n-1$. \end{teor} \begin{proof}
In view of Lemma~\ref{LLbasic} (3), it suffices to show that the $g_v(x)$ for $1\leq v\leq n-1$ are independent over~$\mathbb{Q}$. To see this, suppose that \[0\bmod x^n-1=\sum_{v\neq 0} a_vg_v(x)=d(x-1)\sum_{v\neq0} a_vx^v- (x^d-1)\sum_{v\neq0} a_vx^{dv},\quad a_v\in\mathbb{Q}.\] Writing $a(x)=\sum_{v>0} a_v x^v$ and $c(x)=\sum_{v> 0} a_vx^v(x-1)=(x-1)a(x)
=\sum_{i} c_ix^i$,
we have that $c(x^d)=dc(x)\bmod x^n-1$. Substituting $x^d$ for $x$, one obtains $c((x^d)^d)=c(x^{d^2})=dc(x^d)=d^2c(x)$. Similarly, $c(x^{d^3})=dc(x^{d^2})=d^3c(x)$, etc. Hence $c(x^{d^e})=d^ec(x)$ in $\mathbb{Q}[x]\bmod x^n-1$, for every integer $e\geq 1$. However, if
$|d|\geq 2$ and $c(x)\neq 0$, then the left-hand side has bounded coefficients while the right-hand side has an unbounded coefficient $d^ec_i$ if $c_i\neq 0$, a contradiction. It follows that $c(x)=(x-1)a(x)=0$, hence $a(x)\equiv 0\bmod 1+x+\cdots+x^{n-1}$. But as $a(0)=0$ and $\deg a<n$, it follows that $a_v=0$ for all $v$. \end{proof}
\bco{LCspan} For integers $d$ with $|d|\geq2$, the sandpile group (or equivalently, the critical group) $S(n,d)$ of the generalized de Bruijn graph $\mathrm{DB}(n,d)$ (if $d>0$) or of the generalized Kautz graph $\mathrm{Ktz}(n,|d|)$ (if $d<0$) can be expressed as \beql{LESPDB} S(n,d)={\cal Z}_n/ \langle f^{(n,d)}_v(x) \mid 1\leq v\leq n-1\rangle_{\mathbb{Z}}. \end{equation}
\end{cor} \begin{proof} First, we claim that the polynomials $f_v$ for $1\leq v\leq n-1$
are independent over~$\mathbb{Q}$. Indeed, every nontrivial relation between the $f_v(x)$ implies (after multiplication by $x-1$) a similar relation between the $g_v(x)$; however, according to Theorem~\ref{LTspan}, such a relation cannot exist if $|d|\geq2$. As a consequence, for $K:= \langle f_v(x) \mid 1\leq v\leq n-1 \rangle_{\mathbb{Z}}$ one has
$\dim_\mathbb{Q} K =n-1$.
In view of what was stated in Section~\ref{ssect:scs}, this implies that the quotient ${\cal Z}_n/K$ in Lemma~\ref{LLSmith} is a finite group, and the lemma follows. \end{proof}
It is not so easy to determine the structure of $S(n,d)$ by employing
(\ref{LESPDB}), due to the complicated form of the polynomials~$f_v(x)$. The polynomials $g_v(x)=(x-1)f_v(x)$ have a much easier structure, which motivates the following approach. We define the {\em sand dune group\/} $\gS(n,d)$ of the generalized de Bruijn graph $\mathrm{DB}(n,d)$ for $d\geq 2$ (resp. of the generalized Kautz graph $\mathrm{Ktz}(n,|d|)$ for $d\leq -2$) as \[ \gS(n,d)={\cal Z}_n/\langle g_v(x)\mid 1\leq v\leq n-1\rangle_{\mathbb{Z}} ={\cal Z}_n/{\cal E}_{n,d}. \]
The next result is crucial to our approach: it shows that $S(n,d)<\gS(n,d)$, and identifies the elements of $\gS(n,d)$ that are contained in $S(n,d)$.
\btm{Lsub} The sand dune group $\gS(n,d)$ is finite, and the sandpile group $S(n,d)$ is a subgroup of~$\gS(n,d)$. Moreover, if $a=\sum_{v=1}^{n-1} a_v e_v\in \gS(n,d)$, then $a\in S(n,d)$ if and only if $\sum_{v=1}^{n-1} va_v\equiv0 \bmod n$. \end{teor} \begin{proof} The finiteness of $\gS(n,d)$ follows from the fact that $\dim_\mathbb{Q} {\cal E}_{n,d}=n-1$, as proved in Lemma~\ref{LTspan}. Next, write $T_v(x)=e_v(x)/(x-1)=1+\cdots+x^{v-1}$ for $0\leq v\leq n-1$. Consider the map $\phi$ on $\mathbb{Q}[x]\bmod x^n-1$ for which $\phi(c(x))=(x-1)c(x)$. It is $\mathbb{Q}$-linear, and ${\rm Ker}\phi=\langle T_n(x)\rangle_{\mathbb{Q}}$. Since $T_n(1)=n\neq 0$, the restriction of~$\phi$ to ${\cal Z}_n$ is one-to-one, and $\phi$ maps $\langle f_v(x) \mid 1\leq v\leq n-1\rangle_{\mathbb{Z}}$ onto ${\cal E}_{n,d}$. As $\phi({\cal Z}_n)\subseteq {\cal Z}_n$, one sees that $\phi$ embeds $S(n,d)$ as a subgroup $\phi({\cal Z}_n)/{\cal E}_{n,d}$ in $\gS(n,d)$. To determine that subgroup, we need to determine $\phi({\cal Z}_n)$.
To this end, let $a(x)=\sum_v a_ve_v(x) \in {\cal Z}_n$.
Then we can write $a(x)=h(x)(x-1)$ with $h(x)=a(x)/(x-1)=\sum_v a_vT_v(x)$. Note that since $T_v(1)=v$, we have $h(1)=\sum_v va_v$.
Now $a(x)=\phi(b(x))=b(x)(x-1)$ with $\deg b<n$ precisely when $b(x)$ is of the form $b(x)=h(x)-\lambda T_n(x)$ with $\lambda\in \mathbb{Q}$; note that $b(x)\in {\cal Z}_n$ precisely when $\lambda\in \mathbb{Z}$ and $b(1)=h(1)-\lambda T_n(1)=\sum_v va_v-\lambda n=0$; such a $\lambda$ exists precisely when $\sum_v va_v\equiv 0\bmod n$. \end{proof}
\bco{LCord} We have $\gS(n,d)/S(n,d)=\mathbb{Z}_n$ and in particular
$|\gS(n,d)|=n|S(n,d)|$. \end{cor} \begin{proof} The map $\theta:\mathbb{Q}[x]/(x^n-1)\to \mathbb{Z}_n$ given by $\theta(\sum_v a_v e_v)=\sum_vva_v$ has the property that $\theta(\epsilon_v)=\theta(de_v-e_{dv})=0$. Hence it is well-defined as a map on~$\gS(n,d)$; it is obviously a homomorphism, and it is surjective since $\theta(e_v)=v$ for all $v\in \mathbb{Z}_n$. As a consequence,
$n=|\mathbb{Z}_n|=|\im(\theta)|=|\gS(n,d)|/|{\rm Ker}(\theta)|=|\gS(n,d)|/|S(n,d)|$. \end{proof}
We remark that the determination of $S(n,d)$ is complicated by the fact that $S(n,d)$ is not always a {\em direct summand\/} of $\gS(n,d)$, as is illustrated by the following. \bex{LExnods} Let $n=4$ and $d=3$. Then $\gS(n,d)=\mathbb{Z}_8\oplus \mathbb{Z}_2$ and $S(n,d)=\mathbb{Z}_4$, which is not a direct summand of $\mathbb{Z}_8\oplus \mathbb{Z}_2$. \end{examp}
The above descriptions of $S(n,d)$ and $\gS(n,d)$, and the embedding of $S(n,d)<\gS(n,d)$, are quite suitable for the determination of these groups. In that process, at several places information is required about the order of various group elements. Our next few results provide that information.
\ble{Lord} Every element $\alpha\in \gS(n,d)$ can be expressed as $\alpha=\sum_{v>0} \alpha_v\epsilon_v$, with $\alpha_v\in\mathbb{Q}$ satisfying $0\leq \alpha_v<1$ for each $1\leq v\leq n-1$; then the order of $\alpha$ in $\gS(n,d)$ is the smallest positive integer~$m$ such that $m\alpha_v\in \mathbb{Z}$ for each $1\leq v\leq n-1$. \end{lem} \begin{proof} According to Theorem~\ref{LTspan}, the $\epsilon_v$ are independent in $\mathbb{Q}[x]\bmod x^n-1$. Therefore, every polynomial $f(x)$ in $\mathbb{Q}[x]$ with $f(1)=0$ and $\deg f<n$ has a unique expression $f=\sum_{v>0}f_v\epsilon_v$ as a linear combination of the $\epsilon_v$. Such an expression is 0 modulo ${\cal E}_{n,d}$ if and only if all coefficients $f_v$ are integers. Now the claim is obvious. \end{proof}
In order to use this result, we must be able to express the polynomials $e_v$ in terms of the~$\epsilon_v$. This can be done as follows.
\bde{LDfe} Let $v\in \mathbb{Z}_n$. Given $d\in\mathbb{Z}$, there are unique $\mathbb{Z}\ni e>0$ and $\mathbb{Z}\ni f\geq0$ such that the $d^iv$ in~$\mathbb{Z}_n$ are distinct for $0\leq i\leq e+f-1$, while $d^{e+f}v=d^fv$. We say that $v$ has $d$-type $[f,e]$ in~$\mathbb{Z}_n$. \end{defi}
\ble{Lexp} Let $n$ and $d$ be integers with $n\geq1$ and $|d|\geq2$. If $v$ has $d$-type $[f,e]$ in $\mathbb{Z}_n$ then in~$\mathbb{Q}[x]\bmod x^n-1$, we have \[e_v =\sum_{i=0}^{f-1} \frac{1}{d^{i+1}}\epsilon_{d^iv}+\sum_{j=0}^{e-1}\frac{d^{e-1-j}}{d^f(d^e-1)}\epsilon_{d^{f+j}v}.\] \end{lem} \begin{proof} First note that by a simple ``telescoping'' summation \[e_v = d^{-1}\epsilon_{v}+\cdots+ d^{-f}\epsilon_{d^{f-1}v}+d^{-f}e_{d^fv}.\] Put $w=d^fv$. Then \[e_w = d^{-1}\epsilon_{w}+\cdots+ d^{-e}\epsilon_{d^{e-1}w}+d^{-e}e_{d^ew}, \] and since $d^ew=w$, we conclude that \[(d^e-1)e_w = d^{e-1}\epsilon_w+\cdots + \epsilon_{d^{e-1}w}.\] By combining these two results, the lemma follows. \end{proof}
In view of Lemma~\ref{Lord}, we immediately obtain the following.
\bco{Cord} Let $|d|\geq2$. If $v$ has $d$-type $[f,e]$ in $\mathbb{Z}_n$
then $e_v$ has order $|d^f(d^e-1)|$ in $\gS(n,d)$. \end{cor}
\iffalse For later use, we add the following results (note that trivially, the first one also holds for $|d|=1$).
\ble{Lordd}Let $|d|\geq2$, and suppose that $g\in\gS(n,d)$. If $dg$
has order $|d|^r$ with $r\geq1$, then $g$ has order $|d|^{r+1}$. \end{lem}
\begin{proof} If $g$ has order $e$, then $dg$ has order $e/(|d|,e)$. Now if
$e/(|d|,e)=|d|^r$ with $r\geq1$, then $|d|\,|e$, hence $(e,|d|)=|d|$
and $e=(|d|,e)|d|^r=|d|^{r+1}$. \end{proof} \ble{Lorddif} If $r\geq1$ and $d^{r-1}(e_v-e_w)\neq0$ but $d^r(e_v-e_w)=0$ in $\gS(n,d)$, then
$e_v-e_w$ has order $|d|^r$. \end{lem} \begin{proof} We use induction on~$r$. First, let $r=1$. By Lemma~\ref{Lexp}, $e_v=e_w$ in $\gS(n,d)$ if and only if $v=w$ in $\mathbb{Z}_n$. Now if $v\neq w$ and $d(e_v-e_w)=0$, then $de_v=e_{dv}$ and $de_w=e_{dw}$ are equal in $\gS(n,d)$, hence $dv=dw$. Since $e_v=d^{-1}\epsilon_v+d^{-2}e_{dv}$ and $e_w=d^{-1}\epsilon_w+d^{-2}e_{dw}$, we have that
$e_v-e_w=d^{-1}(\epsilon_v-\epsilon_w)$, which has order $|d|$ by Lemma~\ref{Lord}.
Next, let $r>1$ and suppose the lemma holds for all smaller~$r$. Let $v$ and $w$ satisfy the assumption of the lemma. Put $v'=dv$ and $w'=dw$. Then $v'$ and $w'$ satisfy the assumption of the lemma with $r$ replaced by $r-1$, so by induction, $e_{v'}-e_{w'}$ has order
$|d|^{r-1}$ with $r-1\geq 1$. Hence $e_v-e_w$ has order $|d|^r$ by Lemma~\ref{Lordd}. \end{proof}
\fi
\bre{LRjoint} \rm As we have seen, the group $S(n,d)$ is equal to the sandpile group of~$\mathrm{DB}(n,d)$ if $d\geq2$ (resp. of~$\mathrm{Ktz}(n,|d|)$ if $d\leq-2$). In what follows, we concentrate on the generalized de Bruijn graph and therefore we assume $d\geq2$. We leave it to the reader to make the necessary adaptations for the generalized Kautz graphs.
\end{rem}
\section{\label{sect:main}Main results}
Let $n$ and $d$ be fixed integers with $n\geq1$ and $|d|\geq 2$ . The description of the sandpile group
$S(n,d)$ and the sand-dune group $\gS(n,d)$
involves a sequence of numbers defined as follows. Put
$n_0=n$, and for $i=0, 1, 2,\ldots$, define $d_i= (n_i,|d|)$ and $n_{i+1}=n_i/d_i$. We have $n_0>\cdots>n_k=n_{k+1}$, where $k\geq0$ is the smallest integer for which $d_k=1$. We refer to the sequence $n_0>\cdots >n_k=n_{k+1}$ as the {\em $d$-sequence\/} of $n$. In what follows, we write $m=n_k$.
Note that $n=d_0\cdots d_{k-1} m$ with $(m,d)=1$.
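For example, for $n=12$ and $d=2$ we obtain $n_0=12$, $d_0=2$, $n_1=6$, $d_1=2$, $n_2=3$, $d_2=1$, so that $k=2$ and $m=3$; indeed $12=d_0d_1m=2\cdot 2\cdot 3$ with $(3,2)=1$.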
Since $(m,d)=1$, the map $x\mapsto dx$ is invertible and partitions $\mathbb{Z}_m$ into orbits of the form $O(v)=\{v,dv,\ldots, d^{o(v)-1}v\}$.
Here, $O(v)$ is sometimes referred to as the {\em $d$-ary cyclotomic coset of~$v$ modulo $m$\/}. We refer to $o(v)=|O(v)|$ as the
{\em order\/} of $v$.
For every prime $p|m$, we define $\pi_p(m)$ to be the largest power of $p$ dividing $m$. Let $\cV$ be a set of representatives of the orbits $O(v)$ different from $\{0\}$, where we ensure that for every prime divisor $p$ of $m$, all integers of the form $m/p^j$ are contained in~$\cV$.
(This is possible since no two of these numbers are in the same cyclotomic coset.)
\btm{LTmain} Let $n=d_0\cdots d_{k-1} m$ with $(m,d)=1$. The groups $S(n,d)$ and $\gS(n,d)$ are the sandpile and sand dune group of the generalized de Bruijn graph $\mathrm{DB}(n,d)$ if $d\geq2$ (resp.
of the generalized Kautz graph $\mathrm{Ktz}(n,|d|)$ if $d\leq-2$). With the above definitions and notation, \beql{LEdb-sd} \gS(n,d)= \biggl[ \bigoplus_{i=0}^{k-1}
\mathbb{Z}_{|d|^{i+1}}^{n_i-2n_{i+1}+n_{i+2}}\biggl] \oplus \biggl[
\bigoplus_{v\in \cV}\mathbb{Z}_{|d^{o(v)}-1|} \biggr] \end{equation} and \beql{LEdb-sp}S(n,d)= \biggl[ \bigoplus_{i=0}^{k-1}
\mathbb{Z}_{|d|^{i+1}/d_i}\oplus
\mathbb{Z}_{|d|^{i+1}}^{n_i-2n_{i+1}+n_{i+2}-1}\biggl] \oplus \left[
\bigoplus_{v\in \cV} \mathbb{Z}_{|d^{o(v)}-1|/c(v)}\right], \end{equation} where, for each prime $p\mid m$, \begin{equation*} c(v)= \begin{cases} \pi_p(m) & v=m/\pi_p(m), \text{ $p\neq 2$ or $d \equiv 1 \bmod 4$ or $4\nmid m$ }\\ \pi_2(m)/2 & v=m/\pi_2(m), \text{ $d\equiv 3 \bmod 4$, and $4\mid m$}\\ 2 & v=m/2, \text{ $4\mid m$ and $d\equiv 3\bmod 4$}\\ 1 & \text{ otherwise}. \end{cases} \end{equation*} \end{teor} Remark that since $n=d_0\cdots d_{k-1} m$
with $m=\prod_{p|m} \pi_p(m)$, the above result implies that $\gS(n,d)/S(n,d)=\mathbb{Z}_n$, in accordance with the results in Section~\ref{sect:prelims}.
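As an illustration, take $n=6$ and $d=2$, so that $k=1$, $m=3$, and the only orbit in $\mathbb{Z}_3\setminus\{0\}$ is $O(1)=\{1,2\}$ with $o(1)=2$ and $c(1)=\pi_3(3)=3$. Then (\ref{LEdb-sd}) and (\ref{LEdb-sp}) give $\gS(6,2)\cong \mathbb{Z}_2^3\oplus\mathbb{Z}_3$ and $S(6,2)\cong\mathbb{Z}_2^2$, and indeed $|\gS(6,2)|/|S(6,2)|=24/4=6=n$.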
With the notation from Section~\ref{ssect:Icm}, we have the following isomorphisms, connecting critical groups and circulant matrices. \btm{LTmain-circ}
Let $d>0$ be a prime. Then $\gS(n,d)\cong C'(n,d)$ and
$S(n,d)\cong C'(n,d)/\langle X\rangle$. For $d$ a proper prime power, this result also holds if $(n,d)=1$, but not always if $(n,d)\neq 1$.
\end{teor}
The above results are proved in a number of steps. In what follows, we outline the method for the generalized de Bruijn graphs; for the generalized Kautz graphs, a similar approach can be used.
First, we investigate the ``multiplication-by-$d$'' map $d$ given by $x\mapsto dx$ on the sandpile and sand dune groups. Let $\gS_0(n,d)$ and $S_0(n,d)$ denote the kernels of the map $x\mapsto d^kx$ on $\gS(n,d)$ and $S(n,d)$, respectively. It is not difficult to see that $\gS(n,d)\cong \gS_0(n,d)\oplus \gS(m,d)$ and $S(n,d)\cong S_0(n,d) \oplus S(m,d)$. Then, we use the map $d$ to determine $\gS_0(n,d)$ and $S_0(n,d)$. It is easy to see that for {\em any\/} $n$, we have $d\gS(n,d)\cong \gS(n/(n,d), d)$ and $dS(n,d)\cong S(n/(n,d), d)$. With some more effort, it can be shown that the kernel of the map $d$ on $\gS(n,d)$ (resp. on $S(n,d)$) is isomorphic to $\mathbb{Z}_d^{n-n/(n,d)}$ (resp. to $\mathbb{Z}_{d/(n,d)}\oplus \mathbb{Z}_d^{n-1-n/(n,d)}$). Then we use induction on the length $k+1$ of the $d$-sequence of~$n$ to show that $\gS_0(n,d)$ and $S_0(n,d)$ have the form of the left-hand parts of (\ref{LEdb-sd}) and (\ref{LEdb-sp}), respectively. This part of the proof, although much more complicated, resembles the methods used in \cite{LeLI-sand} and \cite{BiKi-knuth}.
Then it remains to handle the parts $\gS(m,d)$ and $S(m,d)$, where $(m,d)=1$. For the ``helper'' group $\gS(m,d)$, in which $S(m,d)$ is embedded, this is trivial: it is easily seen that $\gS(m,d)= \oplus_{v\in \cV} \langle e_v\rangle$, and the order of $e_v$ equals $d^{o(v)}-1$, where $o(v)$ is the size of its orbit $O(v)$ under the map $d$, so (\ref{LEdb-sd}) follows immediately. The $e_v$ are not contained in~$S(m,d)$, but we can try to modify them slightly to obtain a similar decomposition for $S(m,d)$. The idea is to replace $e_v$ by a modified version
$\te_v=e_v-\sum_{p|m} \lambda_p(v) e_{m \pi_p(v)/\pi_p(m)}$,
where the numbers $\lambda_p(v)$ are chosen such that $\te_v\in S(m,d)$, or by a suitable multiple of $e_v$, in some exceptional cases (these are the cases where $c(v)>1$). It turns out that this is indeed possible, and in this way the proof of Theorem~\ref{LTmain} can be completed.
The proof of Theorem~\ref{LTmain-circ} is by reducing to the case $(n,p)=1$ by an explicit construction, and then by diagonalizing $C(n,p)$ over an appropriate extension of $\mathbb{F}_p$. Essentially, as soon as $(n,p)=1$, one can read off a decomposition of $C(n,p)$ into cyclic factors from the irreducible factors of the polynomial $x^n-1$ over $\mathbb{F}_p$.
In the next sections, we provide the details of the proofs as outlined above.
\section{The multiplication-by-$d$ map}
In the remainder of this section, we use the map $x\mapsto dx$ on~$\gS(n,d)$ to determine the structure of $\gS_0(n,d)$ and $S_0(n,d)$, i.e. the kernels of the map $x\mapsto d^kx$. We require the following simple result.
\btm{LTd} For any pair $n,d$ of positive integers,
$d\gS(n,d) \cong \gS(n/(n,d),d)$ and $dS(n,d)\cong S(n/(n,d),d)$. \end{teor} \begin{proof}
Write $n_1=n/(n,d)$.
Define $\varphi:d\gS(n,d)\to \gS(n_1,d)$ by $\varphi(de_v)=e_{v \bmod n_1}$ for $1\leq v\leq n-1$ and extend $\varphi$ by linearity. We claim that $\varphi$ defines an isomorphism between $d\gS(n,d)$ and $\gS(n_1,d)$. To see this, proceed as follows.
Since $de_v=e_{dv}$ in $\gS(n,d)$, we have that
$d \sum_{v>0} \alpha_v e_v= \sum_{v>0} \alpha_v e_{dv}$, and from Lemma~\ref{Lexp} we conclude that $\sum_{v>0} \alpha_v e_{dv}$ can be expressed as a linear combination of elements $\epsilon_{dv}$ in~${\cal E}_{n,d}$ with rational coefficients; the expression is 0 in~$\gS(n,d)$ if and only if all coefficients can be chosen to be integer. So, noting that $\varphi$ maps $\epsilon_{dv}$ in ${\cal E}_{n,d}$ to $\epsilon_{v \bmod n_1}$ in~${\cal E}_{n_1,d}$, we conclude that $\varphi$ is well-defined and in fact one-to-one on~$d\gS(n,d)$. Since $\varphi$ is obviously onto $\gS(n_1,d)$, the desired conclusion follows.
To see that $\varphi$ also induces an isomorphism between $dS(n,d)$ and $S(n_1,d)$, in view of Theorem~\ref{Lsub} it is sufficient to
remark that for an element $d\sum_{v>0} a_ve_v=\sum_{v>0} a_ve_{dv}\in d\gS(n,d)$, we have $\sum_{v>0} dv a_v\equiv 0 \bmod n$ if and only if $\sum_{v>0} va_v\equiv 0\bmod n_1$. \end{proof}
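For example, for $n=6$ and $d=2$ we have $n_1=3$, and multiplication by $2$ maps $\gS(6,2)\cong\mathbb{Z}_2^3\oplus\mathbb{Z}_3$ onto $2\gS(6,2)\cong\gS(3,2)\cong\mathbb{Z}_3$, while $2S(6,2)\cong S(3,2)$ is trivial.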
The next step is to determine the {\em kernel\/} of the multiplication-by-$d$ map $d:\mathbb{Q}[x]/(x^n-1)\to \mathbb{Q}[x]/(x^n-1)$, defined by $x\mapsto dx$, on both $\gS(n,d)$ and $S(n,d)$. The result is as follows. \btm{LTker} (i) The kernel ${\rm Ker}_\gS(d)$ of the map $d$ on $\gS(n,d)$ is isomorphic to $\mathbb{Z}_d^{n-n_1}$.\\ (ii) The kernel ${\rm Ker}_S(d)$ of the map $d$ on $S(n,d)$ is isomorphic to $\mathbb{Z}_{d/d_0}\oplus \mathbb{Z}_d^{n-n_1-1}$. \end{teor} \begin{proof} The order of $\gS(n,d)$ is equal to the product of its invariant factors, which are the positive invariant factors of the $(n-1)\times (n-1)$ matrix $\gS=\gS^{(n,d)}$ that has as rows the vectors
$\epsilon_v=de_v-e_{dv}$ with respect to the basis $\{e_v \mid 1\leq v\leq n-1\}$.
Since $\Sigma$ is nonsingular, this product is equal to $\det\Sigma$. Now partition the
set $\mathbb{Z}_n\setminus \{0\}$ of row and column indices of $\Sigma$ into the parts $\mathbb{Z}_n\setminus d\mathbb{Z}_n$ and $(d\mathbb{Z}_n)\setminus \{0\}$. Under this ordering of the rows and columns, $\Sigma$ takes the form \[
\gS=\gS^{(n,d)}= \left(\begin{array}{c|c} D & A\\ \hline O & \gS' \end{array} \right), \] where $D={\rm diag}(d, \ldots, d)$ is an $(n-n_1)\times (n-n_1)$ diagonal matrix and $\gS'=\gS^{(n_1,d)}$ is the matrix corresponding to $\gS(n_1,d)$. (Note that $d\mathbb{Z}_n\cong \mathbb{Z}_{n/(d,n)}=\mathbb{Z}_{n_1}$.) We conclude that
\beql{LEord} |\gS(n,d)| = d^{n-n_1}|\gS(n_1,d)|. \end{equation} In view of Theorem~\ref{LTd}, we have that
$|{\rm Ker}_\gS(d)|=|\gS(n,d)|/|\im_\gS(d)|=d^{n-n_1}$, and as a consequence of Corollary~\ref{LCord}, we also have
\[|{\rm Ker}_S(d)|
= |S(n,d)|/|S(n_1,d)|=(|\gS(n,d)|/n)/(|\gS(n,d)|/n_1)=d^{n-n_1}/d_0.\] To actually construct a basis for these kernels, let us define \[\Delta_{ab}:=e_{a+bn_1}-e_a = d^{-1}(\epsilon_{a+bn_1}-\epsilon_a),\] where the second equality follows directly from \eqref{def:e_v}. By Lemma~\ref{Lord}, each $\Delta_{ab}$ has order~$d$. Hence they are contained in ${\rm Ker}_\gS(d)$. First, we claim that the set \[ {\cal B}=\{\Delta_{ab} \mid 0\leq a\leq n_1-1, 1\leq b\leq d_0-1\}\] is independent in~$\gS(n,d)$ and a basis for ${\rm Ker}_\gS(d)$. Indeed, consider a $\mathbb{Z}$-linear combination of the elements of~${\cal B}$. Since \[ \sum_{a=0}^{n_1-1}\sum_{b=1}^{d_0-1}\lambda_{ab}\Delta_{ab} = \sum_{a=0}^{n_1-1}\sum_{b=1}^{d_0-1} \lambda_{ab}d^{-1}(\epsilon_{a+bn_1} -\epsilon_a) = d^{-1}\sum_{a=0}^{n_1-1}\left( \sum_{b=1}^{d_0-1}\lambda_{a,b}\epsilon_{a+bn_1}-\biggl(\sum_{b=1}^{d_0-1} \lambda_{ab}\biggr)\epsilon_a \right), \] and since $a+bn_1=a'+b'n_1$ with $a,a'\in\{0,1,\ldots, n_1-1\}$ and $b,b'\in\{0, \ldots, d_0-1\}$ is only possible when $(a,b)=(a',b')$, each $\epsilon_{a+bn_1}$ occurs only once in the expression. Hence the linear combination can be zero only if every term $d^{-1}\lambda_{ab}\epsilon_{a+bn_1}$ is zero, i.e. $d\mid \lambda_{ab}$, i.e. $\lambda_{ab}\Delta_{ab}=0$.
Since every $\Delta_{ab}$ has order $d$, we conclude that \[{\rm Ker}_\gS(d)=\bigoplus_{\stackrel{a\in\{0,\ldots, n_1-1\}}{b\in\{1, \ldots, d_0-1\}}} \langle \Delta_{ab}\rangle \cong \mathbb{Z}_d^{n-n_1},\] where the equality (instead of a containment) follows from the equality of the respective sizes.
Similarly, the set \[ B=\{\Delta_{ab}-b\Delta_{01} \mid 0\leq a\leq n_1-1, 1\leq b\leq d_0-1, (a,b)\neq (0,1)\} \cup \{d_0\Delta_{01}\}\] spans ${\rm Ker}_S(d)$: according to Theorem~\ref{Lsub}, every element of~$B$ is contained in~$S(n,d)$, and their independence easily follows from the independence of ${\cal B}$; a counting argument similar to the one above shows that they span the entire kernel. \end{proof}
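For example, for $n=6$ and $d=2$ (so $n_1=3$ and $d_0=2$) we obtain ${\rm Ker}_\gS(2)\cong\mathbb{Z}_2^3$ and ${\rm Ker}_S(2)\cong\mathbb{Z}_2^2$, the factor $\mathbb{Z}_{d/d_0}$ being trivial in this case.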
In what follows we need a few simple properties of finite abelian groups. The first of these is a straightforward consequence of the uniqueness of decomposition of finite abelian groups into cyclic groups of prime power order. \bpn{LTdiv} Let $G, H, K$ be finite abelian groups. If $G\oplus K\cong H\oplus K$, then $G\cong H$. \qed \end{prop} The next result is needed when we deal with invariant factors of a group. Recall that as a consequence of the uniqueness of the Smith Normal Form, every finite abelian group $G$ also has a unique decomposition $G\cong \mathbb{Z}_{s_1}\oplus \cdots \oplus\mathbb{Z}_{s_r}$ with
$1<s_1|\cdots |s_r$. We refer to $s_1, \ldots, s_r$ as the {\em invariant factors\/} of~$G$. \btm{LTinv} (i) We have that $\mathbb{Z}_m\oplus \mathbb{Z}_n\cong \mathbb{Z}_{(m,n)}\oplus \mathbb{Z}_{[m,n]}$, where $(m,n)$ and $[m,n]$ denote the greatest common divisor (gcd) and the least common multiple (lcm) of $m$ and $n$.\\ (ii)
If $G$ has invariant factors $s_1, \ldots, s_r$, then $G\oplus \mathbb{Z}_m$ has invariant factors $s_1', \ldots, s_{r+1}'$ with $s_1'=(s_1, m)$, $s_i'=(s_i, [s_{i-1},m])$ for $2 \leq i\leq r$, and $s_{r+1}'=[s_r,m]$, or invariant factors $s_2', \ldots, s_{r+1}'$ if $s_1'=(s_1,m)=1$. \end{teor} \begin{proof} Part (i) is trivial, it follows immediately from the decomposition of $\mathbb{Z}_m$ and $\mathbb{Z}_n$ into their prime power summands. Part (ii) follows from part (i) and induction on~$r$: we have that \begin{equation}
\mathbb{Z}_{ s_1}\oplus \cdots \oplus \mathbb{Z}_{s_r} \oplus \mathbb{Z}_m \cong \mathbb{Z}_{ s_1'}\oplus \cdots \oplus \mathbb{Z}_{s_{r-1}'}\oplus\mathbb{Z}_{[s_{r-1},m]} \oplus \mathbb{Z}_{s_r} \cong \mathbb{Z}_{ s_1'}\oplus \cdots \oplus \mathbb{Z}_{s_{r-1}'}\oplus\mathbb{Z}_{s_{r}'} \oplus \mathbb{Z}_{s_{r+1}'}, \end{equation}
where we have used part (i) and the fact that $s_{r-1}|s_r$. Since
$s_1'|\cdots |s_{r+1}'$ and $1<s_1|s_2'$, the invariant factors of $G\oplus \mathbb{Z}_m$ are $s_1', \ldots, s_{r+1}'$, or $s_2', \ldots, s_{r+1}'$ in case that $s_1'=(s_1,m)=1$. \end{proof}
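For example, $\mathbb{Z}_4\oplus\mathbb{Z}_6\cong\mathbb{Z}_{(4,6)}\oplus\mathbb{Z}_{[4,6]}=\mathbb{Z}_2\oplus\mathbb{Z}_{12}$.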
We also require the following simple lemma. \ble{LLpre} Let $G$ be an abelian group, and let $d, m$ be two positive integers. Let $d'$ denote the maximal divisor of $d$ for which $(d',m)=1$. If $\mathbb{Z}_m$ is a direct summand of $dG$, then $\mathbb{Z}_{md/d'}$ is a direct summand of~$G$.
In particular, if $d|m$, then $\mathbb{Z}_{md}$ is a direct summand of~$G$. \end{lem} \begin{proof}
Consider a decomposition of $G$ into cyclic groups of prime-power order. If $\mathbb{Z}_{p^t}$ is a direct summand of~$G$ and if $p^s||d$, then
$d\mathbb{Z}_{p^t}=\mathbb{Z}_{p^t/(d,p^t)}=\mathbb{Z}_{p^{\max(0,t-s)}}$ is the corresponding direct summand in~$dG$. Therefore, if $p^r||m$, then the direct summand $\mathbb{Z}_{p^r}$ of~$\mathbb{Z}_m\leq dG$ can only arise from a direct summand $\mathbb{Z}_{p^{r+s}}$ of $G$, i.e. the required direct summand of $\mathbb{Z}_{md/d'}$. \end{proof}
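For example, if $\mathbb{Z}_3$ is a direct summand of $3G$, then $\mathbb{Z}_9$ is a direct summand of $G$; on the other hand, if $\mathbb{Z}_3$ is a direct summand of $2G$, then $d'=2$ and we can only conclude that $\mathbb{Z}_3$ is a direct summand of~$G$.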
Let $\delta=d_0\cdots d_{k-1}$ and write $m=n_k$. Then $n=\delta m$ with $(\delta,m)=1$. The Chinese Remainder Theorem (CRT) decomposition $\mathbb{Z}_n=\mathbb{Z}_\delta \oplus \mathbb{Z}_{m}$ induces corresponding decompositions for the sand dune and sandpile groups. \ble{LTcrt}We have that \[\gS(n,d)\cong \gS_0(n,d)\oplus \gS(m,d)\] and \[S(n,d)\cong S_0(n,d) \oplus S(m,d),\] where $\gS_0(n,d)={\rm Ker}_{\gS}(d^k)$ and $S_0(n,d) ={\rm Ker}_{S}(d^k)$ are the kernels of the map $x\mapsto d^kx$ on $\gS(n,d)$ and $S(n,d)$, respectively. \end{lem} \begin{proof} Since $n=\delta m$ with $(\delta,m)=1$, by the CRT there are integers $\chi$ and $\mu$ such that $\chi \delta$ and $\mu m$ are mutually orthogonal idempotents, that is, $\chi \delta \equiv 1 \bmod m$ and $\mu m\equiv 1 \bmod \delta$. As a consequence, for each $v\in \mathbb{Z}_n$ we have $v=(v\mu m) + (v\chi \delta)$, and it is easily seen that the map $v\mapsto (v\mu m, v\chi \delta)$ induces a decomposition $\mathbb{Z}_n \cong \mathbb{Z}_\delta \oplus \mathbb{Z}_{m}$. Then we can write \[e_v = (e_{v\chi \delta+v\mu m} - e_{v\chi \delta}) + e_{v\chi \delta},\] and it is easily verified that the map \[ e_v \mapsto (e_{v\chi \delta +v\mu m} - e_{v\chi \delta}, e_{v\chi \delta})\] induces a decomposition \[\gS(n,d)\cong \gS_0(n,d)\oplus \gS(m,d),\] where $\gS_0(n,d)$ denotes the subgroup generated by the elements $e_{v\chi \delta+v\mu m} - e_{v\chi \delta}$ for $v\in \mathbb{Z}_n$. Now $(d,m)=1$, so the map $x\mapsto dx$ acts as a permutation on~$\mathbb{Z}_m$; since $de_v=e_{dv}$ on $\gS(n,d)$, we conclude that $d\gS(m,d)\cong\gS(m,d)$. Next, since \[\delta e_{v\chi \delta + v\mu m} = e_{v\chi \delta^2 + v\mu m\delta}= e_{v\chi \delta^2}=\delta e_{v\chi \delta},\] we have that $\delta (e_{v\chi \delta + v\mu m} - e_{v\chi \delta})=0$. Now
$d_i=(d,n_i)|d$ for $i=0, \ldots, k-1$, so that $\delta|d^k$. Combining these observations, we conclude that $\gS_0(n,d)={\rm Ker}_{\gS}(d^k)$.
By Theorem~\ref{Lsub}, the element $a=\sum_{v=0}^{n-1}a_ve_v\in S(n,d)$ iff $\sum_{v=0}^{n-1}va_v\equiv0 \bmod n$, which by the CRT is the case if and only if this sum is 0 both modulo $\delta$ and modulo $m$. Therefore, $a\in S(n,d)$ iff both projections $\sum_v a_v (e_{v\chi \delta +v\mu m} - e_{v\chi \delta})$ and $\sum_v a_v e_{v\chi \delta}$ are in $S(n,d)$. It follows that the above decomposition for $\gS(n,d)$ induces a similar decomposition for $S(n,d)$. \end{proof}
We now use the multiplication-by-$d$ map to inductively determine the parts $\gS_0(n,d)$ and $S_0(n,d)$. \btm{LTmain1} Let $n$ have $d$-sequence $n=n_0, n_1, \ldots, n_k=n_{k+1}$, with $d_i=n_i/n_{i+1}=(n_i,d)$ for $i=0, \ldots, k$. Then \[\gS_0(n,d) \cong\bigoplus_{i=0}^{k-1} \mathbb{Z}_{d^{i+1}}^{n_i-2n_{i+1}+n_{i+2}}\] and \[S_0(n,d) \cong \bigoplus_{i=0}^{k-1}\biggl(\mathbb{Z}_{d^{i+1}/d_i}\oplus \mathbb{Z}_{d^{i+1}}^{n_i-2n_{i+1}+n_{i+2}-1}\biggr).\] \end{teor} \begin{proof} We use induction on~$k$. If $k=0$, then there is nothing to prove. Now, suppose that $k\geq1$. By Theorem~\ref{LTd}, we have $d\gS_0(n,d)\cong \gS_0(n_1,d)$. Since $n_1$ has $d$-sequence $(n_1, n_2, \ldots, n_k)$, by induction and by Lemma~\ref{LLpre}, we conclude that $\gS_0(n,d)$ is of the form \[\gS_0(n,d) \cong \Lambda \oplus \bigoplus_{i=1}^{k-1} \mathbb{Z}_{d^{i+1}}^{n_i-2n_{i+1}+n_{i+2}}\] with $\Lambda\subseteq {\rm Ker}_\gS(d)$.
Hence ${\rm Ker}_\gS(d)={\rm Ker}_{\gS_0}(d)=\Lambda\oplus \mathbb{Z}_d^{n_1-n_2}$ (to see this, note that $n_i-2n_{i+1}+n_{i+2}=(n_i-n_{i+1})-(n_{i+1}-n_{i+2})$ for all $i$ and $n_k-n_{k+1}=0$). So by Theorem~\ref{LTker} and Proposition~\ref{LTdiv}, we have that $\Lambda\cong \mathbb{Z}_d^{(n-n_1)-(n_1-n_2)}$, as was to be proved.
Similarly, again using Lemma~\ref{LLpre}, we can conclude that \[S_0(n,d) \cong L \oplus \mathbb{Z}_{d^2}^{n_1-2n_2+n_3-1} \oplus \bigoplus_{i=2}^{k-1}\biggl(\mathbb{Z}_{d^{i+1}/d_i}\oplus \mathbb{Z}_{d^{i+1}}^{n_i-2n_{i+1}+n_{i+2}-1}\biggr),\] where \beql{LEdL} dL=\mathbb{Z}_{d/d_1}.\end{equation}
From this expression, we read off that ${\rm Ker}_S(d)={\rm Ker}_{S_0}(d)={\rm Ker}_L(d)\oplus \mathbb{Z}_d^{n_1-n_2-1}$, hence by Theorem~\ref{LTker} and Proposition~\ref{LTdiv}, we conclude that \beql{LEkL} {\rm Ker}_L(d) = \mathbb{Z}_{d/d_0}\oplus \mathbb{Z}_d^{n-2n_1+n_2}.\end{equation} Suppose that $L$ has invariant-factor decomposition
\[L=\mathbb{Z}_{s_0}\oplus \cdots \oplus \mathbb{Z}_{s_r}\] with $s_0|\cdots | s_r$. Then from (\ref{LEdL}) and (\ref{LEkL}), we conclude that \beql{LEdLp} \mathbb{Z}_{s_0/(s_0,d)} \oplus \cdots \oplus \mathbb{Z}_{s_r/(s_r,d)} =\mathbb{Z}_{d/d_1}\end{equation} and \beql{LEkLp}\mathbb{Z}_{(s_0,d)} \oplus \cdots \oplus \mathbb{Z}_{(s_r,d)} =\mathbb{Z}_{d/d_0}\oplus
\mathbb{Z}_d^{n-2n_1+n_2}.\end{equation} Since $(s_i,d)|(s_{i+1},d)$ for $i=0,
\ldots, r-1$ and $(d/d_0)| d$, both direct sums in (\ref{LEkLp}) are invariant-factor decompositions. Hence $(s_0,d)=d/d_0$ and $(s_i,d)=d$ for $i=1, \ldots, r$. So we can write $s_0=\tau d/d_0$ for some $\tau$ with $(\tau,d_0)=1$ and $s_i=\sigma_i d$ for $i=1, \ldots, r$, where
$\sigma_1|\cdots |\sigma_r$ and $\tau|\sigma_1d_0$, so that $\tau|\sigma_1$. Moreover, from (\ref{LEdLp}), we conclude that \[\mathbb{Z}_{\tau}\oplus \mathbb{Z}_{\sigma_1}\oplus \cdots \oplus \mathbb{Z}_{\sigma_r} = \mathbb{Z}_{d/d_1}.\] Now, using Theorem~\ref{LTinv}, we conclude that the left-hand side above has invariant factors $(\sigma_1,\tau)$, $(\sigma_{i+1}, [\sigma_i,\tau])$ for $i=1, \ldots, r-1$, and $[\sigma_r,\tau]$, while the right-hand side has invariant factors $d/d_1$. Since the invariant factors are unique, we conclude that $(\sigma_1,\tau)=1$ and $(\sigma_{i+1}, [\sigma_i,\tau])=1$ for $i=1, \ldots, r-1$, while $[\sigma_{r},\tau]=d/d_1$. Since
$\sigma_i=(\sigma_{i+1}, \sigma_i)| (\sigma_{i+1}, [\sigma_i,\tau])$, we conclude that $\sigma_1=\ldots = \sigma_{r-1}=1$ and $(\sigma_r,\tau)=1$, while
$[\sigma_r,\tau] =\sigma_r\tau=d/d_1$. Since $\tau|\sigma_1|\sigma_r$, we conclude that $\tau=1$ and $\sigma_r=d/d_1$, hence $L=\mathbb{Z}_{d/d_0}\oplus \mathbb{Z}_d^{n-2n_1+n_2-1}\oplus \mathbb{Z}_{d^2/d_1}$, which is what we wanted to prove. \end{proof}
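For example, for $n=12$ and $d=2$ (with $d$-sequence $12>6>3=3$) the theorem gives $\gS_0(12,2)\cong\mathbb{Z}_2^3\oplus\mathbb{Z}_4^3$ and $S_0(12,2)\cong\mathbb{Z}_2^3\oplus\mathbb{Z}_4^2$.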
\subsection{Adaptations for the case of generalized Kautz graphs}
With minor adaptations, all the results in this section are also valid when $d<0$, so for the sand dune group $\gS(n,d)$ and sandpile group
$S(n,d)$ of the generalized Kautz graph ${\rm Ktz}(n,|d|)$. Like before, write $n_1=n/(n,|d|)$ and $d_0=(n,|d|)$. Using multiplication by~$d$, we conclude in a similar way that $d\gS(n,d)\cong \gS(n_1,d)$ and $dS(n,d)\cong S(n_1,d)$, and also that ${\rm Ker}_{\gS}(d)\cong
\mathbb{Z}_d^{n-n_1}$ and ${\rm Ker}_{S}(d)\cong \mathbb{Z}_{|d|/d_0}\oplus
\mathbb{Z}_{|d|}^{n-n_1-1}$. And finally, we can use these facts to determine $\gS_0(n,d)$ and $S_0(n,d)$ in a similar way.
\section{\label{sect:rp}The sand dune and sandpile group in the relatively prime case}
In this section, we determine the sand dune group $\gS(m,d)$ and the sandpile group $S(m,d)$ for fixed positive integers $m$ and $d$ with $(m,d)=1$. In that case, the map $x\mapsto dx$ partitions $\mathbb{Z}_m$ into orbits of the form \[O(v) = \{v,dv, \ldots, d^{e-1}v\},\] where $d^ev\equiv v\bmod m$ and $d^iv\not\equiv v \bmod m$ for $1\leq i<e$.
Recall that we refer to $o(v)=|O(v)|$ as the {\em order\/} of~$v$, and that $\cV$ denotes a complete set of representatives of the orbits different from~$\{0\}$, that is, $\cV$ contains precisely one element from each
orbit different from~$\{0\}$. Recall that for $p$ a prime, $\pi_p(v)$ denotes the largest power of~$p$ dividing~$v$. For the remainder of this section, we let
\[ \mathbb{P}=\{ p \mid \mbox{$p$ is a prime and $p|m$}\},\] and write \[M_p=m/\pi_p(m), \qquad p\in \mathbb{P}.\] For later use, we define
\[ \cV^* = \{1\leq v\leq m \mid \mbox{$v\equiv p^i M_p \bmod m$ for some $p\in \mathbb{P}$ and some integer $i\geq 0$}\}.\] Since $(d,m)=1$, it is easily seen that for $p,q\in\mathbb{P}$, if $p^iM_p \equiv d^k q^jM_q \bmod m$ then $p=q$ and $i=j$. Thus the elements in $\cV^*$ are in different orbits on $\mathbb{Z}_m$.
(Another way to see this is to note that every divisor $k|m$ is minimal in its orbit, which is contained in $k\mathbb{Z}_m$.)
\noindent {\em For the remainder of this section, we assume that the set $\cV$ of orbit representatives contains all the members of~$\cV^*$.\/}
The determination of the sand dune group is easy. \btm{TSd} With the above notation, we have that
$\gS(m,d)=\bigoplus_{v\in\cV}\mathbb{Z}_{d^{o(v)}-1}$. \end{teor} \begin{proof} By Lemma~\ref{Lexp}, the expression for $e_v$ in terms of the~$\epsilon_w$ involves only $\epsilon_w$ with $w\in O(v)$. Hence the $e_v$ with $v\in \cV$ are independent. Moreover, since $d^ie_v=e_{d^iv}$, the subgroup $\langle e_v\rangle$ generated by $e_v$ contains every $e_w$ with $w\in O(v)$, so $\gS(m,d) =\oplus_{v\in \cV}\langle e_v\rangle$. According to Corollary~\ref{Cord},
the order of $e_v$ is equal to $d^{o(v)}-1$, so $\langle e_v\rangle\cong \mathbb{Z}_{d^{o(v)}-1}$, from which the theorem follows.
\end{proof}
\bre{LRSd} \rm An alternative way to see the above result is to remark that the matrix $\gS=\gS^{(m,d)}$ is equivalent to a block-diagonal matrix with $|\cV|$ blocks $\gS_v$ ($v\in \cV$), where the block $\gS_v$ is the restriction of $\gS$ to the rows and columns indexed by orbit $O(v)$; moreover, if within an orbit $O(v)$ we index in the order $v,dv,\ldots, d^{o(v)-1}v$, then $\gS_v$ is $o(v)\times o(v)$ and of the form \[ \gS_v=
\left( \begin{array}{ccccc} d&-1&0&\cdots&0\\ 0&d&-1&\cdots&0\\ &\ddots&\ddots &\ddots& \\ 0&&0&d&-1\\ -1&0& &0 & d \end{array} \right). \] Now it is easy to see that $\gS_v$ has invariant factors $(1, \ldots, 1, d^{o(v)}-1)$, for example, by successively adding $d$ times the first row to the second row, then $d$ times the current second row to the third row, \ldots, and finally $d$ times the then current $(o(v)-1)$th row to the last row, and then subtracting from the first column a suitable linear combination of the other columns. \end{rem}
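For example, for $m=5$ and $d=2$ the nonzero elements of $\mathbb{Z}_5$ form the single orbit $\{1,2,4,3\}$, so $\gS^{(5,2)}$ consists of one $4\times4$ block with invariant factors $(1,1,1,15)$, and $\gS(5,2)\cong\mathbb{Z}_{15}$.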
As explained in Section~\ref{sect:main}, it is a bit more complicated to determine the structure of~$S(m,d)$. First, we define the modified generators $\te_v$ for $\gS(m,d)$. To this end, we need some preparation. Let $p\in \mathbb{P}$. Since $\pi_p(m)$ and $M_p=m/\pi_p(m)$ are relatively prime, there is a number $\eta_p$ such that \[\eta_p M_p \equiv 1 \bmod \pi_p(m).\] Define \[\lambda_p(v)=\eta_p v/\pi_p(v),\]
and for $v\in \cV\setminus \cV^*$, put \[\te_v=e_v-\sum_{p|m} \lambda_p(v) e_{\pi_p(v)M_p}.\]
\btm{LTte1} For $v\in \cV\setminus \cV^*$, let $\te_v $ be defined as above. Then $\te_v$ is contained in~$S(m,d)$, and $\te_v$ and $e_v$ have the same order in~$S(m,d)$. Moreover, the $\te_v$ for $v\in \cV\setminus \cV^*$ together with the $e_v$ for $v\in \cV^*$ are independent and generate $\gS(m,d)$. \end{teor} \begin{proof} To show that $\te_v\in S(m,d)$, we use Theorem~\ref{Lsub}. For a fixed prime $q\in\mathbb{P}$, the numbers $M_p=m/\pi_p(m)$ with $p\in \mathbb{P}\setminus
\{q\}$ are divisible by $\pi_q(m)$, so that modulo~$\pi_q(m)$, we have \[v-\sum_{p|m} \lambda_p(v) \pi_p(v)M_p\equiv v-\lambda_q(v)\pi_q(v)M_q \equiv v-\eta_q (v/\pi_q(v))\pi_q(v)M_q\equiv 0 \bmod \pi_q(m).\]
Since this holds for every $q\in \mathbb{P}$, by the Chinese Remainder Theorem we have that $v-\sum_{p|m} \lambda_p(v) \pi_p(v)M_p \equiv 0 \bmod m$, hence by Theorem~\ref{Lsub}, $\te_v$ is in $S(m,d)$.
Next, recall that by Corollary~\ref{Cord}, $e_v$ has order $d^{o(v)}-1$. To show that $\te_v$ has order $d^{o(v)}-1$, it is sufficient
to show that $(d^{o(v)}-1)e_{\pi_p(v)m/\pi_p(m)}=0$ holds for every $p\in \mathbb{P}$. To see this, we proceed as follows. By the definition of $o(v)$, we have that $(d^{o(v)}-1)v\equiv 0\bmod m$, hence $(d^{o(v)}-1)\pi_p(v) \equiv 0 \bmod \pi_p(m)$, and therefore $(d^{o(v)}-1)\pi_p(v) m/\pi_p(m) \equiv 0 \bmod m$. So the order of $e_{\pi_p(v)M_p}$ divides $d^{o(v)}-1$,
and now the desired conclusion follows from the equality $d^{o(v)}e_{\pi_p(v)M_p}= e_{d^{o(v)}\pi_p(v)M_p}$.
Finally, by the definition of $\cV^*$, it is obvious that the $\te_v$ for $v\in \cV\setminus \cV^*$ together with the $e_v$ for $v\in \cV^*$ have the same span as the $e_v$ for $v\in \cV$, from which the claim follows immediately. \end{proof}
So now we are left with the choice of suitable elements $\te_v$ for $v\in \cV^*$.
First, we need a simple number-theoretic result. For a prime $p$, let $\nu_p(n)$ denote the largest integer~$e\geq 0$ for which
$p^e|n$. We say that an integer $d$ has {\em order $e$ modulo $p$\/}
if $e$ is the smallest positive integer for which $p|d^e-1$. \ble{Lpcont} For a prime $p$, with $(p,d)=1$, let $d$ have order $e$ modulo $p$, and suppose that $\nu_p(d^e-1)=a$.
Then $d$ has order $ep^i$ modulo $p^{a+i}$ for all $i\geq0$, except when $p=2$ and $d\equiv 3 \bmod 4$. In that exceptional case, $e=1$ and $a=1$, and if $\nu_2(d^{2}-1)=b$, then $b\geq3$ and $d$ has order $2^{i+1}$ modulo $2^{b+i}$ for all $i\geq0$ (and order $1$ modulo $2$, and order $2$ modulo
$2^i$ for $2\leq i<b$). \end{lem} \begin{proof} For an integer $t\geq1$, if $d$ has order $f$ mod $p^t$, then $p^t|d^n-1$ if and only if $f|n$. So the order of $d$ modulo $p^{t+1}$ is of the form $kf$. Moreover, if $\nu_p(d^f-1)=s\geq t$, then $d^f=1+qp^s$ for some integer $q$ not divisible by~$p$, so \[ d^{rf}=(1+qp^s)^r\equiv 1+rqp^s \bmod p^{2s};\] hence for every integer $k\geq1$, \[d^{kf}-1=(d^f-1)(1+d^f+\cdots +d^{(k-1)f}) \equiv qp^s \left(k+qp^s\binom{k}{2}\right) \bmod p^{3s}.\]
So the smallest $k>1$ for which $p^{s+1}|d^{kf}-1$ is $k=p$, in which case \[d^{pf}-1\equiv qp^{s+1}(1+qp^{s-1}\binom{p}{2}) \bmod p^{3s}.\] Moreover, we see that $\nu_p(d^{pf}-1)=s+1$, except when
$s=1$ and $p=2$. In that case, $q$ is odd and $8|d^{2f}-1$, and $d\equiv 3 \bmod 4$ since $s=1$. So we conclude that there is a ``jump'' in the order of $d$ modulo powers $p^s$ of $p$ if and only if $s=1$, $p=2$, and $d\equiv 3 \bmod 4$, as claimed in the lemma. \end{proof}
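For example, $d=2$ has order $2$ modulo $3$ and $\nu_3(2^2-1)=1$, so $2$ has order $2\cdot 3^i$ modulo $3^{1+i}$ for all $i\geq0$ (for instance, order $6$ modulo $9$). In the exceptional case, e.g.\ $d=3$ and $p=2$, we have $b=\nu_2(3^2-1)=3$, and $3$ has order $2^{i+1}$ modulo $2^{3+i}$ (order $2$ modulo $8$, order $4$ modulo $16$, and so on).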
\bco{LLordcon} If $p\neq 2$ or $d\equiv 1\bmod 4$, then $\nu_p(d^{o(M_p)}-1)-\nu_p(d^{o(p^tM_p)}-1) \leq t$ for $1\leq t \leq \nu_p(m)-1$. Otherwise, i.e. for $p=2$ and $d\equiv 3\bmod 4$, we have that $\nu_p(d^{o(M_p)}-1)-\nu_p(d^{o(p^tM_p)}-1) \leq t$ for $1\leq t \leq \nu_p(m)-2$. \end{cor} \begin{proof} Let $p\in \mathbb{P}$, and write $s=\nu_p(m)$, so that $M_p=m/p^s$. First we claim that the order of $d$ modulo $p^{s-t}$ is equal to $o(p^tM_p)$. To see this, note that by definition, $o(p^tM_p)$ is the smallest integer $e\geq 1$ for which $(d^e-1)p^tM_p \equiv 0\bmod m$, or, equivalently, for which $(d^e-1)p^t \equiv 0\bmod p^s$, from which the claim follows.
Now suppose that $o(p^{s-1}M_p)=e$ and $\nu_p(d^e-1)=a$. Then we see from Lemma~\ref{Lpcont} that in the ``non-exceptional'' case, where $p\neq 2$ or $d\equiv 1\bmod 4$, we have that $\nu_p(d^{ep^i}-1) = a+i$ for all $i\geq0$, so as a consequence of our claim, for $s-a\leq t\leq s-1$, the order $o(p^t M_p)$ is still equal to $e$, with $\nu_p(d^{ o(p^t M_p)}-1) = \nu_p(d^e-1)=a$, and for $t=s-a-i$ with $i\geq1$, the order $o(p^t M_p)$ is equal to $ep^i$, with $\nu_p(d^{ o(p^t M_p)}-1) = \nu_p(d^{ep^i}-1)=a+i$. This proves the result in the ``non-exceptional'' case.
In the ``exceptional'' case where $p=2$ and $d\equiv 3\bmod 4$, we have $e=1$ and $a=1$, and according to Lemma~\ref{Lpcont}, for some integer $b\geq3$ we have that $\nu_2(d^{2^{i+1}}-1)=b+i$ for all $i\geq0$. So as a consequence of our claim, $o(2^{s-2}M_2)=2$ and $\nu_2(d^{o(2^{s-2}M_2)}-1)=\nu_2(d^2-1)=b$; then for $s-b\leq t\leq s-2$, the order $o(2^t M_2)$ is still equal to $2$, with $\nu_2(d^{ o(2^t M_2)}-1) = \nu_2(d^2-1)=b$, and for $t=s-b-i$ with $i\geq1$, the order $o(2^t M_2)$ is equal to $2^{i+1}$, with $\nu_2(d^{ o(2^t M_2)}-1) = \nu_2(d^{2^{i+1}}-1)=b+i$. This proves the result in the ``exceptional'' case.
\end{proof}
Now we are ready to define the $\te_v$ in the cases where $v\in\cV^*$, that is, when $v$ is of the form $p^tM_p$ for some $p\in \mathbb{P}$ with $0\leq t <\nu_p(m)$. In the ``non-exceptional case'', i.e. for $p$ odd, for $p=2$ and $d\equiv 1\bmod 4$, or for $4\nmid m$, we let \[ \te_{M_p}=\te_{m/\pi_p(m)}=\pi_p(m)e_{M_p},\] and for $1\leq t<\nu_p(m)$, we let
\[ \te_{p^tM_p}=e_{p^tM_p} - \lambda_{p,t} e_{M_p},\] where $\lambda_{p,t}$ is such that \beql{LEordcon} \lambda_{p,t}=\frac{d^{o(M_p)}-1}{d^{o(p^tM_p)}-1} \mu_{p,t} \equiv p^t \bmod \pi_p(m)\end{equation}
for some suitable integer $\mu_{p,t}$. (We will show in a moment that this is indeed possible.) In the ``exceptional case'' where $p=2$ and $d\equiv 3\bmod 4$ and when also $4|m$, we do not change the definition of~$\te_{2^tM_2}$ for $t=1, \ldots, \nu_2(m)-2$, but we let
\beql{LEte2}\te_{M_2}=e_{2^{s-1}M_2} -2^{s-1}e_{M_2}, \qquad \te_{2^{s-1}M_2} = \te_{m/2}= 2e_{2^{s-1}M_2},\end{equation} where $s=\nu_2(m)$. \btm{LLte2} For $v\in \cV^*$, let $\te_v $ be defined as above. Then $\te_v\in S(m,d)$. Moreover, $\te_v$ and $e_v$ have the same order in~$S(m,d)$, except in the following two cases.
1. In the ``non-exceptional'' case where $p\in \mathbb{P}$ with $p\neq 2$ or $d\equiv 1 \bmod 4$, and also when
$4\nmid m$,
the order of $\te_{M_p}=\pi_p(m)e_{M_p}$ is
$1/\pi_p(m)$ times the order of $e_{M_p}$.
2. In the ``exceptional''
case where $p=2$ and $d\equiv 3 \bmod 4$, if also $4|m$ then the order of $\te_{m/2}$ is half the order of $e_{m/2}$ and
the order of $\te_{M_2}$ is $2/\pi_2(m)$ times the order of $e_{M_2}$.
Finally, the $\te_v$ for $v\in \cV$
are independent and generate $S(m,d)$, and Theorem~\ref{LTmain} holds. \end{teor} \begin{proof}
We begin by showing that the $\te_{p^tM_p}$ with $1\leq t\leq \nu_p(m)-1$ are well-defined. To this end, first observe that by definition of the order, we have that $o(\lambda v) |o(v)$, and hence $d^{o(\lambda v)}-1$ divides $d^{o(v)}-1$. So the fraction in (\ref{LEordcon}) is an integer, and as a consequence of Corollary~\ref{LLordcon}, in all relevant cases the exponent of the highest power of $p\in \mathbb{P}$ dividing this integer is at most $t$, so that an integer $\mu_{p,t}$ for which (\ref{LEordcon}) holds indeed can be found.
Next, Theorem~\ref{Lsub} states that $\te_{p^tM_p}=e_{p^tM_p}-\lambda_{p,t}e_{M_p}$ is in~$S(m,d)$ if and only if $p^tM_p - \lambda_{p,t} M_p\equiv 0 \bmod m$, or, equivalently, if $p^t \equiv \lambda_{p,t} \bmod \pi_p(m)$, which holds since it is just the second requirement in (\ref{LEordcon}). By the same theorem, obviously $\te_{M_p} =\pi_p(m)e_{M_p}$ is also in~$S(m,d)$, and the same holds for $\te_{M_2}$ and $\te_{m/2}$ as defined in (\ref{LEte2}).
Then, we observe that $\te_{p^tM_p}=e_{p^tM_p}-\lambda_{p,t}e_{M_p}$ and $e_{p^tM_p}$ have the same order if and only if $(d^{o(p^tM_p)}-1)\lambda_{p,t}e_{M_p}=0$, or, equivalently, if $d^{o(M_p)}-1$ divides $(d^{o(p^tM_p)}-1)\lambda_{p,t}$; this holds since it is just the first requirement in (\ref{LEordcon}). Also, since $\pi_p(m)$ divides the order $d^{o(M_p)}-1$ of $e_{M_p}$ for $p\in
\mathbb{P}$, we immediately have that $\te_{M_p}=\pi_p(m)e_{M_p}$ has the order as claimed. Now consider the exceptional case where $p=2$, $d\equiv 3\bmod 4$ and $4|m$. Since $e_{m/2}$ has order $d-1$, which is $2\bmod 4$, the order of $\te_{m/2}$ is as claimed. Now let $s=\nu_2(m)$. To determine the order of $\te_{M_2}$, we first note that $e_{M_2}$ has order $d^f-1$, where $f$ is the order of $d$ modulo $2^s$. Hence $2^{s-1}e_{M_2}$ has order $(d^f-1)/2^{s-1}$. We claim that $\te_{M_2}$ has the same order. To show this, it is sufficient to show that the order $d-1$ of $e_{m/2}$ divides $(d^f-1)/2^{s-1}$. To this end, note that both $(d-1)/2$ and $2^{s}$ divide $d^f-1$; since $d\equiv 3 \bmod 4$, we know that $(d-1)/2$ is odd. Hence $((d-1)/2, 2^s)=1$, from which the conclusion follows.
Finally, it is fairly obvious that the $\te_{p^tM_p}$ for $p\in \mathbb{P}$ and $1\leq t\leq \nu_p(m)-2$ together with $e_{M_p}$ and $e_{m/p}$ for $p\in \mathbb{P}$ are independent and generate the same subgroup as the $e_v$ for $v\in\cV^*$. Applying Theorem~\ref{LTte1}, we conclude that the $\te_v$ for $v\in \cV$ are independent, and thus
form a basis of a subgroup of $S(m,d)$. From the expressions for the orders derived above, we conclude that this subgroup has order $|\gS(m,d)|/m$; now from Corollary~\ref{LCord} we see that in fact the $\te_v$ with $v\in \cV$ generate $S(m,d)$. So $S(m,d)=\bigoplus_{v\in \cV} \langle \te_v\rangle$, and now Theorem~\ref{LTmain} follows from the order expressions. \end{proof}
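For example, for $m=5$ and $d=2$ we have $\cV=\cV^*=\{1\}$ with $1=M_5$, and $\te_1=\pi_5(5)e_1=5e_1$. Since $e_1$ has order $2^4-1=15$ in $\gS(5,2)$, the element $\te_1$ has order $3$, and indeed $S(5,2)=\langle 5e_1\rangle\cong\mathbb{Z}_3$, in agreement with Theorem~\ref{LTmain}.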
\subsection{\label{ssect:ktzrp}Adaptations for the case of generalized Kautz graphs}
Essentially, the above analysis for the case of the generalized de Bruijn graphs only depends on the orbits of the map $x\mapsto dx$ on $\mathbb{Z}_m\setminus\{0\}$ and the order of $d$ modulo powers of $p$. Thus, similar results hold when $d<0$; the only difference is that relevant group elements $g$ now have orders of the form
$|d^e-1|/c$, so that the group $\langle g \rangle$ is
isomorphic to $\mathbb{Z}_{|d^e-1|/c}$. We leave the details for the interested reader.
\section{The group of invertible circulant matrices}
In \cite{DuPa-neck} it was shown that a family of bijections between the set of aperiodic necklaces of length~$n$ over the finite field $\mathbb{F}_q$, with $q$ prime, and the set of degree $n$ normal polynomials over $\mathbb{F}_q$, gives rise to a permutation group on any of these sets. This group turns out to be isomorphic to $C(n,q)$; as well, \cite{DuPa-neck} conjectured a relation between $C(n,q)$ and $S(n,q)$. The present work confirms this relation\footnote{In view of this relation, it would be desirable to have an explicit embedding of the group $S(n,q)$ as a subgroup of~$C(n,q)$.} with the sandpile group $S(n,q)$ of $\mathrm{DB}(n,q)$. We show that indeed $C'(n,q)/\langle x\rangle$ is isomorphic to~$S(n,q)$ for all $n$ and all primes $q$, and also for all prime powers $q$ provided that $(n,q)=1$, by explicitly computing a decomposition into cyclic subgroups. While the case $(n,q)=1$ appears to be well-understood, we did not find an explicit reference in the literature. The general case is harder, and the only relevant reference we found was the case $n=2^k$, $p=2$ dealt with in \cite[Prop. XI.5.7]{MR0249491}.
In what follows, we use the description for $C'(n,q)$ as given in (\ref{LECp}).
\btm{LTiso-reg-circ} Let $q$ be a prime power, and let $m$ be a positive integer with $(m,q)=1$.
Then \[C'(m,q)\cong \gS(m,q)\] and \beql{LE-reg-circ} C'(m,q) / \langle x \rangle \cong S(m,q). \end{equation}
\end{teor}
\bex{LEnotgen} We remark that the condition $(m,q)=1$ (respectively $(n,d)=1$) is necessary in Theorem~\ref{LTiso-reg-circ} (respectively in Theorem~\ref{LTmain-circ}). Indeed, one may check directly that $C'(9,9)/\langle x\rangle\cong \mathbb{Z}_9^4\oplus \mathbb{Z}_{27}\oplus\mathbb{Z}_3^3$, while $S(9,9)\cong \mathbb{Z}_9^7$.
\end{examp}
\begin{proof} Let $\cV$ be defined as in the beginning of Section~\ref{sect:rp}, so that $\cV^+=\cV\cup\{0\}$ is
a complete set of orbit representatives for the map $x\mapsto qx$ on $\mathbb{Z}_m$ containing the set $\cV^*$ of all numbers of the form $p^iM_p$ for a prime $p\in \mathbb{P}$, with $M_p=m/\pi_p(m)$. (Refer to the beginning of Section~\ref{sect:rp} for definitions of $\cV^*$, $M_p$, and $\pi_p(m)$.)
We write $o(v)$ to denote the size of the orbit of $v\in\cV^+$.
Next, let $q$ have order $k$ modulo $m$, so that $m|q^k-1$, and let $\alpha$ be a primitive element of~$\mathbb{F}_{q^k}$; put $\xi=\alpha^{(q^k-1)/m}$. For $v\in \cV^+$, let $f_v(x)$ denote the minimal polynomial of $\xi^{v}$ over $\mathbb{F}_q$; note that $f_v(x)=\prod_{i=0}^{o(v)-1} (x-\xi^{q^iv})$ is $\mathbb{F}_q$-irreducible of degree~$o(v)$. Then we can write $x^m-1=\prod_{v\in \cV^+} f_v(x)$. Thus $C(m,q)\cong \displaystyle\prod_{v\in \cV^+} \left(\mathbb{F}_q[x]/( f_v(x))\right)^*$ and \[C'(m,q)\cong \prod_{v\in \cV}\left(\mathbb{F}_q[x] /(f_v(x))\right)^*
\cong \oplus_{v\in \cV} \mathbb{F}_{q^{o(v)}}^*
\cong \oplus_{v\in \cV} \mathbb{Z}_{q^{o(v)}-1}.\] Hence $C'(m,q)\cong \gS(m,q)$ by Theorem~\ref{TSd}.
Next, we consider the quotient $C'(m,q)/\langle x \rangle$. The image of $x\bmod f_v(x)$ in~$\mathbb{F}_{q^{o(v)}}$ is $\xi^{v}$; since $\xi$ has order~$m$, we see that $x$ has order~$m/(m,v)$ in~$\mathbb{F}_{q^{o(v)}}$. Hence for each $v\in \cV$,
we can choose a primitive element $\beta_v$ in~$\mathbb{F}_{q^{o(v)}}\cong\mathbb{F}_q[x]\bmod f_v(x)$, so with $\beta_v$ of order $q^{o(v)}-1$, such that the image of $x$ in~$\mathbb{F}_{q^{o(v)}}$ is $\beta_v^{r_v}$, where \beql{LEfi}r_v=\frac{q^{o(v)}-1}{m/(m,v)};\end{equation} as a consequence, we have that \beql{LE-sp-cong} G:=C'(m,q) / \langle x \rangle \cong \langle \beta_v \mid v\in \cV, \mbox{$\ord(\beta_v)=q^{o(v)}-1$ ($v\in \cV$) and $\prod_{v\in \cV} \beta_v^{r_v}=1$} \rangle. \end{equation}
To obtain the group structure of $G$ in (\ref{LE-sp-cong}), we investigate the Sylow-$p$ subgroup $G_p\leq G$ for each prime $p$. After some standard manipulations, i.e. writing $\beta_v=\prod_p \beta_{v,p}$ with $\ord(\beta_{v,p})=\pi_p( q^{o(v)}-1)$, then fixing a prime $p$ and letting $\gamma_v=\beta_{v,p}$,
we obtain that \[G_p = \langle \gamma_v\mid v\in \cV, \mbox{$\ord(\gamma_v)=\pi_p(q^{o(v)}-1)$ ($v\in\cV$) and $\prod_{v\in \cV} \gamma_v^{s_v}=1$} \rangle,\] where
$s_v=\pi_p(r_v)=\pi_p((q^{o(v)}-1)(v,m)/m)$ for all $v\in \cV$. First, observe that if \mbox{$p\!\!\not | m$}, then $s_v=\pi_p(q^{o(v)}-1)=\ord(\gamma_v)$, and hence \[G_p= \bigoplus_{v\in \cV} \mathbb{Z}_{\pi_p(q^{o(v)}-1)}.\]
Now, let $p|m$. We need to determine for which $v\in \cV$ the number $s_v$ is minimal.
\smallskip
\noindent
{\bf Claim 1}: If $p|m$ and $\pi_p(v)=p^k$, then $s_v\geq s_{p^kM_p}$.
\smallskip
\noindent Indeed, first note that $p^kM_p(p^e-1)\equiv 0\bmod m$ precisely when $p^e-1\equiv 0 \bmod \pi_p(m)/p^k$.
Then, writing $v=p^kw$ with $(p,w)=1$, we see that $p^kw(p^e-1)\equiv 0 \bmod m$ implies that
$(p^e-1) \equiv 0 \bmod \pi_p(m)/p^k$, and the claim is now obvious.
\smallskip
\noindent {\bf Claim 2}: If $p\neq 2$ or $q\not \equiv 3 \bmod 4$ or
$4\!\!\not | m$, then $s_v$ is minimal for $v=M_p=m/\pi_p(m)$.
\smallskip
\noindent Indeed, if $p=2$ and \mbox{$4\!\!\not | m$}, the claim follows immediately from claim 1. Otherwise, let $v_{j}=m/p^j$, and let $\lambda_j=\pi_p(q^{o(v_j)}-1)$.
As a consequence of Lemma~\ref{Lpcont}, we have that \[(\lambda_1, \lambda_2, \ldots ) = (p^a, \ldots, p^a, p^{a+1}, p^{a+2}, \ldots),\] where $p^a$ occurs $a$ times, for some $a\geq 1$. So, since $s_{v_j}=\lambda_j p^{-j}$, we see that $s_{v_j}$ is minimal when $j$ is as large as possible. Now the claim follows from claim 1.
To complete the determination of~$G_p$ in the non-exceptional case where $p\neq 2$ or $q\not \equiv 3 \bmod 4$, or when $4\nmid m$, we proceed as follows.
Define $\delta = \gamma_{M_p} \prod_{v\neq M_p} \gamma_v^{s_v/s_{M_p}}$. Note that $\delta$ has order $s_{M_p}=\pi_p(q^{o(M_p)}-1)/\pi_p(m)$ in $G_p$. Obviously, $\delta$ and the $\gamma_v$ with $v\neq M_p$ generate $G_p$, so we conclude that \[G_p\leq \mathbb{Z}_{\pi_p(q^{o(M_p)}-1)/\pi_p(m)} \oplus \bigoplus_{v\in \cV\setminus\{M_p\}} \mathbb{Z}_{\pi_p(q^{o(v)}-1)}.\]
Now consider the exceptional case where $p= 2$ and $q \equiv 3 \bmod 4$ and $4|m$. With the same notation as after claim 2, Lemma~\ref{Lpcont} now implies that \[(\lambda_1, \lambda_2,\ldots) = (2, 2^b, \ldots, 2^b, 2^{b+1}, 2^{b+2}, \ldots),\] where $2^b$ occurs $b-1$ times, for some $b\geq 3$. So \[(s_{v_1}, s_{v_2}, \ldots) = (1, 2^{b-2}, \ldots, 2, 1, 1, \ldots).\] Since $v_1=m/2$, we see that $s_{m/2}=1$ and $o(m/2)=1$, so $\ord(\gamma_{m/2})=\pi_2(q^{o(m/2)}-1)=\pi_2(q-1)=2$. Also,
note that $s_{M_2}= \pi_2(q^{o(M_2)}-1)/\pi_2(m)$. Now define $\delta=\gamma_{M_2}\prod_{v\neq M_2, m/2} \gamma_v^{s_v/s_{M_2}}$. Then $\delta^{s_{M_2}}=\gamma_{m/2}^{-s_{m/2}}=\gamma_{m/2}$, and since $\ord(\gamma_{m/2})=2$, we conclude that $\delta$ has order $2s_{M_2}$. Also, since $s_{m/2}=1$, $\gamma_{m/2}$ can be expressed in terms of elements $\gamma_v$ with $v\neq m/2$. It is now easy to see that $G_2$ is generated by $\delta$ and $\gamma_v$ for $v\in\cV \setminus\{M_2, m/2\}$, and since $\pi_2(q^{o(m/2)}-1)=2$, we conclude that in this case \[G_2\leq\mathbb{Z}_{2\pi_2(q^{o(M_2)}-1)/\pi_2(m)} \oplus \mathbb{Z}_{(q^{o(m/2)}-1)/2} \oplus \bigoplus_{v\in \cV\setminus\{M_2,m/2\}} \mathbb{Z}_{\pi_2(q^{o(v)}-1)}.\]
If we combine the information about the various Sylow $p$-subgroups, we may conclude from Theorem~\ref{LTmain} that $G$ is a subgroup of
$S(m,q)$. Since $|G|=|C'(m,q)/\langle x \rangle|=
|C'(m,q)|/m=|S(m,q)|$, we conclude that $G\cong S(m,q)$ as desired. \end{proof}
\section{The group $C(n,p)$ for general $n$ and prime $p$}
\newcommand{\bbZ}{\mathbb{Z}}
We work with $C(n,p)$ as the group of invertible circulant matrices over $\mathbb{F}_p$ and with $C^*(n,p)\leq C(n,p)$ as the subgroup of matrices with eigenvalue $1$ on the eigenvector $(1, \ldots, 1)$.
We first compute the group decomposition of $C^*(n,p)$ and then proceed to compute the group decomposition of $C^*(n,p) / \langle Q_n \rangle $.
\ble{LLsh1} Let $p$ be prime and $(m,p)=1$. Then \[C^*(p^km,p)= \left[ \bigoplus _{i=0}^{k-2} \mathbb{Z}_{p^{k-1-i}}^{p^{i} (p-1)^2 m} \right] \oplus \bbZ_{p^k}^{(p-1)m} \oplus C^*(m,p).\] \end{lem}
\begin{proof} Note that $\phi : C^*(pn,p) \to C^*(pn,p)$ given by $x \mapsto x^p$ is a well-defined homomorphism, as $\phi$ preserves both invertibility and the eigenvalue $1$ on the common eigenvector $(1, \ldots, 1)$. It can be easily checked that
$|{\rm Ker}(\phi)|=p^{(p-1)n}$.
It can also be easily checked that $(C^*(pn,p))^p$ is isomorphic to $C^*(n,p)$, via $\sum_{i=0}^{n-1} a_i Q_{pn}^{p i} \mapsto \sum_{i=0}^{n-1} a_i Q_n^{i}.$ Hence $\phi$ can be viewed as a surjective homomorphism from $C^*(pn,p)$ to $C^*(n,p)$.
We prove the lemma by using induction on $k$. It is easy to see that $C^*(pm,p)=\bbZ_p^{a_1} \oplus C^*(m,p)$ since $(C^*(pm,p))^p
\simeq C^*(m,p)$. Since in this case $|{\rm Ker}(\phi)|=p^{(p-1)m}$, we can conclude that $a_1=(p-1)m$ and the statement is true for $k=1$.
By using the fact that $(C^*(p^{k+1}m,p))^p \simeq C^*(p^km,p)$, by induction we conclude that \[ C^*(p^{k+1}m,p) = \bbZ_p^{a_1} \oplus \left[ \bigoplus _{i=0}^{k-2} \mathbb{Z}_{p^{k-i}}^{p^{i} (p-1)^2 m} \right] \oplus \bbZ_{p^{k+1}}^{(p-1)m} \oplus C^*(m,p). \]
It remains to find $a_1$. Note that when $n=p^{k+1}m$, we have \[ {\rm Ker} (\phi)= \bbZ_p^{a_1} \oplus \left[ \bigoplus _{i=0}^{k-2} \mathbb{Z}_{p}^{p^{i} (p-1)^2 m} \right] \oplus \bbZ_{p}^{(p-1)m}. \] Recall that
$|{\rm Ker}(\phi)|=p^{(p-1)p^km}$. Thus we have two expressions for
$|{\rm Ker}(\phi)|$. Equating them, we have: \begin{align*} (p-1)p^km&= a_1 + \left[ \sum_{i=0}^{k-2}p^i (p-1)^2m \right] +(p-1)m, \\ (p-1)p^km &= a_1 + (p^{k-1}-1)(p-1)m +(p-1)m, \\
a_1&= p^{k-1}(p-1)^2m, \end{align*} which completes the induction. \end{proof}
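For example, for $p=2$, $k=2$ and $m=1$ the lemma gives $C^*(4,2)\cong\bbZ_2\oplus\bbZ_4$; indeed, $C^*(4,2)$ consists of the invertible elements $f$ of $\mathbb{F}_2[x]/(x^4-1)$ with $f(1)=1$, and it is generated by the image of $x$ (of order $4$) together with $x+x^2+x^3$ (of order $2$).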
Finally, we complete the proof of Theorem~\ref{LTmain-circ}. Let $d=p$ be prime, $n=p^km$ and $(p,m)=1$. Since $\langle Q_n \rangle$ is cyclic of order $p^km$, its Sylow $p$-subgroup is $Z_{p^k}$. Combined with Lemma~\ref{LLsh1}, this implies that the Sylow $p$-subgroup of the quotient group $C^*(n,p)/\langle Q_n \rangle$ equals \[ \mathrm{Syl}_p( C^*(p^{k}m,p)/ \langle Q_n \rangle) = \left[ \bigoplus _{i=0}^{k-2} \mathbb{Z}_{p^{k-1-i}}^{p^{i} (p-1)^2 m} \right] \oplus \bbZ_{p^{k}}^{(p-1)m-1}.\]
By Lemma~\ref{LTcrt} and Theorem~\ref{LTmain1} with $d=p$, the Sylow $p$-subgroup of $S(n,p)$ satisfies \[\mathrm{Syl}_p(S(n,p))\cong S_0(n,p)
\cong \bigoplus_{i=0}^{k-1}\biggl(\mathbb{Z}_{p^{i+1}/d_i}\oplus \mathbb{Z}_{p^{i+1}}^{n_i-2n_{i+1}+n_{i+2}-1}\biggr),\] with $n=n_0,n_1,\dots,n_k=n_{k+1}$ the $p$-sequence of $n$, and $d_i=n_i/n_{i+1}=(n_i,p)$ for $0\leq i\leq k$. As $n=p^km$, we have $n_i=p^{k-i}m$ for $0\leq i\leq k$ and $n_{k+1}=m$, while $d_i=p$ for $0\leq i\leq k-1$ and $d_k=1$. Plugging these values into the above equation, we have \[ \mathrm{Syl}_p(S(p^km,p))= \bbZ_{p^{k-1}} \oplus \bbZ_{p^k }^{(p-1)m-1} \oplus \bigoplus_{i=0}^{k-2} \left[ \bbZ_{p^{i}} \oplus \bbZ_{p^{i+1}}^{p^{k-2-i}(p-1)^2m-1} \right]. \] Changing $i$ to $k-2-i$ in the above summation, we have \begin{align*} \mathrm{Syl}_p(S(p^km,p))&= \bbZ_{p^{k-1}} \oplus \bbZ_{p^k }^{(p-1)m-1} \oplus \bigoplus_{i=0}^{k-2} \left[ \bbZ_{p^{k-2-i}} \oplus \bbZ_{p^{k-1-i}}^{p^{i}(p-1)^2m-1} \right]\\ &= \left[ \bigoplus _{i=0}^{k-2} \mathbb{Z}_{p^{k-1-i}}^{p^{i} (p-1)^2 m} \right] \oplus \bbZ_{p^{k}}^{(p-1)m-1}\\ &= \mathrm{Syl}_p(C^*(p^{k}m,p)/ \langle Q_n\rangle) \end{align*} and the proof of Theorem~\ref{LTmain-circ} is complete.
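As a quick check, for $p=2$, $k=2$, $m=1$ both sides equal $\bbZ_2$: the sandpile group satisfies $\mathrm{Syl}_2(S(4,2))=S(4,2)\cong\bbZ_2$, while $C^*(4,2)/\langle Q_4\rangle\cong(\bbZ_2\oplus\bbZ_4)/\langle Q_4\rangle\cong\bbZ_2$, since $Q_4$ generates the $\bbZ_4$ summand.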
\iffalse \section{Conclusions}
In this paper, we have determined the sandpile group of generalized de Bruijn graphs $\mathrm{DB}(n,d)$ and generalized Kautz graphs $\mathrm{Ktz}(n,d)$, both directed graphs on~$n$ vertices with all in-degrees and out-degrees equal to $d$. This generalizes earlier results for the from \cite{LeLI-sand} for the binary de Bruijn graphs $\mathrm{DB}(2^\ell,2)$ and Kautz graphs (with $p$ prime) $\mathrm{Ktz}((p-1)p^{\ell-1},p)$, and \cite{BiKi-knuth} for the classical de Bruijn graphs $\mathrm{DB}(d^\ell,d)$ and Kautz graphs $\mathrm{Ktz}((d-1)d^{\ell-1},d)$. Moreover, we have established a nontrivial connection between the sandpile group of $\mathrm{DB}(n,q)$ and the group of invertible $n\times n$ circulants $C(n,q)$ over a finite field $\mathbb{F}_q$ when $q$ is prime or when $(n,q)=1$, which could potentially lead to ...{\color{red}HDLH: I leave this to you Dima} \fi
\end{document} | arXiv |
Multisets of a given set
A multiset is an unordered collection of elements where elements may repeat any number of times. The size of a multiset is the number of elements in it counting repetitions.
(a) What is the number of multisets of size $4$ that can be constructed from $n$ distinct elements so that at least one element occurs exactly twice?
(b) How many multisets can be constructed from $n$ distinct elements?
For part b, infinite is correct.
For part a, taking $n=3$ and elements $\{1,2,3\}$ we have multisets as: $\{1,1,2,2\}, \{1,1,3,3\}, \{1,1,2,3\}, \{2,2,3,3\}, \{2,2,1,3\}, \{3,3,1,2\}$, for a total of $6$.
Similarly for $n=4$ and using elements $\{1,2,3,4\}$, we have $18$ multisets. There must be some formula, or we have to develop one!
I am in particular looking for a formula when there is a restriction on the number occurrences in the multiset.
combinatorics sets
Here is one way to go about your part (a):
There are four places to be filled in the multiset using the $n$ distinct elements. At least one element has to occur exactly twice. That would leave 2 more places in the multiset. This means at most two elements can occur exactly twice. We can thus divide this into 2 mutually exclusive cases as follows:
Exactly one element occurs exactly twice: Select this element in ${n\choose{1}} = n$ ways. Fill up the remaining two spots using 2 distinct elements from the remaining $n-1$ elements in ${{n-1}\choose{2}}$ ways. Overall: $n \cdot {{n-1}\choose{2}} = \frac{n(n-1)(n-2)}{2}$ ways.
Exactly two elements that occur twice each: These two will fill up the multiset, so you only have to select two elements out of $n$ in ${n\choose 2} = \frac{n(n-1)}{2}$ ways.
Since these are mutually exclusive, the total number of ways to form the multiset is: $$\frac{n(n-1)(n-2)}{2} + \frac{n(n-1)}{2}$$$$ = \frac{n(n-1)^2}{2}$$
Note, that $n \ge 2$ otherwise no element can be present twice. This is also obvious from the formula (when $n = 0, 1$).
You can check that the formula tallies with your counts.
Paresh
$\begingroup$ one more thing suppose we are given {a,b,c,d} and {1*a,2*b,3*c,5*d} than way to distribute 10 vertices among a,b,c,d ? $\endgroup$ – user1771809 Dec 25 '12 at 13:46
$\begingroup$ @user1771809 What do you mean by {1*a,2*b,3*c,5*d}? What do you mean by distribute 10 vertices among a,b,c,d? Please be clearer. And if this is a reasonably different question, you might want to post a separate question too. $\endgroup$ – Paresh Dec 25 '12 at 17:53
CGWeek II
Some highlights from day 2 of CGWeek:
On Laplacians of Random Complexes, by Anna Gundert and Uli Wagner.
The Cheeger inequality is a well known inequality in spectral graph theory that connects the "combinatorial expansion" of a graph with "spectral expansion". Among other things, it's useful for clustering, because you can split a graph using Cheeger's inequality to find a "thin cut" and then repeat. There's been work recently on a "higher-order" Cheeger inequality, that allows you to cut a graph into $k$ pieces instead of two by connecting this to observations about the first $k$ eigenvectors of the Laplacian.
The above paper generalizes the notion of combinatorial expansion versus spectral expansion in a very different way. Think of a graph as the 1-skeleton of a simplicial complex (i.e the set of faces of dimension at most 1). Is there a way to define the Laplacian of the complex itself ? And is there then an equivalent of Cheeger's inequality ?
It turns out that this can indeed be done. Recent work by Gromov and others have shown how to define the notion of "edge expansion" for a simplicial complex. Roughly speaking, you compute the edge expansion of a cochain (a function of a chain) by endowing it with a norm and then looking at the norm of the edge operator with respect to the norm (sort of) of the cochain itself. What is interesting is that if you choose the underlying coefficient field as $\mathbb{R}$ and the norm as $\ell_2$, you get the spectral equivalent, and if you choose instead $\mathbb{Z}_2$ and the Hamming distance, you get the combinatorial equivalent.
It's known that for 1-skeletons, and even for things like persistence, the underlying field used to define homology doesn't really matter. However, for this problem, it matters a lot. The authors show that there is no equivalent Cheeger's inequality for simplicial complexes ! They also look at random complexes and analyze their properties (just like we can do for random graphs).
Add Isotropic Gaussian Kernels at Own Risk: More and More Resilient Modes in Higher Dimensions, by Edelsbrunner, Fasy and Rote
Suppose you have three Gaussians in the plane, and you look at the resulting normalized distribution. You expect to see three bumps (modes), and if the Gaussians merge together, you'd expect to see the modes come together in a supermode.
Can you ever get more than three modes ?
This is the question the above paper asks. It was conjectured that this cannot happen, and in fact in 2003 it was shown that it was possible to get 4 modes from three Gaussians (you can get a little bump in the middle as the three Gaussians pull apart). In this paper, they show that in fact you can get a super-linear number of "bumps" or critical points for $n$ Gaussians, and these modes are not transient - they "persist" in a certain sense.
This is quite surprising in and of itself. But it's also important. A plausible approach to clustering a mixture of Gaussians might look for density modes and assign cluster centers there. What this paper says is that you can't really do that, because of the "ghost" centers that can appear.
Other quick hits:
Chris Bishop ran a very interesting workshop on Analysis and Geometry. While I couldn't attend all the talks, the first one, by Ranaan Schul, gave an overview of the analytic TSP, in which you're given a continuous set of points in the plane, and want to know if there's a finite length curve that passes through all the points. The techniques used here relate to multiscale analysis of data and things like local PCA.
One of the cool innovations in CGWeek was the fast-forward session: each afternoon before the workshops started, speakers were allowed to give a 1-slide overview of what would happen in their events. The slides were all placed in a single presentation ahead of time, and it was clocked so that people couldn't run on. It was great - I didn't have to do careful planning, and got a much better idea of what was coming up. We should do this for all talks at the conference !
Labels: cs.CG, socg2012 | CommonCrawl |
\begin{definition}[Definition:Reduced Residue System]
Let $m \in \Z_{> 0}$ be a (strictly) positive integer.
The '''reduced residue system modulo $m$''', denoted $\Z'_m$, is the set of all residue classes of $k$ (modulo $m$) which are prime to $m$:
:$\Z'_m = \set {\eqclass k m \in \Z_m: k \perp m}$
Thus $\Z'_m$ is the '''set of all coprime residue classes modulo $m$''':
:$\Z'_m = \set {\eqclass {a_1} m, \eqclass {a_2} m, \ldots, \eqclass {a_{\map \phi m} } m}$
where:
:$\forall k: a_k \perp m$
:$\map \phi m$ denotes the Euler phi function of $m$.
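For example, for $m = 12$:
:$\Z'_{12} = \set {\eqclass 1 {12}, \eqclass 5 {12}, \eqclass 7 {12}, \eqclass {11} {12} }$
since $\map \phi {12} = 4$ and the integers between $1$ and $12$ which are prime to $12$ are $1, 5, 7, 11$.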
\end{definition} | ProofWiki |
Effort or timing: The effect of lump-sum bonuses
Thomas J. Steenburgh1
QME volume 6, Article number: 235 (2008)
This article addresses the question of whether lump-sum bonuses motivate salespeople to work harder to attain incremental orders or whether they induce salespeople to play timing games (behaviors that increase incentive payments without providing incremental benefits to the firm) with their order submissions. We find that lump-sum bonuses primarily motivate salespeople to work harder—a result that is consistent with the widespread use of bonuses in practice, but that contradicts earlier empirical work in academics.
Those who manage salespeople commonly believe that lump-sum bonuses are effective motivators. A recent field survey (Joseph and Kalwani 1998) finds that 72% of firms use bonuses in their sales incentive contracts, whereas only 58% use commission rates, the next most common form of incentive pay.Footnote 1 Moynahan (1980, p. 149) states in his book on designing effective sales incentive contracts that "for the majority of industrial sales positions, [lump-sum bonuses are] probably the optimum form of compensation." While lump-sum bonuses are not considered to be the only sound way to motivate salespeople, they are widely regarded in the trade literature (Agency Sales Magazine, Sep 2001; Bottomline, Oct 1986) and in textbooks on sales compensation planning (Churchill et al. 2000) as effective motivators.
Given the business world's preoccupation with lump-sum bonuses, it is interesting to note that academics are divided as to their effectiveness. Two main arguments are advanced against their use. First, as Holmstrom and Milgrom (1987) and Lal and Srinivasan (1993) point out, the motivational effects of lump-sum bonuses disappear once sales quotas have been met and incentives have been earned. "It is not uncommon," write Lal and Srinivasan, "to hear of salespeople spending time playing golf or indulging in other leisurely activities if their past efforts have been unusually successful."Footnote 2 A flat commission rate, on the other hand, should not induce such fluctuations in behavior since the incentive to work is constant over time and independent of how well or poorly an individual has performed in the past.
Second, as Oyer (1998) and Jensen (2003) point out, lump-sum bonuses tempt salespeople to manipulate the timing of orders to meet sales quotas without having to expend additional effort. This type of behavior can take two forms. Salespeople who have already made quota are encouraged to push out new orders to the next period to make attaining future quotas easier to accomplish, a behavior termed delayed selling. On the other hand, salespeople who would otherwise fall short of their current quota are encouraged to pull in orders from the next period, a behavior termed forward selling. These behaviors are in conflict with the firm's interest because they result in higher incentive costs without returning concomitant gains.
Adverse consequences notwithstanding, some academics maintain that lump-sum bonuses are effective motivators. Darmon (1997), among others, makes the point that providing bonuses encourages individuals to reach for sales targets that they otherwise might not attain.
The rationale for such plans is simple and well known: Quotas are set so as to provide salespeople with objectives that are challenging and worth being achieved. In order to enhance salespeople's performance, management grants them some reward when they reach a pre-specified performance level (the quota) which is higher than the level they would have achieved otherwise.
Attention to the study of how goals, such as sales quotas, affect motivation dates to the experimental work of Hull (1932, 1938) and Mace (1935). Latham and Locke (1991) present the findings of hundreds of subsequent studies in the goal-setting literature. McFarland et al. (2002) discuss how multiple quotas affect sales call selection; Darmon (1997) discusses what influences management to select specific bonus contract structures; and Mantrala et al. (1994) use agency theory to develop an approach for determining optimal bonus contracts.
These arguments for and against lump-sum bonuses suggest the basic question that must be asked by firms considering whether to offer them: will the productive gains from increased effort outweigh the counterproductive losses? This question is not entirely new to marketing since the same basic concern applies to the promotion of consumer packaged goods. Just as bonuses can motivate either productive effort or unproductive timing games, consumer promotions can increase demand either through increased consumption (primary demand) or through brand switching (secondary demand). Gupta (1988), Van Heerde et al. (2003), and Steenburgh (2007) are among many others who have addressed this issue in the promotions literature.
Although much attention has been given to consumer promotions, little empirical work has been devoted to the effects of sales incentive contracts. A notable exception is Oyer (1998), who provides empirical evidence that nonlinear incentive contracts induce temporal variation in firms' output. Using firm-level data across many industries, Oyer finds that firms' reported revenue tends to increase in the fourth quarter and to dip in the first quarter of their fiscal years. This result is consistent with the notion that some agents of the firm, whether salespeople or executives, are varying effort, manipulating the timing of sales, or both in response to annual incentive contracts. As the magnitude of the spikes and dips are roughly equivalent in Oyer's analysis (see Table 1 for estimates from a few industries), we might infer that timing games play a particularly important role.
Table 1 Bonus plan effects across industries
In contrast, this study suggests that lump-sum bonuses primarily motivate salespeople to work harder. Our results are based on a unique dataset that differs from Oyer's (1998) in several important respects. First, it offers a more refined view of how salespeople behave because it is based on individual-level rather than firm-level output. This is an important distinction because theory suggests that some salespeople should game the system by delaying sales and others by moving sales forward at a given point in time. It is not possible to simultaneously observe the effects of both behaviors on sales using aggregate data.Footnote 3 Second, whereas Oyer does not directly observe the firms' incentive contracts and reasonably assumes that incentives are offered at the fiscal-year end, we do observe the contracts under which salespeople work. We show that directly observing the structure and timing of incentives is critical to understanding whether greater effort or timing games explain the resulting variation in output. Finally, we observe the output of a group of salespeople who lack incentives to concentrate production at the end of quarters, and we use this group's output to control for temporal variation not attributable to the incentive contract, such as customer buying cycles.
Our results suggest that salespeople respond rationally to incentives because individuals work harder when they have more to gain by doing so. They also suggest that the widespread practice of using lump-sum bonuses may not be as detrimental to firms as some believe because their primary effect can be to motivate people to work harder rather than to play timing games.
An extensive theoretical literature in marketing and economics, usually focused on finding an optimal incentive contract under a given set of conditions, explores how various incentive contracts affect worker motivation. Basu et al. (1985), Rao (1990), Lal and Srinivasan (1993), Joseph and Thevaranjan (1998), Gaba and Kalra (1999), and Godes (2004), among others, examine issues directly related to sales incentive strategy. Several of these studies examine how sales incentive contracts influence effort, but none explore timing effects. Gaba and Kalra's (1999) experimental evidence supports theoretical predictions about how salespeople should respond to lump-sum bonuses, but they focus on whether salespeople should engage in more risky selling behavior rather than whether salespeople should put forth more effort.
Chevalier and Ellison (1997) suggest that a relatively small empirical literature on how people respond to incentives exists because the direct observation of incentive contracts is rare. Coughlan and Sen (1986), John and Weitz (1989), Coughlan and Narasimhan (1992), and Misra et al. (2005) explore sales force incentive issues using survey data, but focus on firms' decisions (e.g., what mix of salary and incentive to offer) rather than the behavior of salespeople. Banker et al. (2000) and Lazear (2000) find, respectively, that salespeople and factory workers increase productivity in response to pay-for-performance incentive contracts. These studies, being based on piece-rate incentive contracts that should curb such behavior, do not explore timing effects. Healy (1985) finds that managers alter accrual decisions (a timing effect) in response to their incentive contracts, but does not examine how these contracts affect the managers' productivity. Our study provides a more comprehensive view of behavior by examining workers' effort and timing decisions under a directly observed incentive contract.
Institutional details
The focal firm is a Fortune 500 company that manufactures, sells, finances, and maintains durable office products. Its products range in complexity from relatively simple machines that sit on a desktop to fairly sophisticated ones that fill a room. Prices range from less than one thousand dollars to several hundred thousand dollars per machine. In addition to its physical products, the firm offers services such as equipment maintenance, labor outsourcing, and systems consulting. The firm's customers include major corporations, small businesses, and government agencies.
The firm directly employsFootnote 4 the salespeople in this study, and it broadly classifies them as either account managers or product specialists. The account managers are responsible for selling basic products and for spotting opportunities in which the product specialists may be able to sell more sophisticated ones. There are several types of product specialists, each having distinct product-line expertise. Organizationally, the account managers make up one sales force, and the specialists are divided into the remaining sales forces by their product expertise. Although several salespeople may serve an account, each has unique responsibility and, as a rule, only one salesperson receives credit for the sale of a given product. The firm's culture frowns upon team compensation, and very few salespeople share territories.
The structure of the incentive contract, which is consistent across all of the sales forces, is outlined in Table 2. The salespeople's incentive pay is based on the amount of revenue that they produce for the firm. The contract includes three quarterly bonuses, a full-year bonus, a base commission rate, and an overachievement commission rate. The values of the commission rates and bonuses are common within a sales force, but vary across them. The sales quotas are specific to individual salespeople. The bonuses and tiered commission rates create a nonlinear relationship between the output of the salesperson and the incentive pay that they earn. Roughly half of the salespeople's pay is distributed through salary and the other half through incentives. We make no claim that this is an optimal incentive contract, but rather take it as given. Given the survey work of Joseph and Kalwani (1998), this structure appears to represent what is commonly found in practice.
Table 2 Elements of the incentive contract
The firm views a salesperson as having had a successful year if the full-year sales quota has been met, and the incentive contract places the greatest emphasis on this target. The sum of the three quarterly bonuses is worth just slightly more than the single full-year bonus, and the overachievement commission rate further emphasizes its importance to the firm. Long-term incentives outside of the sales incentive contract, such as promotions to better job assignments, grade-level increases, and salary increases, also depend in part on whether the full-year quota has been met. These extra-contractual incentive decisions do not depend on the satisfaction of quarterly quotas.
Preliminary aggregate analysis
Taking a preliminary view of the problem, we estimate a model based on sales-force level data. The intent of this analysis is twofold. First, it helps explain why Oyer's (1998) results, which are based on a dataset in which the incentive contracts are not directly observed, do not necessarily provide evidence of timing games. Specifically, we show that we cannot draw meaningful conclusions about whether gains in revenue at the fiscal-year end exceed losses in the subsequent period unless we also account for the effects of bonuses from interim periods (such as quarterly bonuses). Second, this analysis produces results based on "aggregate" dataFootnote 5 that can be compared with results based on individual-level data in a more refined analysis.
Our study is based on 2,570 salespeople who worked in one of six sales forces.Footnote 6 The data consist of 50,106 monthly observations taken from January 1999 to December 2001. The maximum number of observations per individual is 36 and the average number is 19.5. Each month of the observation period, we observe the actual revenue that an individual produces for the firm, the associated sales quota or quotas that need to be met, and the individual's tenure with the firm (measured by the number of months that a salesperson has been employed).
Summary information about the sales forces is reported in Table 3. Descriptive statistics include the number of individuals, the average tenure, and the average, 10th and 90th percentile sales quotas for each sales force. Account managers (AM) represent more than half of the salespeople in the study. Individuals in this group tend to have lower sales quotas than the product specialists (PS1–PS5) do because they sell the most basic products offered by the firm. While the account managers also tend to have less sales experience, they are not entry-level salespeople. Their average tenure with the firm is over six years, and most individuals have outside experience in sales before joining the company. The wide spread between the 10th and the 90th percentile sales quotas is due to a significant difference in the sales potential of individual sales territories.
Table 3 Descriptive statistics for the sales forces
Observing the incentives and the revenue production of individual salespeople is not enough to determine whether an incentive contract is causing the temporal variation in output. We need to control for the possibility that customer behavior explains the peaks and dips in revenue production rather than strategic changes in the salespeople's actions. For example, suppose the firm's customers tend to delay spending until the last month of every quarter. The spikes and dips in production might then be attributed to market behavior rather than the salespeople's response to the incentive plan.
We use the revenue produced by the indirect sales channel as a covariate to control for this possibility. The member firms of the indirect channel are compensated such that the variation in their output over time can be attributed to market forces rather than to their incentive contracts. In the first two years of the study, firms worked under a flat commission rate, while in the final year they worked under a flat commission rate in conjunction with a monthly bonus. Thus, agents in this channel lack incentive to lump production at the end of quarters.
Empirical analysis
To analyze the data, we estimate the model
$$y_{st} = \alpha + \sum_{s = 1}^{5} D_{s}\,\alpha_{s} + X_{t}\beta + \varepsilon_{st}, \qquad \varepsilon_{st} \sim N\left(0, \sigma^{2}\right)$$
The dependent variable $y_{st}$ represents the log of the average revenue produced by the salespeople in sales force $s$ in month $t$. We use log revenue so the effects are proportional rather than additive. $D_s$ is a dummy variable that takes the value one for product specialist $s \in \{1, \ldots, 5\}$; this yields sales-force-specific intercepts. $X_t$ is composed of the following explanatory variables: FY is a dummy variable that takes the value one in December, the month before the full-year bonus period closes. POST.FY takes the value one in January, the month after the full-year bonus period closes. These variables are zero in all other months. Similarly, Q is a dummy variable that takes the value one in March, June, and September, the months before the quarterly bonus periods close. POST.Q takes the value one in April, July, and October. These variables are zero in all other months. CONTROL.SALESFootnote 7 is the average sales revenue generated by the indirect sales channel, the control population. This variable is standardized for ease of interpretation.
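As a rough illustration of how the preliminary model above could be estimated, the sketch below builds the bonus-period dummies and fits the log-revenue regression by ordinary least squares. The synthetic panel and its column names are hypothetical stand-ins; the paper's estimates come from the firm's proprietary records.

```python
# Minimal sketch of the sales-force-by-month regression on simulated stand-in data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = pd.DataFrame(
    [(s, t) for s in range(6) for t in range(1, 37)],
    columns=["sales_force", "t"],
)
months["month"] = ((months["t"] - 1) % 12) + 1
months["control_sales"] = rng.normal(size=len(months))
months["avg_revenue"] = np.exp(11 + 0.2 * (months["month"] == 12) + rng.normal(scale=0.3, size=len(months)))

def add_bonus_period_dummies(df):
    df = df.copy()
    df["log_rev"] = np.log(df["avg_revenue"])
    df["FY"] = (df["month"] == 12).astype(int)          # month before the full-year close
    df["POST_FY"] = (df["month"] == 1).astype(int)      # month after the full-year close
    df["Q"] = df["month"].isin([3, 6, 9]).astype(int)   # last months of Q1-Q3
    df["POST_Q"] = df["month"].isin([4, 7, 10]).astype(int)
    # Standardize the indirect-channel control series, as in the text.
    df["control_sales"] = (df["control_sales"] - df["control_sales"].mean()) / df["control_sales"].std()
    return df

panel = add_bonus_period_dummies(months)
fit = smf.ols("log_rev ~ C(sales_force) + FY + POST_FY + Q + POST_Q + control_sales", data=panel).fit()
print(fit.params)
```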
We present results for the model that includes only year-end effects in Table 4. The revenue production at the end of the bonus period marginally increases ($\beta_{FY} = 0.1782$), signifying that the incentive plan has a positive influence on the salespeople's behavior, but it is exceeded in magnitude by the decrease in revenue after the bonus period ends ($\beta_{POST.FY} = -0.3393$). The coefficient for the year-end increase is not statistically significant. From a broad perspective, these estimates appear similar to Oyer's and may lead us to conclude that the incentive contract encourages salespeople only to forward sell, not to work harder.
Table 4 Preliminary analysis assuming year-end effects only
Given that the quarterly bonuses are of lesser value than the full-year bonus, we might not expect their inclusion to make a significant impact. Yet, when looking at the results in Table 5 for the model including both quarterly and full-year effects, the picture is now quite different. The positive effects during the bonus periods outweigh the negative effects afterwards. In fact, we do not find evidence of a dip in revenue after the quarterly bonus periods end, as $\beta_{POST.Q}$ is insignificant. For the full-year bonus, the dip in revenue after the period explains 41% of the increase in revenue during the period.Footnote 8 These results suggest the primary influence of the incentive contract is to encourage salespeople to work harder, not to play timing games.
Table 5 Preliminary analysis with effects for all bonus periods
What causes the results to change so dramatically? The baseline sales level is overestimated by omitting the quarterly effects because the productive increases in revenue at quarters' end are not followed by counterproductive decreases. Since the quarterly effects do not merely cancel each other out, but rather are positive, the intercept of the log of revenue drops from 11.6 to 11.4 when we include them. As can be seen in Fig. 1, this affects the parameter estimates and changes our interpretation of the year-end effects. By not accounting for the quarterly bonuses, we underestimate the spike in revenue caused by the full-year bonus and overestimate the dip in revenue following it.
Fig. 1 Preliminary model comparison
The preliminary analysis illustrates the need for careful modeling, but it brings up as many questions as it answers. Most importantly, we have not accounted for differences in individuals' circumstances that may have an equally important impact on our results. For example, while an individual who has already made a bonus is encouraged to delay sales, an individual who has not yet made it is encouraged to forward sell. Do these effects cancel one another out or does one effect tend to dominate? How does this affect our analysis of whether effort or timing effects are more important? We now discuss how an individual's sales history can influence her or his actions and build an individual-level model to capture these effects.
Theoretical motivation
Principal-agent models give us an appreciation of how individuals respond to various circumstances. In this section, we discuss the ways we would anticipate a salesperson responding given various levels of accumulated sales within a bonus period. Specifically, we focus on how past performance influences an individual's decision to work and to play timing games. Our conclusions will suggest that an accurate decomposition of effort and timing effects cannot be made without accounting for individual-level behavior. This motivates the development of a statistical model based on individual-level data.
Lal and Srinivasan (1993) point out that past performance influences the level of effort exerted when a salesperson is working under a bonus contract. A simple example helps clarify this relationship. Consider a salesperson who is working to achieve a quarterly bonus. Each month she has the opportunity to sell one unit of a good. By working harder she increases the probability of a sale, but greater effort comes at an increasing marginal cost. Let $\theta_t$ be the probability of a successful sale and $\theta_t^2/2$ be the associated cost of effort in month $t \in \{1, 2, 3\}$. Suppose that the salesperson's utility for wealth is $u(w)$. Suppose further that the firm offers a salary of $a$ no matter what the salesperson produces and a bonus of $b$ if the salesperson meets or exceeds a quota of $q = 2$ units. Let $\Delta \equiv u(a + b) - u(a)$ be the difference in utility between earning and not earning the bonus without regard to the cost of effort.
Figure 2 illustrates how the salesperson's past performance affects the level of effort exerted in the final month of the quarter. First, consider a salesperson who does not complete a sale in either of the first two months. She has no chance of making her quota and earning the bonus; consequently, she chooses not to work in month three, a marginal decrease in effort from the second period level. Next, consider a salesperson who completes sales in both of the first two months. She has already made quota and earned the bonus; consequently, she also chooses not to work in month three, a marginal decrease in effort from the second period level. Finally, consider a salesperson who completes one sale in the first two periods. The third period provides the final opportunity for her to make quota; consequently, she marginally increases effort from the second period level. (Proof in Appendix I.)
Fig. 2 Effort in month three given accumulated sales
Despite being simple, this model provides the basic intuition of how individuals vary effort when working under a bonus contract. As illustrated in Fig. 2, those who are within reach of the bonus work harder; those who have already earned the bonus relax; those who cannot earn the bonus give up.
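A small numerical illustration of these comparative statics evaluates the closed-form effort levels derived in Appendix I at an arbitrary value of $\Delta$; the choice $\Delta = 0.6$ is purely illustrative and any value in $(0, 1]$ gives the same qualitative pattern.

```python
# Optimal effort in the quota example of Appendix I at an illustrative Delta = u(a+b) - u(a).
delta = 0.6  # arbitrary value in (0, 1]

effort_month2 = {
    "one sale after month 1": delta - delta**2 / 2,
    "no sale after month 1": delta**2 / 2,
}
effort_month3 = {
    "quota already met (2 sales)": 0.0,   # coasts: the bonus is already earned
    "one sale, quota still in reach": delta,  # stretches: works hardest now
    "no sales, quota unreachable": 0.0,   # gives up
}

for label, theta in {**effort_month2, **effort_month3}.items():
    print(f"{label}: effort = {theta:.3f}")
```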
We summarize the predictions of how salespeople will behave and the corresponding influence of this behavior on revenue production (which we observe in the data) as follows:
Suppose a lump-sum bonus is the only incentive offered for quota attainment. In the final month of the bonus period:
salespeople who can make quota if they stretch will increase effort and their revenue production will marginally increase.
salespeople who either
have already made quota, or
are unlikely to make quota
will decrease effort and their revenue production will marginally decrease.
The firm analyzed in this paper offers an overachievement commission rate in conjunction with the full-year bonus. This commission rate will modify how a salesperson who has already made quota behaves, but it will not influence the other salespeople. Returning to the previous example, suppose the firm offers an additional incentive c if the salesperson sells one unit more than her quota. She now will exert positive effort in the third month if she sold a unit in each of the first two periods, but she still exerts no effort if she did not sell a unit in each of the first two periods. (See Fig. 3) Given the overachievement commission rate, we make no prediction about whether salespeople who have met quota will marginally increase or decrease effort.
Fig. 3 The effect of an overachievement commission rate
Timing games
Just as past performance influences how hard an individual is willing to work for a bonus, it affects the types of timing games that he or she plays with orders. Oyer (1998) builds a simple theoretical model to predict how individuals manipulate the timing of sales, essentially showing that salespeople will pull in orders from future periods if they would otherwise fall short of a sales quota and they will push out orders to future periods if quotas are either unattainable or have already been achieved. The timing-game predictions correspond to the effort predictions as follows:
salespeople who can make quota if they stretch will pull in sales from future periods. Their revenue production will marginally increase in the month before and will marginally decrease in the month after the bonus period closes.
will push out sales to future periods. Their revenue production will marginally decrease in the month before and will marginally increase in the month after the bonus period closes.
The timing-game predictions raise the issue of whether it is even possible to decompose the effort and timing effects using aggregate data. For instance, suppose one group of salespeople is forward selling and another, of equal size, is delaying sales. In aggregate, we would see no change in output, as the spikes in output of one group are perfectly balanced by the dips in output of another. Orders are being moved across periods, but we cannot identify the counterproductive behavior from the data because they move equally in both directions. We now turn to developing a statistical model that takes into account an individual's distance from quota so as to accurately identify the timing and effort effects.
Model development
Defining the sales history variables
The theoretical discussion highlights why we need to account for past performance if we are to accurately decompose the effort and timing effects. The implementation of this, however, is made difficult by the nonlinear relationship between past performance and how an individual behaves. For example, if prior outcomes are poor, the salesperson reduces effort near the end of a bonus period. If he or she is within striking distance of quota, the salesperson increases effort. Yet, if the quota has already been made, the salesperson reduces effort.
We use categorical variables to capture how past performance affects an individual's revenue production. The variables are created using the individuals' performance to date (PTD) against quota immediately prior to the final month of a bonus period. For every month that a salesperson works, we observe the sales quota or quotas that need to be met and the actual amount of revenue produced by the individual. An individual's PTD is defined as the ratio of cumulative revenue produced in a bonus period to the quota that needs to be met. For example, if a salesperson's first-quarter quota is $400K and she has produced $200K in total at the end of February, the PTD is 50% against the first quarter quota at that point in time.
Two sets of categorical variables are needed to capture the effects of sales history on revenue production: one set of variables for the month before a bonus period and one set for the month following a bonus period. The categorical variables are: EXCEEDED, NEAR, STRETCH, FAR, and REMOTE in the month before the end of an incentive period; and POST.EXCEEDED, POST.NEAR, POST.STRETCH, POST.FAR, and POST.REMOTE in the month after it. (Note: we add two additional categories, VERY.FAR and POST.VERY.FAR, for the full-year bonus period because distribution of past performance is wider.) We refer to these as the sales history variables and their definitions, which are based on the PTD measure, are given in Table 6. We estimate the quarterly and full-year effects separately because the amount of compensation at stake is greater at the end of the year than it is at the end of a quarter. The observed frequency of occurrence for each of the categories is given in Fig. 4.
Fig. 4 Observed frequency of categories
Table 6 Definition of Sales History Variables
An example clarifies how these variables are defined. Suppose a salesperson has done very well and her PTD is 120% at the end of February. In March, the variable EXCEEDED associated with the quarterly quotas takes the value one and variables NEAR, STRETCH, FAR, and REMOTE take the value zero for this salesperson. In April, all of the aforementioned variables take the value zero; the variable POST.EXCEEDED associated with the quarterly quotas takes the value one; and the variables POST.NEAR, POST.STRETCH, POST.FAR, and POST.REMOTE take the value zero. (The POST variables take the value zero in March, and all of the variables take the value zero in months not surrounding a quarterly bonus.) A similar process is used to define the quarterly variables in June, July, September, and October and to define the full-year variables in December and January.
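To make the mechanics concrete, the sketch below maps a PTD value into a sales-history category. The cutoff values are illustrative placeholders only, since Table 6 is reproduced here as a caption; they are not the paper's definitions.

```python
# Sketch: mapping performance-to-date (PTD) into a sales-history category.
# The cutoff bands below are placeholders, NOT the definitions in Table 6.
def ptd(cumulative_revenue: float, quota: float) -> float:
    """Performance to date: cumulative revenue in the bonus period over quota."""
    return cumulative_revenue / quota

def sales_history_category(ptd_value: float) -> str:
    if ptd_value >= 1.00:
        return "EXCEEDED"
    elif ptd_value >= 0.90:
        return "NEAR"
    elif ptd_value >= 0.60:
        return "STRETCH"
    elif ptd_value >= 0.30:
        return "FAR"
    else:
        return "REMOTE"

# Example from the text: $200K produced against a $400K quarterly quota -> PTD = 0.5
print(sales_history_category(ptd(200, 400)))  # -> "FAR" under these placeholder bands
```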
How do we know whether individuals are playing timing games or exerting greater effort? Timing games imply that salespeople move orders from one period to the next. Subsequently, spikes (dips) in revenue production in the month prior to the close of a bonus period are followed by equivalent dips (spikes) in production in the month after it. On the other hand, if the salespeople are just varying effort, spikes or dips in production exist in the month prior to the close of an incentive period, but not in the month after it. In other words, we infer whether timing games are being played by the sign of the coefficient of the POST variables.
The regression model
We model the revenue production of salesperson i from sales force s in month t as follows:
$$y_{sit} = \alpha_{si} + X_{sit}\beta_{s} + \varepsilon_{sit}, \qquad \varepsilon_{sit} \sim N\left(0, \sigma_{si}^{2}\right)$$
$$\alpha_{si} \sim N\left(\xi_{s}, \sigma_{\alpha s}^{2}\right)$$
$$\xi_{s} \sim N\left(\gamma, \sigma_{\xi}^{2}\right)$$
$$\beta_{s} \sim MVN_{p}\left(\delta, \Sigma\right)$$
where $s = 1, \ldots, 6$; $i = 1, \ldots, n_s$; $t \in \{1, \ldots, 36\}$. An individual is identified by two subscripts, $s$ and $i$, in this notation. The constant $n_s$ denotes the number of individuals in sales force $s$. The month $t$ refers to a specific calendar month; this is necessary to identify the market sales, a variable in the vector $X_{sit}$. A salesperson's output is measured in thousands of dollars of revenue produced for the firm. The variance of the error term is assumed to be individual specific. (See Appendix II for the full conditional posterior distributions.)
Differences among the individual salespeople are accounted for through the random intercepts $\alpha_{si}$. Since individuals within a sales force have many common characteristics—for example, they sell the same types of products, share common managers, undergo similar training, etc.—we model the intercepts as arising from a sales-force-specific distribution. In turn, the means of the sales-force-specific distributions, $\xi_s$, are modeled as arising from a common population distribution. The intercepts $\alpha_{si}$ are interpreted as an individual's baseline revenue production.
The vector of explanatory variables, $X_{sit}$, includes tenure with the firm, market sales (measured by the revenue produced in the indirect channel), and the categorical variables describing an individual's sales history at that point in time. The sales-force-specific parameters $\beta_s$ quantify the influence of these variables. Since the sales history variables are categorical, we can interpret the coefficients associated with these variables as marginal changes in an individual's revenue production from her or his baseline. We model the parameters $\beta_s$ as arising from a common population distribution. Our specification allows us to draw inference at both the sales force and population levels.
We decompose the marginal changes in revenue production into effort and timing-game components using the following relationships: Let Δ be the marginal change in revenue production attributable to effort and let Λ be the change attributable to timing games. For any given sales history, say for individuals in the STRETCH classification, Δ and Λ are defined as:
$$\Delta^{STRETCH} = \delta^{STRETCH} + \delta^{POST.STRETCH}, \qquad \Lambda^{STRETCH} = -\delta^{POST.STRETCH}.$$
It is straightforward to find these quantities through the Markov chain Monte Carlo (MCMC) output.
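As an illustration of this decomposition, given posterior draws of the population-level coefficients, the effort and timing components for the STRETCH history can be computed draw by draw; the arrays below are hypothetical stand-ins for the sampler's output, not the paper's draws.

```python
# Sketch: decomposing the marginal revenue change into effort and timing components
# from hypothetical MCMC draws of the population coefficients.
import numpy as np

rng = np.random.default_rng(2)
n_draws = 5000
draws = {  # stand-in posterior draws; real ones come from the Gibbs sampler
    "STRETCH": rng.normal(40.0, 10.0, n_draws),
    "POST.STRETCH": rng.normal(-5.0, 10.0, n_draws),
}

effort_stretch = draws["STRETCH"] + draws["POST.STRETCH"]  # Delta^STRETCH
timing_stretch = -draws["POST.STRETCH"]                    # Lambda^STRETCH

for name, component in [("effort", effort_stretch), ("timing", timing_stretch)]:
    lo, hi = np.percentile(component, [5, 95])
    print(f"{name}: posterior mean {component.mean():.1f}, 90% interval ({lo:.1f}, {hi:.1f})")
```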
We summarize the results from Eq. 2 using the mean and standard deviation of the posterior distributions. The population-level results are reported in Table 7 and the sales-force-level results in Table 8. The incentive contracts generally motivate salespeople to produce more revenue during the bonus period. See the EXCEEDED coefficients, for example, in Table 7. We now turn to discussing whether effort or timing games lead to the increases.
Table 7 Population parameter estimates
Table 8 Sales force parameter estimates
Very limited support exists for the idea that the salespeople play timing games in response to bonuses at this firm. When considered individually, none of the POST variables are statistically significant at the population level (see Table 7). This holds for both the quarterly and the full-year bonus periods. We also consider the weighted-average of the post-period effects, where the weights are determined by the observed frequency of a given sales history. When taken as a group, the 90% credible intervals of the weighted means are (−1.9, 8.2) for the quarterly effects and (−12.3, 0.3) for the full-year effects. Since both intervals contain zero, no support exists for timing games on this measure either.
This is surprising for a few reasons. Salespeople who sell durable goods should be able to influence the timing of sales more directly than their consumer goods counterparts because each sale requires considerable time and intense customer contact. We would expect that these salespeople would have some ability to manipulate the timing of business. Second, a sizeable portion of the focal firm's business comes from customers trading in old equipment. This should make it easier for salespeople to delay the timing of sales because not all customers have a pressing need for new equipment.
Two obstacles may prevent these salespeople from playing timing games. First, managers have regular one-on-one meetingsFootnote 9 to discuss where in the sales cycle all prospective customers are. This form of monitoring may make it difficult to delay the close of business because managers can infer delay tactics when future sales arrive. Furthermore, many of the managers have worked their way up through the ranks and have established personal relationships in their salespeople's accounts. If they suspect an employee is delaying orders, they may be able to directly contact customers and learn when the salesperson initiated the sales process. A monitoring explanation, however, does not account for why salespeople do not appear to be forward selling. Sales managers have no incentive to prevent this behavior, but we find no evidence of it either.
An explanation more consistent with the data is that the customers prevent timing games from being played in this industry. Spikes in market sales during the final month and dips during the first month of bonus periods bolster this idea. (The average values of the standardized CONTROL.SALES variable are 0.669 for the final months of a quarter and 1.61 for the final month of the year, whereas they are −0.430 for the first month of a quarter and −1.40 for the first month of the year.) Recall that the CONTROL.SALES variable was taken from an indirect channel that has no incentive to manipulate the timing of sales. A plausible explanation of the spikes and dips in these data is that customers require sales to close according to their own needs, perhaps making purchases only when enough money is available in their budgets at the end of a quarter. If this is the case, then salespeople face the prospect of either closing sales when the customers want them to close or losing them entirely, which precludes the salespeople from moving business across periods.
Support does exist for the idea that bonuses motivate salespeople to vary effort, and, on the whole, they motivate salespeople to work harder. Considered individually, the EXCEEDED and NEAR coefficients are positive and statistically significant for the quarterly periods, and the EXCEEDED, NEAR, STRETCH, and FAR coefficients are positive and statistically significant for the full-year period (see Table 7). Taken as a group, the 90% credible intervals for the weighted means are (4.6, 16.6) for the quarterly periods and (52.2, 73.0) for the full-year period. As both these intervals are strictly positive and all of the POST coefficients are insignificant, we claim that the incentive contract tends to motivate salespeople to work harder.
This is not to say that the bonuses only have productive effects. While the coefficients are not statistically significant, the estimates are negative for both of the REMOTE categories. This suggests that salespeople give up if they feel that they cannot make the quota. Even if we cannot interpret this as a marginal decrease in effort, we can certainly claim that these salespeople do not increase effort in an attempt to earn greater incentives. This supports the idea that salespeople react to the incentive contract in a rational manner.
How do the results based on individual output compare to the preliminary results based on sales-force output? The individual-level results provide even less evidence of timing games. The results for the quarterly bonus periods are consistent across the two analyses, and neither suggests that timing games occur. Spikes in revenue production at the end of the quarterly bonus periods are not followed by counterproductive dips in the subsequent period. The results for the full-year bonus period, however, are less consistent across the two analyses, and the individual-level results provide less evidence that the salespeople are playing timing games. In the preliminary analysis, we find evidence of forward selling because the spikes in revenue production at the end of the full-year bonus period are followed by dips in production in the subsequent period. In the individual-level analysis, we do not find statistically significant evidence of forward selling. Even if we were to use the weighted mean of the POST effects as a point estimate of the forward selling effects, it explains very little of the spike in revenue production. Since the weighted mean of the POST effects is −6.3, and the weighted mean of the bonus period effects is 62.0, we would estimate that about 10% of the increase is due to forward selling by this method.
Taken altogether, our results suggest that individual-level data are needed to determine the magnitude of timing and effort effects. As was seen in the preliminary analysis, the baseline sales level is crucial in accurately decomposing effort and timing effects, and the most appropriate baseline is an individual's sales level. Not accounting for heterogeneity in the intercepts is bound to bias the analysis. Furthermore, an individual's sales history determines which timing game is in her or his self-interest, and this history is lost if the data are aggregated.
Conclusions and future research
In this paper, we find that lump-sum bonuses motivate salespeople to work harder, not to play timing games—a result that is consistent with the widespread use of lump-sum bonuses in practice. This is not to suggest that lump-sum bonuses have no counterproductive effects. We find that bonuses cause some salespeople, those who are unlikely to make quota, to reduce effort, but this effect is more than compensated for by productive increases in output by other salespeople. Our results are based on a unique data source that contains the revenue production of individual salespeople. Using these data, we bring into question whether models based on aggregate data sources can accurately decompose effort and timing effects and cast doubt on previous findings that suggest the primary effect of lump-sum bonuses is to induce salespeople to play timing games.
This study also provides a basis for future research. We are currently addressing the issue of how firms should design optimal incentive contracts—combining sales quotas, bonuses, and commission rates to effectively motivate their sales forces. This and other studies that explore policy variation need to make assumptions about how individuals will behave when policies are changed. Our current findings suggest that salespeople will alter how hard they work, but will not manipulate the timing of orders in response to incentive contracts. Having identified the key ingredients to a structural model of salespeople's behavior, we can now pursue questions of how to effectively motivate them.
Joseph and Kalwani (1998) also find that 35% of firms include both bonuses and commission rates and 5% offer salary alone.
This argument is not limited to bonuses; other nonlinear incentive contracts, such as tiered commission rates, share the disadvantage of not offering constant motivation to work.
At best, it may be possible to observe the effect of the dominant behavior, either delayed or forward selling, on sales using aggregate data; still, given that these two behaviors produce opposing effects, it may also be possible that both behaviors occur, but their effects on sales cancel out when the data are aggregated.
An indirect sales channel exists to reach small and rural accounts. It is composed of roughly eight hundred smaller firms that resell the focal firm's products through "arm's length" transactions. The focal firm, for example, cannot directly compensate the salespeople that work in the indirect channel.
We use data that are averaged across individuals in the sales forces, rather than sales force aggregates, in order to control for differences in population size. Sales force aggregates would be sensitive to the number of people working at any given time.
Salespeople who worked in teams, with two or more people sharing quota responsibility and pooling the revenue for a given territory, were excluded from the study as these individuals' incentives might differ from those of the general population owing to the free-riding opportunity.
For future research, additional controls might be found by tracking customers' fiscal calendars, which could be used to infer their buying cycles. These data, however, were not available for this analysis.
The proportion of the spike in revenue during the bonus period explained by the dip in revenue afterwards is calculated as $\left|e^{\beta_{POST.FY}} - 1\right| / \left(e^{\beta_{FY}} - 1\right)$.
These meetings occur at least monthly and sometimes weekly.
"Finding the Compensation Plan that Works Best," Agency Sales Magazine (September 2001)
"Inspire Your Employees: Give Them Bonuses," Bottomline (October 1986)
Banker, R. D., Lee, S. Y., Potter, G., & Srinivasan, D. (2000). An empirical analysis of continuing improvements following the implementation of a performance-based compensation plan. Journal of Accounting and Economics, XXX(3), 315–350 (December).
Basu, A. K., Lal, R., Srinivasan, V., & Staelin, R. (1985). Sales force compensation plans: an agency theoretical perspective. Marketing Science, IV(4), 267–291.
Chevalier, J., & Ellison, G. (1997). Risk taking by mutual funds as a response to incentives. Journal of Political Economy, CV(6), 1167–1200.
Churchill, G. A., Ford, N. M., Walker, O. C., Johnston, M. W., & Tanner, J. F. (2000). Sales force management (6th ed.). Irwin/McGraw-Hill.
Coughlan, A. T., & Narasimhan, C. (1992). An empirical analysis of sales-force compensation plans. Journal of Business, LXV(1), 93–121.
Coughlan, A. T., & Sen, S. K. (1986). Sales force Compensation: Insights from Management Science. Marketing Science Institute Report No. 86–101, Cambridge, Massachusetts.
Darmon, R. (1997). Selecting appropriate sales quota plan structures and quota-setting procedures. Journal of Personal Selling and Sales Management, XVII(1), 1–16.
Gaba, A., & Kalra, A. (1999). Risk behavior in response to quotas and contests. Marketing Science, XVIII(3), 417–434.
Godes, D. (2004). Contracting under endogenous risk. Quantitative Marketing and Economics, II(4), 321–345.
Gupta, S. (1988). The impact of sales promotions on when, what and how much to buy. Journal of Marketing Research, XXV(4), 342–355.
Healy, P. M. (1985). The effect of bonus schemes on accounting decisions. Journal of Accounting and Economics, VII, 85–107.
Holmstrom, B., & Milgrom, P. (1987). Aggregation and linearity in the provision of intertemporal incentives. Econometrica, LV, 303–328 (March).
Hull, C. L. (1932). The goal-gradient hypothesis and maze learning. The Psychological Review, 39, 25–43.
Hull, C. L. (1938). The goal-gradient hypothesis applied to some 'Field-Force' problems in the behavior of young children. The Psychological Review, XLV(4), 271–298.
Jensen, M. C. (2003). Paying people to lie: the truth about the budgeting process. European Financial Management, IX, 379–406.
John, G., & Weitz, B. (1989). Sales force compensation: an empirical investigation of factors related to use of salary versus incentive compensation. Journal of Marketing Research, XXVI, 1–14.
Joseph, K., & Kalwani, M. (1998). The role of bonus pay in sales force compensation plans. Industrial Marketing Management, XXVII, 147–159.
Joseph, K., & Thevaranjan, A. (1998). Monitoring incentives in sales organizations: an agency-theoretic perspective. Marketing Science, XVII(2), 107–123.
Lal, R., & Srinivasan, V. (1993). Compensation plans for single-and multi-product sales forces: an application of the Holmstrom-Milgrom Model. Management Science, XXXIX(7), 777–793.
Latham, G. P., & Locke, E. A. (1991). Self-regulation through goal setting. Organizational Behavior and Human Decision Processes, L, 212–247.
Lazear, E. P. (2000). Performance pay and productivity. American Economic Review, XC(5), 1346–1361.
Mace, C. A. (1935). Incentives: Some experimental studies. Report No. 72 (Great Britain: Industrial Health Research Board).
Mantrala, M. K., Sinha, P., & Zoltners, A. A. (1994). Structuring a multiproduct sales quota-bonus plan for a heterogeneous sales force: a practical model based approach. Marketing Science, XIII(2), 121–144.
McFarland, R. G., Challagalla, G. N., & Zenor, M. J. (2002). The effect of single and dual sales targets on sales call selection: quota versus quota and bonus plan. Marketing Letters, XIII(2), 107–120.
Misra, S., Coughlan, A. T., & Narasimhan, C. (2005). Salesforce compensation: an analytical and empirical examination of the agency theoretic approach. Quantitative Marketing and Economics, III(1), 5–39.
Moynahan, J. K. (1980). Designing an effective sales compensation program. New York, NY: AMACOM.
Oyer, P. (1998). Fiscal year ends and nonlinear incentive contracts: the effect on business seasonality. Quarterly Journal of Economics, CXIII(1), 149–185.
Rao, R. (1990). Compensating heterogeneous sales forces: some explicit solutions. Marketing Science, IX(4), 319–341.
Steenburgh, T. J. (2007). Measuring consumer and competitive impact with elasticity decompositions. Journal of Marketing Research, XLIV(4), 636–646.
Van Heerde, H. J., Gupta, S., & Wittink, D. (2003). Is 75% of the sales promotion bump due to brand switching? No, only 33% is. Journal of Marketing Research, XL(4), 383–395.
Harvard Business School, Soldiers Field, Boston, MA, 02163, USA
Thomas J. Steenburgh
Correspondence to Thomas J. Steenburgh.
Thomas J. Steenburgh is an Associate Professor at the Harvard Business School. He would like to thank Andrew Ainslie, Subrata Sen, and K. Sudhir for comments and suggestions that greatly improved the quality of this article. He is especially grateful for the advice and encouragement of his late thesis advisor, Dick Wittink. The author, of course, is solely responsible for remaining errors.
Appendix I. The effort model
First, let us consider a salesperson who has been successful in the first two periods. This person's expected utility is
$$\begin{aligned}& \theta_3\left[u(a + b) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] + \left(1 - \theta_3\right)\left[u(a + b) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] \\& \quad = u(a + b) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2 \end{aligned}$$
because the bonus is earned whether the salesperson is successful or not. Taking the first derivative of expected utility with respect to $\theta_3$ results in the first-order condition that $\theta_3 = 0$. No additional gain comes from working, so the salesperson chooses not to do so. Letting $\theta_{3|S_2 = s}$ represent the effort put in the third period if the salesperson's accumulated sales after the second period is $s$, we find $\theta_{3|S_2 = 2} = 0$. The salesperson's expected utility is $u(a + b) - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2$ if this decision node is reached.
A similar argument holds for a salesperson who has not completed a sale in the first two periods. This person's expected utility is
$$u(a) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2$$
because the bonus is not earned whether the salesperson is successful in the third period or not. Thus, $\theta_{3|S_2 = 0} = 0$ and the salesperson's expected utility is $u(a) - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2$ if this decision node is reached.
Now, let us consider a salesperson who has completed one sale after two periods. This person's expected utility is
$$\begin{aligned}& \theta_3\left[u(a + b) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] + \left(1 - \theta_3\right)\left[u(a) - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] \\& \quad = u(a) + \theta_3\left[u(a + b) - u(a)\right] - \frac{\theta_3^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2 \end{aligned}$$
because the bonus is earned only if the salesperson is successful in the last period. Thus, the first-order condition for a maximum is $\left[u(a + b) - u(a)\right] - \theta_3 = 0$. For convenience, define the change in utility for earning the bonus as $\Delta = u(a + b) - u(a)$. Thus, $\theta_{3|S_2 = 1} = \Delta$ (positive effort is exerted to earn the bonus) and the salesperson's expected utility is $u(a) + \frac{1}{2}\Delta^2 - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2$ if this decision node is reached. We assume the firm chooses a bonus $b$ such that $\Delta \leqslant 1$; that is, the bonus is set at a reasonable, not an extraordinarily high, level. Otherwise the firm would be overpaying for the chance of a certain sale in this period. Since $a > 0$, $b > 0$, $0 < \Delta \leqslant 1$.
The question is how do the third period strategies compare to the second period strategies?
Let us first consider a salesperson who completed a sale in the first period. The expected utility of this person is
$$\begin{aligned}& \theta_2\left[u(a + b) - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] + \left(1 - \theta_2\right)\left[u(a) + \frac{1}{2}\Delta^2 - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] \\& \quad = u(a) + \theta_2\Delta + \frac{1 - \theta_2}{2}\Delta^2 - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2 \end{aligned}$$
The first order condition for a maximum is $\Delta - \frac{\Delta^2}{2} - \theta_2 = 0$, which implies $\theta_{2|S_1 = 1} = \Delta - \frac{\Delta^2}{2}$.
Now consider a salesperson who did not complete a sale in the first period. This person's expected utility is
$$\begin{aligned}& \theta_2\left[u(a) + \frac{1}{2}\Delta^2 - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] + \left(1 - \theta_2\right)\left[u(a) - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2\right] \\& \quad = u(a) + \frac{\theta_2\Delta^2}{2} - \frac{1}{2}\sum_{t = 1}^{2}\theta_t^2 \end{aligned}$$
The first order condition for a maximum is $\frac{\Delta^2}{2} - \theta_2 = 0$, which implies $\theta_{2|S_1 = 0} = \frac{\Delta^2}{2}$.
Since $\theta_{3|S_2 = 1} = \Delta > \frac{\Delta^2}{2} = \theta_{2|S_1 = 0}$ and $\theta_{3|S_2 = 1} = \Delta > \Delta - \frac{\Delta^2}{2} = \theta_{2|S_1 = 1}$ when $0 < \Delta \leqslant 1$, the salesperson, if it is necessary to stretch to make the quota, marginally increases effort in the third period.
Since $\theta_{3|S_2 = 2} = 0 < \Delta - \frac{\Delta^2}{2} = \theta_{2|S_1 = 1}$, the salesperson, if the quota has already been made, marginally decreases effort in the third period.
Since $\theta_{3|S_2 = 0} = 0 < \frac{\Delta^2}{2} = \theta_{2|S_1 = 0}$, the salesperson, if the quota can no longer be made, marginally decreases effort in the third period.
Appendix II. The full conditional distributions
Assume conjugate prior distributions
$$\left[\sigma_{si}^{2}\right] = \left[\sigma_{\alpha s}^{2}\right] = \left[\sigma_{\xi}^{2}\right] = G\left(v_0/2, \lambda_0/2\right), \qquad \left[\Sigma^{-1}\right] = W\left(\rho_0 I_p, \rho_0\right), \qquad \left[\gamma\right] = N\left(0, \tau_0^{2}\right), \qquad \left[\delta\right] = MVN_p\left(0, T_0\right)$$
For notational convenience, define
$$\begin{aligned}d_{si} &= n_{si}\sigma_{si}^{-2} + \sigma_{\alpha s}^{-2}, & D_s &= \sum\nolimits_{i = 1}^{n_s}\sigma_{si}^{-2}X_{si}^{T}X_{si} + \Sigma^{-1},\\ d_s &= n_s\sigma_{\alpha s}^{-2} + \sigma_{\xi}^{-2}, & D &= c\,\Sigma^{-1} + T_0^{-1},\\ d &= c\,\sigma_{\xi}^{-2} + \tau_0^{-2}. & & \end{aligned}$$
The full conditional distributions resulting from these assumptions are
$$\begin{aligned}\left[\sigma_{si}^{-2} \mid y_{sit}, \alpha_{si}, \beta_s\right] &= G\left(\frac{v_0 + n_{si}}{2}, \frac{\lambda_0 + \sum\nolimits_{t = 1}^{n_{si}}\left(y_{sit} - \alpha_{si} - X_{sit}\beta_s\right)^2}{2}\right)\\ \left[\alpha_{si} \mid y_{sit}, \beta_s, \sigma_{si}^{-2}, \xi_s, \sigma_{\alpha s}^{-2}\right] &= N\left(d_{si}^{-1}\left[\sigma_{si}^{-2}\sum\nolimits_{t = 1}^{n_{si}}\left(y_{sit} - X_{sit}\beta_s\right) + \sigma_{\alpha s}^{-2}\xi_s\right], d_{si}^{-1}\right)\\ \left[\sigma_{\alpha s}^{-2} \mid \alpha_{si}, \xi_s\right] &= G\left(\frac{v_0 + n_s}{2}, \frac{\lambda_0 + \sum\nolimits_{i = 1}^{n_s}\left(\alpha_{si} - \xi_s\right)^2}{2}\right)\\ \left[\xi_s \mid \alpha_{si}, \sigma_{\alpha s}^{-2}, \gamma, \sigma_{\xi}^{-2}\right] &= N\left(d_s^{-1}\left[\sigma_{\alpha s}^{-2}\sum\nolimits_{i = 1}^{n_s}\alpha_{si} + \sigma_{\xi}^{-2}\gamma\right], d_s^{-1}\right)\\ \left[\sigma_{\xi}^{-2} \mid \xi_s, \gamma\right] &= G\left(\frac{v_0 + c}{2}, \frac{\lambda_0 + \sum\nolimits_{s = 1}^{c}\left(\xi_s - \gamma\right)^2}{2}\right)\\ \left[\gamma \mid \xi_s, \sigma_{\xi}^{-2}\right] &= N\left(d^{-1}\left[\sigma_{\xi}^{-2}\sum\nolimits_{s = 1}^{c}\xi_s\right], d^{-1}\right)\\ \left[\beta_s \mid y_{sit}, \alpha_{si}, \sigma_{si}^{-2}, \delta, \Sigma^{-1}\right] &= MVN_p\left(D_s^{-1}\left(\sum\nolimits_{i = 1}^{n_s}\sigma_{si}^{-2}X_{si}^{T}\left(y_{si} - \alpha_{si}\iota_{n_{si}}\right) + \Sigma^{-1}\delta\right), D_s^{-1}\right)\\ \left[\Sigma^{-1} \mid \delta, \beta_s\right] &= W\left(\sum\nolimits_{s = 1}^{c}\left(\beta_s - \delta\right)\left(\beta_s - \delta\right)^{T} + \rho_0 I_p, c + \rho_0\right)\\ \left[\delta \mid \beta_s, \Sigma^{-1}\right] &= MVN_p\left(D^{-1}\left(\Sigma^{-1}\sum\nolimits_{s = 1}^{c}\beta_s\right), D^{-1}\right)\end{aligned}$$
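For readers implementing the sampler, the fragment below sketches two of these conditional draws (the individual error precision and the individual intercept) in plain NumPy. The variable names mirror the appendix notation; the loop over salespeople and sales forces and the remaining blocks are omitted, and the code is a sketch under those assumptions rather than the implementation used in the paper.

```python
# Sketch of two Gibbs updates from Appendix II for a single salesperson (s, i).
# y_si: (n_si,) revenue vector; X_si: (n_si, p) covariates; other symbols as in the text.
import numpy as np

rng = np.random.default_rng(1)

def draw_error_precision(y_si, X_si, alpha_si, beta_s, v0, lam0):
    """Draw sigma_si^{-2} from its Gamma full conditional."""
    resid = y_si - alpha_si - X_si @ beta_s
    shape = (v0 + len(y_si)) / 2.0
    rate = (lam0 + resid @ resid) / 2.0
    return rng.gamma(shape, 1.0 / rate)  # NumPy's gamma takes a scale parameter (= 1/rate)

def draw_intercept(y_si, X_si, beta_s, prec_si, xi_s, prec_alpha_s):
    """Draw alpha_si from its Normal full conditional."""
    d_si = len(y_si) * prec_si + prec_alpha_s
    mean = (prec_si * np.sum(y_si - X_si @ beta_s) + prec_alpha_s * xi_s) / d_si
    return rng.normal(mean, np.sqrt(1.0 / d_si))
```

The remaining updates for $\xi_s$, $\gamma$, $\beta_s$, $\Sigma^{-1}$, $\delta$, and the variance components follow the same conjugate pattern.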
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Steenburgh, T.J. Effort or timing: The effect of lump-sum bonuses. Quant Mark Econ 6, 235 (2008). https://doi.org/10.1007/s11129-008-9039-7
Sales force incentives | CommonCrawl |
\begin{document}
\title{Letters relating to a Theorem of Mr. Euler, of the Royal Academy of Sciences at Berlin, and F.R.S. for correcting the Aberrations in the Object-Glasses of refracting Telescopes. \footnote{Delivered to the Royal Society of London on April 9th, 1752, November 23rd, 1752, and July 8th, 1753. Originally published in the {\it Philosophical Transactions of the Royal Society of London}, volume 48, 1754. A copy of the original text is available electronically at the Euler Archive, http://www.eulerarchive.org/. These letters comprise E210 in the Enestr\"om Index.} } \author{James Short, John Dollond, Leonhard Euler \footnote{Portions authored by Euler translated from the French by Erik R. Tou, Dartmouth College.} } \date{\today} \maketitle
\subj{I. A Letter from Mr. James Short, F.R.S. to Peter Daval, Esq; F.R.S.}
\subj{(Read April 9, 1752.)}
[Surrey-street, April 9, 1752.]
\vskip 12pt
Dear Sir,
\vskip 12pt
There is published, in the Memoirs of the Royal Academy at Berlin, for the year 1747, a theorem by Mr. Euler, in which he shews a method of making object-glasses of telescopes, in such a manner, as not to be affected by the aberrations arising from the different refrangibility of the rays of light; these object-glasses consisting of two {\it meniscus} lens's, with water between them.
\vskip 12pt
Mr. John Dollond, who is an excellent analyst and optician, has examined the said theorem, and has discovered a mistake in it, which arises by assuming an hypothesis contrary to the established principles of optics; and, in consequence of this, Mr. Dollond has sent me the inclosed letter, which contains the discovery of the said mistake, and a demonstration of it.
\vskip 12pt
In order to act in the most candid manner with Mr. Euler, I have proposed to Mr. Dollond to write to him, shewing him the mistake, and desiring to know his reasons for that hypothesis; and therefore I desire, that this letter of Mr. Dollond's to me may be kept amongst the Society's papers, till Mr. Euler has had a sufficient time to answer Mr. Dollond's letter to him. I am,
{\hskip 250pt SIR,}
\vskip 12pt
{\hskip 250pt Your most humble servant,}
\vskip 12pt
{\hskip 250pt James Short.}
\break
\subj{II. A letter from Mr. John Dollond to James Short, A.M.F.R.S. concerning a Mistake in M. Euler's Theorem for correcting the Aberrations in the Object-Glasses of refracting Telescopes.}
\subj{(Read Nov. 23, 1752.)}
\vskip 12pt
[London, March 11, 1752.]
\vskip 12pt
SIR,
\vskip 12pt
The famous experiments of the prism, first tried by Sir Isaac Newton, sufficiently convinced that great man, that the perfection of telescopes was impeded by the different refrangibility of the rays of light, and not by the spherical figure of the glasses, as the common notion had been till that time; which put the philosopher upon grinding concave metals, in order to come at that by reflexion, which he despair'd of obtaining by refraction. For, that he was satisfied of the impossibility of correcting that aberration by a multiplicity of refractions, appears by his own words, in his treatise of Light and Colours, Book I. Part 2. Prop. 3.
\vskip 12pt
``I found moreover, that when light goes out of air through several contiguous mediums, as through water and glass, as often as by contrary refractions it is so corrected, that it emergeth in lines parallel to those in which it was incident, continues ever after to be white. But if the emergent rays be inclined to the incident, the whiteness of the emerging light will by degrees, in passing on from the place of emergence, become tinged in its edges with colours."
\vskip 12pt
It is therefore, Sir, somewhat strange, that any body now-a-days should attempt to do that, which so long ago has been demonstrated impossible. But, as so great a mathematician as Mr. Euler has lately published a theorem\footnote{Vide Memoires of the Royal Academy of Berlin for the Year 1747.} for making object-glasses, that should be free from the aberration arising from the different refrangibility of light, the subject deserves a particular consideration. I have therefore carefully examined every step of his algebraic reasoning, which I have found strictly true in every part. But a certain hypothesis in page 285 appears to be destitute of support either from reason or experiment, though it be there laid down as the foundation of the whole fabrick. This gentleman puts $m:1$ for the ratio of refraction out of air into glass of the mean refrangible rays, and $M:1$ for that of the least refrangible. Also for the ratio of refraction out of air into water of the mean refrangible rays he puts $n:1$, and for the least refrangible $N:1$. As to the numbers, he makes $m=\frac{31}{20}$, $M=\frac{77}{50}$, and $n=\frac{4}{3}$; which so far answer well enough to experiments. But the difficulty consists in finding the value of $N$ in a true proportion to the rest.
\vskip 12pt
Here the author introduces the supposition above-mention'd; which is, that $m$ is the same power of $M$, as $n$ is of $N$; and therefore puts $n=m^a$, and $N=M^a$. Whereas, by all the experiments that have hitherto been made, the proportion will come out thus, $m-1:n-1::m-M:n-N$.
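[Editorial check, not part of the original letter: with $m=\frac{31}{20}$, $M=\frac{77}{50}$ and $n=\frac{4}{3}$, Euler's supposition $n=m^a$, $N=M^a$ gives $a=\log\frac{4}{3}/\log\frac{31}{20}\approx 0.656$ and hence $N\approx 1.3277$, whereas the proportion $m-1:n-1::m-M:n-N$ gives $N=n-\frac{(n-1)(m-M)}{m-1}=\frac{73}{55}\approx 1.3273$; the two suppositions thus differ only in the fourth decimal place of $N$.]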
\vskip 12pt
The letters fixed upon by Mr. Euler to represent the radii of the four refracting surfaces of his compound object-glass, are $f$ $g$ $h$ and $k$, and the distance of the object he expresses by $a$; then will the focal distance be $=\frac{1}{n(\frac{1}{g} - \frac{1}{h}) + m(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k}) - \frac{1}{a} - \frac{1}{f} + \frac{1}{k}}$. Now, says he, it is evident, that the different refrangibility of the rays would make no alteration, either in the place of the image, or in its magnitude, if it were possible to determine the radii of the four surfaces, so as to have $n(\frac{1}{g}-\frac{1}{h}) + m(\frac{1}{f} - \frac{1}{g} + \frac{1}{k} - \frac{1}{h}) = N(\frac{1}{g}-\frac{1}{h}) + M(\frac{1}{f} - \frac{1}{g} + \frac{1}{k} - \frac{1}{h})$. And this, Sir, I shall readily grant. But when the surfaces are thus proportioned, the sum of the refractions will be $=0$; that is to say, the emergent rays will be parallel to the incident. For, if $n(\frac{1}{g}-\frac{1}{h}) + m(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k}) = N(\frac{1}{g}-\frac{1}{h}) + M(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k})$, then $n - N(\frac{1}{g} - \frac{1}{h}) + m - M(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k}) = 0$. Also if $n-N:m-M::n-1:m-1$, then $n-1(\frac{1}{g} - \frac{1}{h}) + m-1(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k})=0$; or otherwise $n(\frac{1}{g} - \frac{1}{h}) + m(\frac{1}{f} - \frac{1}{g} + \frac{1}{h} - \frac{1}{k}) - \frac{1}{f} + \frac{1}{k} = 0$; which reduces the denominator of the fraction expressing the focal distance to $\frac{1}{a}$. Whence the focal distance will be $=a$; or, in other words, the image will be the object itself. And as, in this case, there will be no refraction, it will be easy to conceive how there should be no aberration.
\vskip 12pt
And now, Sir, I think I have demonstrated, that Mr. Euler's theorem is intirely founded upon a new law of refraction of his own; but that, according to the laws discover'd by experiment, the aberration arising from the different refrangibility of light at the object-glass cannot be corrected by any number of refractions whatsoever. I am,
{\hskip 250pt SIR,}
\vskip 12pt
{\hskip 250pt Your most obedient humble servant,}
\vskip 12pt
{\hskip 250pt John Dollond.}
\break
\subj{III. Mr. Euler's Letter to Mr. James Short, F.R.S.}
\subj{(Read July 8, 1753.)}
\vskip 12pt
[Berlin, 19 June, 1752.]
\vskip 12pt
Sir,
\vskip 12pt
You have done me a great service, in having disposed Mr. Dollond to suspend the proposal of his objections against my objective lenses until I have had an opportunity to respond, and to you I am infinitely obliged. I take, therefore, the liberty of addressing to you my response to him, requesting that you, after having deigned to your examination, send it to him at your pleasure; and in case you judge this material worthy of the attention of the Royal Society, I ask that you communicate the details of the proofs in my theory, which I have given in this letter. In any case, I hope that Mr. Dollond will be satisfied, since I agree with him on the little success that one could achieve with my lenses, by handling them according to the ordinary manner.
\vskip 12pt
\indent I am honored to be, with the most perfect esteem,
{\hskip 250pt Sir,}
\vskip 12pt
{\hskip 250pt Your most humble, and}
\vskip 12pt
{\hskip 250pt most obedient servant,}
\vskip 12pt
{\hskip 250pt L. Euler.}
\break
\subj{IV. To Sir Mr. Dollond}
\subj{(Read July 8, 1753.)}
\vskip 12pt
[Berlin, June 15, 1752.]
\vskip 12pt
Sir,
\vskip 12pt
Being very appreciative of the honor you have shown me on the subject of the objective lenses which I had proposed, I must also tell you quite honestly that I have met with the most sizable obstacles in the execution of your design, consisting of four faces, which must conform exactly to the proportions that I had found. However, having done these experiments many times, which seem to have been more successful, we have found that the interval between the two lenses of the red and violet rays is very small, in which case there could not be a single lens of the same focal distance. Nevertheless, I must confess that such a lens, even if it were perfectly executed on my principles, would have other defects which would make matters worse than even ordinary lenses; it is that such a lens would not admit a very small opening due to the large curves which one must give to the interior faces, so that when one gives an ordinary opening, the image would deflect most confusedly.
\vskip 12pt
Thus since you have taken pains, Sir, to experiment on such lenses, and having done said experiments,\footnote{Mr. Dollond, in his letter to Mr. Euler, here referred to, does not say that he had made any trials himself, but only he had understood that such had been made by others, without success.} I pray that you discern the flaws, which can give rise to the different refrangibility of light rays, that arise from too large an opening; for this effect you will have only to leave a very small opening. However, if my theory were sound, as to which I will soon have the honor of speaking, it would be advisable to remedy this flaw: it would be necessary to give up on the spherical figure which one usually gives to the faces of the lenses, and endeavor to give another figure, and I have remarked that the figure of a parabola is advantageous, for it admits a very considerable opening. Our scientist Mr. Lieberkuhn has worked with lenses for which the curvature of the faces decreases from the middle to the edges, and he has found very great advantages. For these reasons I believe that my theory does not suffer anything on this front.
\vskip 12pt
For the theory, I agree with you, sir, that supposing the ratio of refraction of one medium in another unspecified medium for the middle light rays is $m$ to $1$, and for the red light rays is $M$ to $1$, the ratio of $m-M$ to $m-1$ will always be nearly constant, and that this conforms to nature, as the great Newton has remarked. This ratio does not differ more than almost imperceptibly from my theory: because since I maintain that $M=m^{\alpha}$, and that $m$ differs usually very little from $1$, namely $m=1+\omega$; and since $M=m^{\alpha}=1+\alpha \log(m)$ arbitrarily close, and $\log(1+\omega)=\log(m)=\omega$, also arbitrarily close, I will get $m-M=1+\omega-1-\alpha\omega=(1-\alpha)\omega$, and $m-1=\omega$, therefore the ratio $\frac{m-M}{m-1}$ will be $=1-\alpha$, very nearly constant. Further, I conclude that the experiments from which the great Newton had drawn his ratio could not contradict my theory.
\vskip 12pt
Secondly, I agree also that if the ratio $\frac{m-M}{m-1}=Const.$ were shown rigorously, it would not suffice to remedy the defect which results from the different refrangibility of the light rays, regardless of how one places the various transparent media, and that the interval between the various foci would always hold a constant ratio with the entire focal distance of the lens. But it is precisely this consideration, which to me furnishes the strongest argument: the eye seems to me such a perfect optical machine, but does not sense in any way the different refrangibility of the light rays. However small the focal distance may be, the sensitivity is so great that the various foci, if there be more than one, would not fail to considerably trouble the vision. However, it is quite certain that a healthy eye does not sense the effect of the different refrangibility.
\vskip 12pt
The marvelous structure of the eye, and the various humours of which it is composed, infinitely confirms me in this sentiment. For if it acted only to produce a representation on the back of the eye, one humour alone would have been sufficient; and the Creator would surely not have employed more. Further, I conclude that it is possible to eliminate the effect of the different refrangibility of the light rays by an appropriate arrangement of several transparent media; thus since this would not be possible if the formula $\frac{m-M}{m-1}=Const.$ were shown rigorously, I draw the conclusion that it does not conform to nature.
\vskip 12pt
But here is a proof directing my thesis: I conceive of various transparent media, $A$, $B$, $C$, $D$, $E$, $etc$. which differ equally among themselves by ratio of their optical density, so that the ratio of refraction of each in the following will be the same. That is to say therefore in the passage from the first into the second the ratio of refraction for the red light rays $=r:1$, and for the violets $=v:1$; which will be the same in the passage from the second into the third, the third into the fourth, the fourth into the fifth, and so on. Moreover, it is clear that in the passage from the first into the third the ratio will be $=r^2:1$ for the red rays, and $=v^2:1$ for the violets; similarly, in the passage from the first into the fourth, the ratios will be $r^3:1$ and $v^3:1$.
\vskip 12pt
Therefore if in the passage through an unspecified medium the ratio of refraction of the red rays is $=r^n:1$, then the violet rays will be $v^n:1$; all this perfectly conforms to the principles of the great Newton. We suppose $r^n=R$, and $v^n=V$, such that $R:1$, and $V:1$ express the ratios of refraction for the red and violet rays in an arbitrary passage: and having $n\cdot \log(r)=\log(R)$ and $n\cdot \log(v)=\log(V)$ we will have $\log(R):\log(r)=\log(V):\log(v)$, whence $\frac{\log(R)}{\log(V)}=\frac{\log(r)}{\log(v)}$. Or put $v=r^{\alpha}$, and since $\log(v)=\alpha\cdot \log(r)$, one has $\frac{\log(R)}{\log(V)}=\frac{1}{\alpha}$, whence $\log(V)=\alpha\cdot \log(R)$, and then $V=R^{\alpha}$.
\vskip 12pt
We have seen therefore the foundation of the principle which I employed in my article, which to me appears again unwavering; however I subject myself to the decision of the illustrious Royal Society, and to your judgement in particular, having the honor to be with the most perfect esteem, Sir,
{\hskip 250pt Your most humble}
\vskip 12pt
{\hskip 250pt and most obedient servant,}
\vskip 12pt
{\hskip 250pt L. Euler.}
\end{document} | arXiv |
Czechoslovak Mathematical Journal
Mancera, Carmen H. ; Paúl, Pedro J.
On Pták's generalization of Hankel operators. (English). Czechoslovak Mathematical Journal, vol. 51 (2001), issue 2, pp. 323-342
MSC: 46E22, 47A20, 47B35 | MR 1844313 | Zbl 0983.47019
Toeplitz operators; Hankel operators; minimal isometric dilation
In 1997 Pták defined generalized Hankel operators as follows: Given two contractions $T_1\in {\mathcal B}({\mathcal H}_1)$ and $T_2 \in {\mathcal B}({\mathcal H}_2)$, an operator $X \:{\mathcal H}_1 \rightarrow {\mathcal H}_2$ is said to be a generalized Hankel operator if $T_2X=XT_1^*$ and $X$ satisfies a boundedness condition that depends on the unitary parts of the minimal isometric dilations of $T_1$ and $T_2$. This approach, call it (P), contrasts with a previous one developed by Pták and Vrbová in 1988, call it (PV), based on the existence of a previously defined generalized Toeplitz operator. There seemed to be a strong but somewhat hidden connection between the theories (P) and (PV) and we clarify that connection by proving that (P) is more general than (PV), even strictly more general for some $T_1$ and $T_2$, and by studying when they coincide. Then we characterize the existence of Hankel operators, Hankel symbols and analytic Hankel symbols, solving in this way some open problems proposed by Pták.
[1] P. Alegría: Parametrization and Schur algorithm for the integral representation of Hankel forms in $\mathbb{T}^3$. Collect. Math. 43 (1992), 253–272. MR 1252735
[2] J. Arazy, S. D. Fisher, J. Peetre: Hankel operators on weighted Bergman spaces. Amer. J. Math. 110 (1988), 989–1053. DOI 10.2307/2374685 | MR 0970119
[3] S. Axler: Bergman spaces and their operators. Surveys of Some Recent Results in Operator Theory, I, J. B. Conway, B. B. Morrel (eds.), Res. Notes Math. vol. 171, Pitman, Boston, London and Melbourne, 1988. MR 0958569 | Zbl 0681.47006
[4] S. Axler, J. B. Conway, G. McDonald: Toeplitz operators on Bergman spaces. Canad. J. Math. 34 (1982), 466–483. DOI 10.4153/CJM-1982-031-1 | MR 0658979
[5] J. Barría: The commutative product $V^*_{1}V_{2}=V_{2}V_{1}^* $ for isometries $V_{1}$ and $V_{2}$. Indiana Univ. Math. J. 28 (1979), 581–586. MR 0542945 | Zbl 0428.47019
[6] E. L. Basor, I. Gohberg: Toeplitz Operators and Related Topics. Operator Theory: Adv. Appl., vol. 71, Birkhäuser-Verlag, Basel, Berlin and Boston, 1994. MR 1300205
[7] C. A. Berger, L. A. Coburn: Toeplitz operators on the Segal-Bargmann space. Trans. Amer. Math. Soc. 301 (1987), 813–829. DOI 10.1090/S0002-9947-1987-0882716-4 | MR 0882716
[8] F. F. Bonsall, S. C. Power: A proof of Hartman's theorem on compact Hankel operators. Math. Proc. Cambridge Philos. Soc. 78 (1975), 447–450. DOI 10.1017/S0305004100051914 | MR 0383133
[9] A. Böttcher, B. Silbermann: Analysis of Toeplitz Operators. Springer-Verlag, Berlin, Heidelberg and New York, 1990. MR 1071374
[10] A. Brown, P. R. Halmos: Algebraic properties of Toeplitz operators. J. Reine Angew. Math. 213 (1963), 89–102. MR 0160136
[11] J. B. Conway, J. Duncan, A. L. T. Paterson: Monogenic inverse semigroups and their $C^*$-algebras. Proc. Roy. Soc. Edinburgh Sect. A 98 (1984), 13–24. MR 0765485
[12] M. Cotlar, C. Sadosky: Prolongements des formes de Hankel généralisées et formes de Toeplitz. C. R. Acad. Sci. Paris, Ser. I 305 (1987), 167–170. MR 0903954
[13] Ž. Čučković: Commutants of Toeplitz operators on the Bergman space. Pacific J. Math. 162 (1994), 277–285. DOI 10.2140/pjm.1994.162.277 | MR 1251902
[14] Ž. Čučković: Commuting Toeplitz operators on the Bergman space of an annulus. Michigan Math. J. 43 (1996), 355–365. DOI 10.1307/mmj/1029005468 | MR 1398160
[15] K. R. Davidson: Approximate unitary equivalence of power partial isometries. Proc. Amer. Math. Soc. 91 (1984), 81–84. DOI 10.1090/S0002-9939-1984-0735569-1 | MR 0735569 | Zbl 0521.47010
[16] A. Devinatz: Toeplitz operators on $H^2$ spaces. Trans. Amer. Math. Soc. 112 (1964), 307–317. MR 0163174
[17] R. G. Douglas: On majorization, factorization, and range inclusion of operators on Hilbert space. Proc. Amer. Math. Soc. 17 (1966), 413–415. DOI 10.1090/S0002-9939-1966-0203464-1 | MR 0203464 | Zbl 0146.12503
[18] R. G. Douglas: On the operator equation $S^*XT=X$ and related topics. Acta Sci. Math. (Szeged) 30 (1969), 19–32. MR 0250106 | Zbl 0177.19204
[19] R. G. Douglas: Banach Algebra Techniques in Operator Theory. Academic Press, New York, 1972. MR 0361893 | Zbl 0247.47001
[20] P. R. Halmos: Introduction to Hilbert Space and the Theory of Spectral Multiplicity. Second edition, Chelsea, New York, 1957. MR 1653399 | Zbl 0079.12404
[21] P. R. Halmos: A Hilbert Space Problem Book. Second edition, Springer-Verlag, Berlin, Heidelberg and New York, 1982. MR 0675952 | Zbl 0496.47001
[22] P. R. Halmos, L. J. Wallen: Powers of partial isometries. J. Math. Mech. 19 (1969/1970), 657–663. MR 0251574
[23] D. A. Herrero: Quasidiagonality, similarity and approximation by nilpotent operators. Indiana Univ. Math. J. 30 (1981), 199–233. DOI 10.1512/iumj.1981.30.30017 | MR 0604280 | Zbl 0497.47010
[24] T. B. Hoover, A. Lambert: Equivalence of power partial isometries. Duke Math. J. 41 (1974), 855–864. DOI 10.1215/S0012-7094-74-04185-4 | MR 0361872
[25] J. Janas, K. Rudol: Toeplitz operators in infinitely many variables. Topics in Operator Theory, Operator Algebras and Applications (Timisoara, 1994), Rom. Acad., Bucharest, 1995, pp. 147–160. MR 1421121
[26] C. H. Mancera, P. J. Paúl: Compact and finite rank operators satisfying a Hankel type equation $T_2X=XT_1^*$. Integral Equations Operator Theory (to appear). MR 1829281
[27] C. H. Mancera, P. J. Paúl: Properties of generalized Toeplitz operators. Integral Equations Operator Theory (to appear). MR 1829517
[28] C. H. Mancera, P. J. Paúl: Remarks, examples and spectral properties of generalized Toeplitz operators. Acta Sci. Math. (Szeged) 66 (2000), 737–753. MR 1804222
[29] C. Le Merdy: Formes bilinéaires hankéliennes compactes sur $H^2(X) \times H^2(Y)$. Math. Ann. 293 (1992), 371-384. DOI 10.1007/BF01444721 | MR 1166127
[30] G. L. Murphy: Inner functions and Toeplitz operators. Canad. Math. Bull. 36 (1993), 324–331. DOI 10.4153/CMB-1993-045-9 | MR 1245817 | Zbl 0792.47029
[31] Z. Nehari: On bounded bilinear forms. Ann. Math. 65 (1957), 153–162. DOI 10.2307/1969670 | MR 0082945 | Zbl 0077.10605
[32] N. K. Nikolskii: Treatise on the Shift Operator. Springer-Verlag, Berlin, Heidelberg and New York, 1986. MR 0827223
[33] L. Page: Bounded and compact vectorial Hankel operators. Trans. Amer. Math. Soc. 150 (1970), 529–539. DOI 10.1090/S0002-9947-1970-0273449-3 | MR 0273449 | Zbl 0203.45701
[34] V. V. Peller: Vectorial Hankel operators, commutators and related operators of the Schatten-von Neumann class $\sigma _p$. Integral Equations Operator Theory 5 (1982), 244–272. DOI 10.1007/BF01694041 | MR 0647702
[35] S. C. Power: Hankel Operators on Hilbert Space. Res. Notes Math., vol. 64, Pitman, Boston, London and Melbourne, 1982. MR 0666699 | Zbl 0489.47011
[36] V. Pták: Factorization of Toeplitz and Hankel operators. Math. Bohem. 122 (1997), 131–145. MR 1460943
[37] V. Pták, P. Vrbová: Operators of Toeplitz and Hankel type. Acta Sci. Math. (Szeged) 52 (1988), 117–140. MR 0957795
[38] V. Pták, P. Vrbová: Lifting intertwining dilations. Integral Equations Operator Theory 11 (1988), 128–147. DOI 10.1007/BF01236657 | MR 0920738
[39] M. Rosenblum: Self-adjoint Toeplitz operators. Summer Institute of Spectral Theory and Statistical Mechanics 1965 (1966), Brookhaven National Laboratory, Upton, N. Y. Zbl 0165.47703
[40] B. Sz.-Nagy, C. Foiaş: Harmonic Analysis of Operators on Hilbert Space. Akadémiai Kiadó and North-Holland, Budapest and Amsterdam, 1970. MR 0275190
[41] B. Sz.-Nagy, C. Foiaş: An application of dilation theory to hyponormal operators. Acta Sci. Math. (Szeged) 37 (1975), 155–159. MR 0383131
[42] B. Sz.-Nagy, C. Foiaş: Toeplitz type operators and hyponormality. Dilation Theory, Toeplitz Operators and Other Topics, Operator Theory: Adv. Appl. vol. 11, Birkhäuser-Verlag, Basel, Berlin and Boston, 1983, pp. 371–378. MR 0789650
[43] N. Young: An Introduction to Hilbert Space. Cambridge University Press, Cambridge, 1988. MR 0949693 | Zbl 0645.46024
[44] D. Zheng: Hankel operators and Toeplitz operators on the Bergman space. J. Funct. Anal. 83 (1989), 98–120. DOI 10.1016/0022-1236(89)90032-3 | MR 0993443 | Zbl 0678.47026
[45] D. Zheng: Toeplitz operators and Hankel operators on the Hardy space of the unit sphere. J. Funct. Anal. 149 (1997), 1–24. DOI 10.1006/jfan.1997.3110 | MR 1471097 | Zbl 0909.47020
[46] K. Zhu: Operator Theory in Function Spaces. Pure Appl, Math. vol. 139, Marcel Dekker, Basel and New York, 1990. MR 1074007 | Zbl 0706.47019 | CommonCrawl |
Does classical electromagnetism really predict the instability of atoms?
I will try to give a concise summary of what I wrote below. I understand that it is very long and apologize if I am wasting your time.
I used the Liénard-Wiechert potential and the Lorentz force formula to derive equations of motion for charged particles that do not involve the $\mathbf{E}$ and $\mathbf{B}$ fields. I use this in a two-body problem setup with two charges, and want to see whether the resulting equations result in spiraling orbits.
More specifically, I plug in the formulas for $\mathbf{E}_2$ and $\mathbf{B}_2$ given by Liénard-Wiechert into the Lorentz force formula $\mathbf{F}_{12}(\mathbf{r}_1(t), t) = q_1(\mathbf{E}_2(\mathbf{r}_1(t), t) + \mathbf{\dot{r}}_1(t)\times\mathbf{B}_2(\mathbf{r}_1(t), t))$ and use the classical equation of motion $m_1\mathbf{\ddot{r}}_1(t) = \mathbf{F}_{12}(\mathbf{r}_1(t), t)$. A second equation is obtained for particle 2 using the expressions for $\mathbf{E}_1$ and $\mathbf{B}_1$ instead.
The full equations are complicated so I take a limit as the mass of the nucleus goes to infinity, and (assuming my calculations are correct), I obtain a simpler equation of motion formally identical to that of planetary motion around the sun. This predicts elliptical trajectories for most initial conditions.
The concern is that the resulting motion is stable, and this seems to contradict what the usual accounts say about classical electromagnetism failing to explain atomic orbit stability.
If you want more detail, you can also read what I wrote below.
Suppose for a moment that we work classically and use the planetary model for the atom (hydrogen, for simplicity), as a positively charged nucleus with a negatively charged electron orbiting it in a similar way that planets orbit around the Sun.
The usual story goes that due to the electron orbiting around the nucleus, it undergoes acceleration which will cause the electron to radiate electromagnetic waves. The energy of this radiation is considered to be taken away over time from the electron's total energy via the Larmor formula, and therefore the model predicts a collapse of the electron's orbit over a short period of time because the radius of the orbit must decrease to compensate for the diminished energy of the electron.
At the risk of sounding ridiculous to more knowledgeable people, I would like to challenge this assumption with the following considerations (only as a way to clarify my own understanding of the problem). It appears to me that this problem is arising only because we consider the electromagnetic field to have an existence independent of the charges generating it, and carrying energy and momentum on its own.
But it seems possible to describe the fundamentals of (classical) electromagnetism without resorting to the concept of electromagnetic fields, by the use of a combination of the Lorentz force law and the Liénard-Wiechert potential. In particular, one can substitute the explicit expressions for the $\mathbf{E}$ and $\mathbf{B}$ fields obtained from the Liénard-Wiechert formulas into the Lorentz force formula to derive the force between two charged particles moving on arbitrary paths in space. One can then derive a classical equation of motion for the particles using Newtonian mechanics, or its special relativistic correction.
Explicitly, we obtain this system of two ODEs, where $m_1, m_2, q_1, q_2$ are the masses and charges of the two particles, $\mathbf{r}_1(t), \mathbf{r}_2(t)$ are the paths, and other quantities are defined in the Wikipedia page for the Liénard-Wiechert potential:
$$m_1{\mathbf{\ddot{r}}_1}(t) = {\frac {\mu_0c^2q_1q_2}{4\pi}}\left(1 + \left[\mathbf{\dot{r}}_1(t)\times\left[\frac{\mathbf{n}_2(t_r)}{c}\times\right]\right]\right)$$
$${\left[\frac{\mathbf {n}_2(t_r) -{\boldsymbol {\beta}_2(t_r)}}{\gamma_2 ^{2}(t_r)(1-\mathbf {n}_2(t_r) \cdot {\boldsymbol {\beta }_2(t_r)})^{3}|\mathbf {r}_1(t) -\mathbf {r}_2(t_r)|^{2}}+\frac{\mathbf {n}_2(t_r) \times {\big (}(\mathbf {n}_2(t_r) -{\boldsymbol {\beta }_2(t_r)})\times {{\boldsymbol {\dot{\beta} }_2(t_r)}}{\big )}}{c(1-\mathbf {n}_2(t_r) \cdot {\boldsymbol {\beta }_2(t_r)})^{3}|\mathbf {r}_1(t) -\mathbf {r}_2(t_r)|}\right]}$$
where $\mathbf {n}_2(t_r) = \frac{\mathbf{r}_1(t)-\mathbf{r}_2(t_r)}{|\mathbf{r}_1(t)-\mathbf{r}_2(t_r)|}$, $\boldsymbol {\beta}_2(t_r) = \frac{\mathbf{\dot{r}}_2(t_r)}{c}$ and $\gamma_2(t_r) = \frac{1}{\sqrt{1-|\boldsymbol {\beta}_2(t_r)|^2}}$, and similarly for $\mathbf {n}_1(t_r)$, $\boldsymbol {\beta}_1(t_r)$ and $\gamma_1(t_r)$. The retarded time is defined implicitly by the equation $t_r = t - \frac{1}{c}|\mathbf{r}_1(t)-\mathbf{r}_2(t_r)|$. I slightly abused notation by "factoring out" the cross product terms to avoid duplicating things, hopefully this is clear enough.
This can be generalized to a system of $N$ charged particles in a similar way. I have not performed the calculations myself due to the apparent complexity of the resulting equation of motion, but in principle one could check whether the solutions correspond to what is predicted with the electron spiraling down into the nucleus, or whether it leads to something close to elliptical orbits.
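To give an idea of how one might set this up numerically (a rough sketch I have not validated; all names below are placeholders), the implicit retarded-time condition can be solved at each time step by fixed-point iteration against the stored trajectory of the other particle, which converges as long as the particles move slower than $c$:

import numpy as np

c = 1.0  # speed of light in the chosen units

def retarded_time(t, r1_now, t_hist, r2_hist, tol=1e-12, max_iter=200):
    """Solve t_r = t - |r1(t) - r2(t_r)| / c by fixed-point iteration.
    t_hist is an increasing array of past times, r2_hist the matching
    (len(t_hist), 3) array of stored positions of particle 2."""
    def r2_at(tr):
        # componentwise linear interpolation of the stored history
        return np.array([np.interp(tr, t_hist, r2_hist[:, k]) for k in range(3)])
    t_r = t
    for _ in range(max_iter):
        t_new = t - np.linalg.norm(r1_now - r2_at(t_r)) / c
        if abs(t_new - t_r) < tol:
            break
        t_r = t_new
    return t_r

The velocity and acceleration of particle 2 would be interpolated at $t_r$ in the same way and inserted into the force expression above, so the integrator has to carry enough history to cover the light-travel delay (plus some assumed prehistory to start it off).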
My intuition tells me that we won't observe the kind of spiralling down predicted by assuming energy is stored in the electric fields throughout all of space in this setup. Instead we consider the $\mathbf{E}$ and $\mathbf{B}$ fields as useful mathematical abstractions to simplify the expression given above into more manageable components. The interpretation of the Poynting vector would be as energy flux density which would exist only if there exist other charges that would get accelerated by the fields. In particular, the atomic stability problem would require additional charges being present near the hydrogen atom, which would obviously affect the electron's orbit through additional force terms as a multi-body problem. In that scenario, there would be an energy transfer between the electron and other charges nearby. But even then it is not clear that the electron automatically loses energy, because the nearby particles will in turn radiate and couple with the electron's motion.
We can simplify the equations above by assuming the mass of particle 2 to be very large, so that it remains effectively stationary in some inertial frame of reference, and located at the origin. Then the equations above simplify greatly and we have the following equation of motion for particle 1 (with $\mathbf{r}_2(t) = \mathbf{0}$):
$$m_1{\mathbf{\ddot{r}}_1}(t) = {\frac {\mu_0c^2q_1q_2}{4\pi}}\frac{\mathbf {r}_1(t)}{|\mathbf {r}_1(t)|^{3}}$$
which is just the equation of motion of a charged particle moving in an electrostatic potential with the Coulomb force. This limiting model is formally identical to the model of a planet orbiting a massive object under Newtonian gravitation, and we clearly have elliptical orbits. Therefore it would be incorrect to claim that classical electromagnetism predicts instabilities of the atom (in this limiting case with a very massive nucleus, at least) if we don't consider the energy supposedly stored in the EM fields. Also it does away with the self-energy problem of a charged particle (integrating the "electrical field energy density" over all space gives an infinite result, which would be simply a meaningless calculation since there is no actual energy stored in such a field).
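This limiting case is easy to check numerically. Here is a minimal velocity-Verlet sketch in normalized units (none of the constants are physical values; I just set the coefficient magnitude over the mass to $k=1$ for the attractive case $q_1 q_2 < 0$); the energy shows no secular drift and the orbit stays a closed ellipse rather than spiralling in:

import numpy as np

k = 1.0                      # |mu0 c^2 q1 q2| / (4 pi m1), normalized
r = np.array([1.0, 0.0])     # planar initial conditions are enough
v = np.array([0.0, 0.8])     # below circular speed, so an ellipse
dt, steps = 1.0e-3, 200_000

def acc(r):
    return -k * r / np.linalg.norm(r) ** 3   # attractive Coulomb limit

a = acc(r)
energy = []
for _ in range(steps):
    v += 0.5 * dt * a
    r += dt * v
    a = acc(r)
    v += 0.5 * dt * a
    energy.append(0.5 * v @ v - k / np.linalg.norm(r))

print(max(energy) - min(energy))   # stays tiny: no spiral towards the origin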
I hope what I said above was sufficiently clear, and I would be curious whether solutions to the equations of motion I have described have been calculated (or approximated somewhat) to predict classical orbits of an electron around the nucleus.
Note also that I am not questioning the validity of quantum mechanics and more detailed theories of matter. I am simply wondering whether the specific problem of atomic instability supposedly predicted by classical electromagnetics arises only due to the assumed existence of electromagnetic fields carrying energy, or whether a field-free formulation of electromagnetics using the equations of motion above is also subject to this problem. I am sure there are other problems that this model cannot resolve, such as the existence of discrete atomic emission and absorption spectra. But the important observation I wanted to make is that starting from the classical Maxwell equations and Lorentz force, we can derive the Liénard-Wiechert potential, and then derive the explicit equations of motion above, and finally forget about the existence of the $\mathbf{E}$ and $\mathbf{B}$ fields. This leads to a classical model of a two-body atom with stable orbits.
quantum-mechanics electromagnetism electromagnetic-radiation classical-electrodynamics atoms
edited Feb 24 at 6:11
Tob Ernack
$\begingroup$ This does not seem to be a question. $\endgroup$ – StephenG Feb 24 at 3:36
$\begingroup$ The question is whether this classical model predicts the instability of the atom and whether it is truly a consequence of classical electromagnetism. $\endgroup$ – Tob Ernack Feb 24 at 3:38
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat, after the first couple which were relevant to improving the question. $\endgroup$ – David Z♦ Feb 24 at 4:47
$\begingroup$ @Tob I do believe there's a good question in here somewhere, but it's a really long post. Any way you could omit some of the details to keep it a bit shorter? Or if it's really necessary to have a lot of detail, at least have a shorter summary of the question at the beginning, which people can read to get a clear sense of what you're asking, then you can fill in more of the details after that. $\endgroup$ – David Z♦ Feb 24 at 4:50
$\begingroup$ I apologize if I was being too verbose. I prefer to write down all my thoughts to clear up as many misunderstandings as I can. I was afraid that being too concise would lead to people making incorrect assumptions about my problem. $\endgroup$ – Tob Ernack Feb 24 at 4:53
John Lighton Synge had a similar idea and analyzed numerically the equations of motion for two oppositely charged particles of arbitrary masses where only retarded EM forces are present.
J. L. Synge, On the electromagnetic two–body problem., Proc. Roy. Soc. A 177 118–39 (1940)
https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1940.0114
He found that the system still collapses, but the greater the difference in masses, the more slowly it does so. For hydrogen atom, the time of collapse turned out to be hundreds of times longer than the time naively obtained from the Larmor formula.
The collapse is due to a simple but perhaps too simple assumption: that the force acting on any particle is just the retarded EM force due to the other particle. Then, because the motion of the particles is anticorrelated (when one moves left, the other moves right), the system radiates EM energy away.
If we introduce into the model background EM radiation that acts on both particles (additional forces), collapse ceases to be inevitable, because the radiated energy can be supplied by the background radiation forces. There are some papers on that - for more on this, see also my answer here:
The classical electrodynamic atom
edited Jun 18 at 20:29
Ján Lalinský
$\begingroup$ This is also a nice answer. I was also thinking about some numerical solutions for the equations I mentioned, so I am glad someone else did the work before. So I guess the collapse would arise from the fact that the force is neither central nor conservative due to its complicated dependence on the retarded time? $\endgroup$ – Tob Ernack Feb 25 at 17:00
$\begingroup$ I guess the fact that the collapse moves further away in time with increased mass of the nucleus confirms that my calculations are correct, and that the limiting case does give stable orbits. But due to the masses being finite in reality, the forces just become that much more complicated and the collapse becomes inevitable. $\endgroup$ – Tob Ernack Feb 25 at 17:12
$\begingroup$ With hindsight, I realize this should have been obvious perhaps. Conservation of energy and momentum arise from conservative and central forces usually. So when dealing with the weird forces arising from EM, it is likely that these properties simply do not hold. So to keep conservation of energy and momentum, we include these radiation terms which "go out of the system". In fact, I think this answer addresses more directly the questions I was having (not to take away anything from Void's great answer as well), so I will likely accept it. $\endgroup$ – Tob Ernack Feb 25 at 17:26
$\begingroup$ I think it should be clearly stated that the example of Synge creates energy out of nothing, as Synge himself admits in the introduction of the paper. You can see this by computing the Maxwell field sourced by the particles. In the zero-mass-ratio limit it is even a perpetuum mobile supplying its surroundings with radiative energy for eternity. $\endgroup$ – Void Feb 27 at 8:11
$\begingroup$ @Void Synge says actually something different. He states clearly that the energy is not conserved assuming the usual stress-energy tensor [the Poynting formulae - based one], but that there would be conservation if a modified energy-tensor was used. This modified tensor he was talking about would be essentialy the same as the one Frenkel introduced in his 1925 paper based on the Lorentz-force terms in the equations of motion without self-forces. Synge probably wasn't familiar or chose not to mention Frenkel's work when finishing his paper above. $\endgroup$ – Ján Lalinský Feb 27 at 18:58
The essential conceptual problem in your treatment is the fact that you assume that the radiation should somehow emerge solely from particle $A$ acting on particle $B$. This is not true, the radiation (and radiation-reaction) comes from the particle $B$ acting on itself! To understand this, you must consider $A$ and $B$ to be finite bodies first. Then the radiation schematically emerges as:
all the charges in the body $B$ "sending out" their own electromagnetic potential on their null cones $|\vec{r}-\vec{r}_B|-ct = 0$,
body $A$ accelerating charges within body $B$,
and finally, the accelerated charges within $B$ interacting with some of the electromagnetic potential from within $B$ (point 1.)!
(You can swap $B$ and $A$ to get the radiation from particle $A$ as well.)
When the dust settles, you can take a limit of the sizes of the bodies going to zero and you get the famous Abraham-Lorentz-Dirac force acting on either $A$ or $B$: $$F^{\mathrm{ALD}}_\mu = \frac{\mu_o q^2}{6 \pi m c} \left[\frac{d^2 p_\mu}{d \tau^2}-\frac{p_\mu}{m^2 c^2} \left(\frac{d p_\nu}{d \tau}\frac{d p^\nu}{d \tau}\right) \right]$$
Nevertheless, the derivation of this result is notoriously challenging both conceptually and technically. The reason for that is that when you treat the body $B$ as an infinitely small particle, it should not be able to interact with data on its own light-cone because that would mean it is moving beyond the speed of light! On the other hand, the particle is on its own lightcone at $t=0$ and $\vec{r} = \vec{r}_B$, and the potential and the Lorentz force diverge at that exact position!
The only way to resolve this in a rigorous way is to assume, as already mentioned above, that the bodies in question are of finite spatial extent and finite charge densities. Then, you take the limit of the size going to zero. This was carefully redone in 2009 by Gralla, Harte & Wald and I recommend that paper for further information. (The reason why this topic has received heightened interest recently is the fact that gravitational-wave inspirals of small stellar-mass astrophysical objects into supermassive black holes can be treated exactly in the approximation of a "self-forced particle", see Barack & Pound, 2018.)
You can derive the Larmor formula from a particular perturbative approximation of the ALD force called order reduction. First you take the $\mathrm{d}p^\mu/\mathrm{d}\tau$ for the particle without radiation reaction and insert it into $F^{\rm ALD}_\mu$. Then the Larmor formula is just the rate with which this force is taking energy from the particle.
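Schematically, in the nonrelativistic limit and in SI units (keeping only the leading order-reduced term), this chain is just
$$\mathbf F^{\rm ALD}\approx\frac{\mu_0 q^2}{6\pi c}\,\dot{\mathbf a},\qquad \big\langle \mathbf F^{\rm ALD}\cdot\mathbf v\big\rangle=-\frac{\mu_0 q^2}{6\pi c}\,\big\langle \mathbf a^2\big\rangle=-P_{\rm Larmor},\qquad P_{\rm Larmor}=\frac{\mu_0 q^2 a^2}{6\pi c}=\frac{q^2 a^2}{6\pi\epsilon_0 c^3},$$
where the middle equality is an integration by parts over a period of bounded motion and, in the order-reduced scheme, $\mathbf a$ is evaluated from the external force alone.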
EDIT: A broader historical discussion
Jan Lálinský reminded me that there exist formulations of classical electrodynamics that 1) agree with most of the predictions of Maxwell equations in the continuum limit (given a certain "absorbing universe" postulate), and 2) where the "point particle" is not the limit of a finite body and does not feel any self-force. A brief review of these "Schwarzschild-Tetrode-Fokker(-Frenkel)" (STF) electrodynamics was given by Wheeler & Feynman in 1949.
Depending on how exactly do you implement the "absorber universe", the quasi-neutral ensemble of particles far away from your system, the planetary atom is also usually unstable in STF electrodynamics. This is because the energy tends to be stolen from the atom by the ensemble of far-away particles and dissipated (even though possibly at a slower rate). On one hand, this is a nice "Machian" twist on electrodynamics, since the notion of the field is emergent from the physical particles, and the particles would not radiate energy if there were no other particles to pass the energy to. On the other hand, the STF electrodynamics tend to have curious non-local properties such as the absorber universe "knowing" about an action on the particle an infinite time before the action itself occurs! This makes the theory physically unsatisfactory to me.
Consider the following example of a pulsar, whose pulse we detect and whose rotation rate slows down as a consequence of the radiative energy loss. In mainstream electrodynamics, we talk about electromagnetic waves traveling for eons through space from the pulsar as independent energy-carrying entities, while the STF theory defies this picture. While in the mainstream electrodynamics the wave took the energy away from the pulsar and made it slow down its rotation rate, in the STF theory the pulsar slows down (or not) thanks to the fact that it "knows" that energy-receiving objects such as your antenna will be there in a thousand years!!!
Ultimately, both the usual particle+field and STF theories are wrong, and the correct theory of electrodynamics of fundamental point particles is quantum electrodynamics (and even more ultimately the Standard model), so this is more of an academic discussion. However, I find the STF picture grossly undidactic as compared to the understanding of classical electrodynamics as the theory of the electromagnetic field sourced by finite continua that we sometimes limit towards approximate point particles.
Void
$\begingroup$ This is very interesting, so I should really be considering the finite extent of the charged bodies when doing the analysis, before taking any limits? I guess this kind of calculation will be a lot more complicated, I will try to follow the papers you mention. $\endgroup$ – Tob Ernack Feb 24 at 17:51
$\begingroup$ I know that in the gravitational case, one can treat spherical bodies as point particles in the region outside of their radius, so I wouldn't intuitively expect this to give qualitatively different results but I guess the electromagnetic equations might have different behavior due to the finite propagation velocity. $\endgroup$ – Tob Ernack Feb 24 at 18:07
$\begingroup$ @TobErnack: The gravitational self-force does not arise in the Newtonian theory of gravity, only in relativistic gravity (general relativity), where the speed of light is also a limit on the propagation speed of the field. You can derive the electromagnetic self-force also using other, less rigorous ways. One of them is to take half the advanced plus half the retarded Liénard-Wiechert potentials generated by $B$, and compute the Lorentz force the field exerts on $B$ itself from them. The infinites cancel and you get the ALD force. $\endgroup$ – Void Feb 24 at 18:45
$\begingroup$ > "This is not true, the radiation (and radiation-reaction) comes from the particle acting on itself!" This is right as far as we are talking about extended charged distributions. But not necessarily for point charges - there are consistent theories of point charged massive particles that is free of self-interaction, such as the Tetrode, the Fokker and the Frenkel formulations. $\endgroup$ – Ján Lalinský Feb 25 at 11:38
$\begingroup$ @JánLalinský I agree that once your particles are truly particles, you are opening a Pandora box and you can ascribe them many possible equations of motion that can be even embedded in things such as action principles. But I think that the theories you mention are neither 1) fundamentally correct, 2) practical for approximate computations, nor 3) good for teaching about this subject. So I mean yes, in principle, but that is about it. I added an edit to this effect. $\endgroup$ – Void Feb 25 at 16:32
This subject has long been an interest of mine and from what I can find, it seems to me that the theory of electromagnetic direct particle interaction is not well-developed enough to answer your question. The reason is that the equations you're working with are not just any kind of ODEs, they are delay differential equations. As far as I know, there is no general solution of 2 particles directly interacting in this way for 2 dimensions or more. I think in 1 dimension the problem only has a global solution if the charges have the same sign (though I could be mistaken and the attractive case might be solved too).
There is one group I've found tackling this question in an interesting way. They work with a formalism where they assume direct particle interaction along the light cone AND Maxwell's equations for the field (in which the fields are uniquely prescribed by the particles' trajectories and not independent, dynamic degrees of freedom). They have some results showing that some solutions to this system are also solutions to the direct particle interaction system. Their approach is very mathematical but I will provide links to their work here for your interest, though I am not qualified to vet their approach: https://iopscience.iop.org/article/10.1088/1751-8113/49/44/445202/pdf https://www.tandfonline.com/doi/abs/10.1080/03605302.2013.814142 https://www.sciencedirect.com/science/article/pii/S0022039616000243 https://arxiv.org/abs/1603.05115
From my read of things: If you consider a universe of only two particles, their orbits would be completely stable. Any instability comes from the interaction of the two particle system with a larger bath of many, many particles. This phenomenon can be captured by the Dirac-Abraham-Lorentz force, which arises from these complicated many-particle interactions. In the standard theory, this can be added on as a sort of fudge factor to Maxwell's equations, but in doing so a ton of complications are introduced and mathematically the resulting ODEs may not be well-posed. Nonetheless, the standard Maxwell-Lorentz theory forms the basis of quantization that results in QED, but one can quantize direct particle theories instead. The result is a different quantum field theory of the electromagnetic field with its own pros and cons (the cons outweigh the pros I believe or else it would be the standard approach).
Daniel Kerr
$\begingroup$ The instability comes mainly due to absence of other than retarded forces. There is a numerical solution of the delay equations for purely retarded forces by J.L.Synge, see my answer. Then even two particles alone will not have stable orbits. If forces are half retarded, half-advanced, there may be stable orbits (this was investigated in 1924's by Leigh Page, see journals.aps.org/pr/abstract/10.1103/PhysRev.24.296). $\endgroup$ – Ján Lalinský Feb 25 at 23:15
$\begingroup$ The presence of solely retarded forces is a consequence of the interaction with many other particles. It doesn't make sense to consider a two particle universe with solely retarded forces. I read an interesting analysis about this assumption on arxiv I while back: arxiv.org/abs/1501.03516 I think there are more modern proofs of existence for stable orbits for specific initial data but I honestly don't remember. I don't think there are proofs for existence of solutions for all sets of initial trajectory data in 3 spatial dimensions. $\endgroup$ – Daniel Kerr Feb 26 at 3:29
$\begingroup$ > "It doesn't make sense to consider a two particle universe with solely retarded forces." In what sense it doesn't make sense? It makes perfect sense as a model to simulate on computer. $\endgroup$ – Ján Lalinský Feb 27 at 0:59
$\begingroup$ Within Wheeler Feynman absorber theory, a universe of two particles and nothing else will have half-advanced, half-retarded forces. Fully retarded forces are only a consequence of the interactions with the absorber (comprised of the rest of the universe's particles in a very specific thermodynamic arrangement). My original comment in the answer was referring to the hypothetical universe of only two particles, that is the context for this whole line of questioning. $\endgroup$ – Daniel Kerr Feb 27 at 1:46
$\begingroup$ You are correct as far as the Wheeler-Feynman absorber theory goes, but I think the original question wasn't focused on this theory. $\endgroup$ – Ján Lalinský Feb 27 at 19:06
I agree with Void's answer, but I'll offer another side: By taking the mass of one particle to be much larger than the other, you've ended up with one charge doing all the radiating under the influence of the static Coulomb field of the other: your model of an electron orbiting a nucleus has the same equation of motion as a small body orbiting a much large body in a Newtonian gravitational system. Your model doesn't predict spiralling into the nucleus because you've used the standard Lorentz force without the radiation damping term. It's mathematically correct, but violates conservation of energy and momentum of the whole system when radiation is significant.
Dirac$^1$ addressed this problem by deriving an equation of motion for an arbitrarily moving charge using the local conservation of energy and momentum for a tube surrounding the charge:
$$1/2q^2\epsilon^{-1}\dot{v}_{\mu} - qv_{\nu}f^{\nu}_{\mu} = \dot{B}_{\mu}$$
Where $q$ is the charge, $\epsilon$ the radius of the tube, $v$ the four-velocity, $f$ the field bounded to the charge, $B$ an undetermined four-vector.
To get further, he had to make further assumptions on how simple the equation was likely to be, and add a negative mass to compensate the Coulomb electromagnetic mass contribution $\rightarrow \infty$ as $\epsilon \rightarrow 0$, getting:
$$m\dot{v}_{\mu} - 2/3q^2\ddot{v}_{\mu} - 2/3q^2\dot{v}^2 {v}_{\mu} = ev_{\nu}F^{\nu}_{\mu\;\text{in}}$$
Dirac's derivation has the advantage of ignoring the structure of the charge and the Poincare stresses holding it together; unlike earlier models created by Lorentz, Abraham and Schott. However, it has problems with pre-acceleration and causality leading to it being modified by the Landau-Lifshitz equation.
[1] Dirac, P.A.M. Proc. R. Soc. London A 167, 148 (1938).
answered Feb 26 at 1:46
John McVirgo
$\begingroup$ The model with only interparticle EM forces does not violate conservation of energy, if energy is properly defined (based on the actual equations of motion). There is a problem with Poynting energy in the sense it is infinite, but that is not a problem of the model, since Poynting theorem isn't really relevant for point particles. The proper energy definition based on equations of motion was given by Frenkel in his paper J. Frenkel, Zur Elektrodynamik punktfoermiger Elektronen, Zeits. f. Phys., 32, (1925), p. 518-534. dx.doi.org/10.1007/BF01331692 $\endgroup$ – Ján Lalinský Feb 27 at 1:07
$\begingroup$ Dirac addressed the problem with the imperfect equations of motion for charged body of non-zero spatial extent and there his result is fine as estimate of the EM self-force. However, his formulae do not have sensible limit for point charges. There, Frenkel's approach is much better. $\endgroup$ – Ján Lalinský Feb 27 at 1:18
$\begingroup$ Is there an English translation of Frenkel's paper? $\endgroup$ – Tob Ernack Feb 27 at 22:17
$\begingroup$ @TobErnack not that I know of. $\endgroup$ – Ján Lalinský Feb 28 at 18:58
From a comment of the OP:
The question is whether this classical model predicts the instability of the atom and whether it is truly a consequence of classical electromagnetism
The following discusses the stability part:
The basic question is whether the solutions you find are stable or metastable, i.e. whether a small perturbation (such as radiation in the field of the other charge, or atomic vibrations) will send the electron down to the nucleus.
(I do remember that metastable states can exist in the classical solutions, but cannot find the reference)
From glancing through your derivation, I cannot understand what you do with radiation, i.e. how you could perturb your solution to see if it is stable or metastable. Radiation is an experimental fact: a charge radiates energy away in the electromagnetic spectrum when accelerating in a field. Where is radiation in your formulas?
Bohr did obtain planetary solutions, but needed to impose quantization of angular momentum in order to have stability (radiation would carry off a unit of angular momentum).
I suspect that this is the case with your solutions: they are only metastable, since radiation is not taken into account.
anna v
$\begingroup$ Where is radiation in your formulas? The OP is trying to have this arise from their work. They aren't saying it isn't true. Therefore, assuming radiation exists would be a sort of "assuming what you are trying to prove". $\endgroup$ – Aaron Stevens Feb 24 at 23:27
$\begingroup$ @AaronStevens I think you misunderstand. A model that wants to replace the quantum model MUST describe all data, and radiation does not happen around two charges, it is an experimental effect that it happens aroundany charges in an environment of accelerated motion. So even if he finds a stable orbit, it should be proven that it is stable against small perturbations too, otherwise it is metastable, and those metastanle are known in the classical from maxwell equations too. $\endgroup$ – anna v Feb 25 at 5:01
$\begingroup$ I don't think the OP is looking to replace the quantum model. The OP is aware of the experimental fact of radiation, and they are not debating this. I agree about what you say about stable orbits, but telling the OP they need to take radiation into account completely defeats to purpose of their question. $\endgroup$ – Aaron Stevens Feb 25 at 5:08
$\begingroup$ @AaronStevens I clarified to what part my answer addresses $\endgroup$ – anna v Feb 25 at 7:31
$\begingroup$ But you are still asking for radiation to be accounted for in the equations, whereas the OP wants radiation to come out of the equations $\endgroup$ – Aaron Stevens Feb 25 at 7:33
\begin{definition}[Definition:Enumeration/Countably Infinite]
Let $X$ be a countably infinite set.
An '''enumeration''' of $X$ is a bijection $x: \N \to X$.
\end{definition} | ProofWiki |
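For instance (an illustration, assuming the convention that $0 \in \N$): one enumeration of the countably infinite set $\Z$ is the bijection $x: \N \to \Z$ defined by $x (n) = (-1)^n \left\lceil \dfrac n 2 \right\rceil$, which lists the integers as $0, -1, 1, -2, 2, \ldots$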
2015, 9: 305-353. doi: 10.3934/jmd.2015.9.305
Equidistribution for higher-rank Abelian actions on Heisenberg nilmanifolds
Salvatore Cosentino 1, and Livio Flaminio 2,
Centro de Matemática, Universidade do Minho, Campus de Gualtar, 4710-057 Braga, Portugal
UMR CNRS 8524, UFR de Mathématiques, Université de Lille 1, F59655 Villeneuve d'Asq CEDEX, France
Received: June 2015. Revised: September 2015. Published: November 2015.
We prove quantitative equidistribution results for actions of Abelian subgroups of the $(2g+1)$-dimensional Heisenberg group acting on compact $(2g+1)$-dimensional homogeneous nilmanifolds. The results are based on the study of the $C^\infty$-cohomology of the action of such groups, on tame estimates of the associated cohomological equations and on a renormalization method initially applied by Forni to surface flows and by Forni and the second author to other parabolic flows. As an application we obtain bounds for finite Theta sums defined by real quadratic forms in $g$ variables, generalizing the classical results of Hardy and Littlewood [25,26] and the optimal result of Fiedler, Jurkat, and Körner [17] to higher dimension.
Keywords: Equidistribution, cohomological equation, Heisenberg group.
Mathematics Subject Classification: Primary: 37C85, 37A17, 37A45; Secondary: 11K36, 11L0.
Citation: Salvatore Cosentino, Livio Flaminio. Equidistribution for higher-rank Abelian actions on Heisenberg nilmanifolds. Journal of Modern Dynamics, 2015, 9: 305-353. doi: 10.3934/jmd.2015.9.305
L. Auslander and R. Tolimieri, Abelian harmonic analysis, theta functions and function algebras on a nilmanifold, Lecture Notes in Mathematics, 1975.
L. Auslander, Lecture Notes on Nil-Theta Functions, Regional Conference Series in Mathematics, 1977.
A. Bufetov and G. Forni, Théorèmes limites pour les flots horocycliques, Ann. Sci. Éc. Norm. Supér. (4), 47 (2014), 851.
M. V. Berry and J. Goldberg, Renormalisation of curlicues, Nonlinearity, 1 (1988), 1. doi: 10.1088/0951-7715/1/1/001.
H. Cartan, Ouverts fondamentaux pour le groupe modulaire, Séminaire Henri Cartan, 10, 1957.
N. Chevallier, Meilleures approximations diophantiennes simultanées et théorème de Lévy, Ann. Inst. Fourier (Grenoble), 55 (2005), 1635. doi: 10.5802/aif.2134.
N. Chevallier, Best simultaneous Diophantine approximations and multidimensional continued fraction expansions, Mosc. J. Comb. Number Theory, 3 (2013), 3.
E. A. Coutsias and N. D. Kazarinoff, The approximate functional formula for the theta function and Diophantine Gauss sums, Trans. Amer. Math. Soc., 350 (1998), 615. doi: 10.1090/S0002-9947-98-02024-8.
F. Cellarosi and J. Marklof, Quadratic Weyl sums, automorphic functions, and invariance principles, 2015.
S. G. Dani, Divergent trajectories of flows on homogeneous spaces and Diophantine approximation, J. Reine Angew. Math., 359 (1985), 55. doi: 10.1515/crll.1985.359.55.
D. Damjanović and A. Katok, Local rigidity of partially hyperbolic actions I. KAM method and $\mathbb Z^k$ actions on the torus, Ann. of Math. (2), 172 (2010), 1805. doi: 10.4007/annals.2010.172.1805.
_______, Local rigidity of homogeneous parabolic actions: I. A model case, J. Mod. Dyn., 5 (2011), 203. doi: 10.3934/jmd.2011.5.203.
L. Flaminio and G. Forni, Invariant distributions and time averages for horocycle flows, Duke Math. J., 119 (2003), 465. doi: 10.1215/S0012-7094-03-11932-8.
________, Equidistribution of nilflows and applications to theta sums, Ergodic Theory Dynam. Systems, 26 (2006), 409. doi: 10.1017/S014338570500060X.
________, On the cohomological equation for nilflows, J. Mod. Dyn., 1 (2007), 37.
________, On effective equidistribution for higher step nilflows, 2014.
H. Fiedler, W. Jurkat and O. Körner, Asymptotic expansions of finite theta series, Acta Arith., 32 (1977), 129.
A. Fedotov and F. Klopp, An exact renormalization formula for Gaussian exponential sums and applications, Amer. J. Math., 134 (2012), 711. doi: 10.1353/ajm.2012.0016.
G. B. Folland, Harmonic analysis in phase space, Annals of Mathematics Studies, 1989.
G. Forni, Deviation of ergodic averages for area-preserving flows on surfaces of higher genus, Ann. of Math. (2), 155 (2002), 1. doi: 10.2307/3062150.
F. Götze and M. Gordin, Limiting distributions of theta series on Siegel half-spaces, Algebra i Analiz, 15 (2003), 118. doi: 10.1090/S1061-0022-03-00803-3.
F. Götze and G. Margulis, Distribution of values of quadratic forms at integral points, 2010.
J. Griffin and J. Marklof, Limit theorems for skew translations, J. Mod. Dyn., 8 (2014), 177. doi: 10.3934/jmd.2014.8.177.
R. S. Hamilton, The inverse function theorem of Nash and Moser, Bull. Amer. Math. Soc. (N.S.), 7 (1982), 65. doi: 10.1090/S0273-0979-1982-15004-2.
G. H. Hardy and J. E. Littlewood, Some problems of diophantine approximation, Acta Math., 37 (1914), 193. doi: 10.1007/BF02401834.
________, Some problems of diophantine approximation: An additional note on the trigonometrical series associated with the elliptic theta-functions, Acta Math., 47 (1926), 189. doi: 10.1007/BF02544111.
A. Katok, Cocycles, cohomology and combinatorial constructions in ergodic theory, in collaboration with E. A. Robinson, 1999, 107. doi: 10.1090/pspum/069/1858535.
________, Combinatorial Constructions in Ergodic Theory and Dynamics, University Lecture Series, 2003. doi: 10.1090/ulect/030.
A. Katok and S. Katok, Higher cohomology for abelian groups of toral automorphisms, Ergodic Theory Dynam. Systems, 15 (1995), 569. doi: 10.1017/S0143385700008531.
________, Higher cohomology for abelian groups of toral automorphisms. II. The partially hyperbolic case, and corrigendum, Ergodic Theory Dynam. Systems, 25 (2005), 1909. doi: 10.1017/S0143385705000271.
H. Klingen, Introductory Lectures on Siegel Modular Forms, Cambridge Studies in Advanced Mathematics, 1990. doi: 10.1017/CBO9780511619878.
D. Y. Kleinbock and G. A. Margulis, Logarithm laws for flows on homogeneous spaces, Invent. Math., 138 (1999), 451. doi: 10.1007/s002220050350.
A. Katok and V. Niţică, Rigidity in Higher Rank Abelian Group Actions. Volume I. Introduction and Cocycle Problem, Cambridge Tracts in Mathematics, 2011. doi: 10.1017/CBO9780511803550.
A. Katok and F. Rodriguez Hertz, Rigidity of real-analytic actions of $SL(n,\mathbb Z)$ on $\mathbb T^n$: A case of realization of Zimmer program, Discrete Contin. Dyn. Syst., 27 (2010), 609. doi: 10.3934/dcds.2010.27.609.
A. Katok and R. J. Spatzier, Differential rigidity of Anosov actions of higher rank abelian groups and algebraic lattice actions, Tr. Mat. Inst. Steklova, 216 (1997), 292.
J. C. Lagarias, Best simultaneous Diophantine approximations. I. Growth rates of best approximation denominators, Trans. Amer. Math. Soc., 272 (1982), 545. doi: 10.2307/1998713.
G. W. Mackey, A theorem of Stone and von Neumann, Duke Math. J., 16 (1949), 313. doi: 10.1215/S0012-7094-49-01631-2.
J. Marklof, Limit theorems for theta sums, Duke Math. J., 97 (1999), 127. doi: 10.1215/S0012-7094-99-09706-5.
________, Theta sums, Eisenstein series, and the semiclassical dynamics of a precessing spin, in Emerging Applications of Number Theory (Minneapolis, 1996), 405. doi: 10.1007/978-1-4612-1544-8_17.
J. Marklof, Pair correlation densities of inhomogeneous quadratic forms, Ann. Math. (2), 158 (2003), 419. doi: 10.4007/annals.2003.158.419.
J. Moser, On commuting circle mappings and simultaneous Diophantine approximations, Math. Z., 205 (1990), 105. doi: 10.1007/BF02571227.
D. Mumford, Tata Lectures on Theta. I, with the collaboration of C. Musili, 1983. doi: 10.1007/978-1-4899-2843-6.
________, Tata Lectures on Theta. III, with the collaboration of M. Nori and P. Norman, 1991.
L. Schwartz, Théorie des Distributions, Publications de l'Institut de Mathématique de l'Université de Strasbourg, 1966.
N. A. Shah, Asymptotic evolution of smooth curves under geodesic flow on hyperbolic manifolds, Duke Math. J., 148 (2009), 281. doi: 10.1215/00127094-2009-027.
C. L. Siegel, Symplectic Geometry, Academic Press, 1964.
D. Sullivan, Disjoint spheres, approximation by imaginary quadratic numbers, and the logarithm law for geodesics, Acta Math., 149 (1982), 215. doi: 10.1007/BF02392354.
R. Tolimieri, Heisenberg manifolds and theta functions, Trans. Amer. Math. Soc., 239 (1978), 293. doi: 10.1090/S0002-9947-1978-0487050-7.
A. Weil, Sur certains groupes d'opérateurs unitaires, Acta Math., 111 (1964), 143. doi: 10.1007/BF02391012.
T. D. Wooley, Perturbations of Weyl sums, Int. Math. Res. Notices, 2015. doi: 10.1093/imrn/rnv225.
\begin{document}
\subjclass[2010]{35R03, 58A10, 43A80} \keywords{Heisenberg groups, Rumin complex, $\ell^{p}$-cohomology, parabolicity} \begin{abstract}
Averages are invariants defined on the $\ell^1$ cohomology of Lie groups. We prove that they vanish for abelian and Heisenberg groups. This result completes work by other authors and allows us to show that the $\ell^1$ cohomology vanishes in these cases. \end{abstract} \title{Averages and the $\ell^{q,1}$ cohomology of Heisenberg groups}
\section{Introduction}
\subsection{From isoperimetry to averages of $L^1$ forms}
The classical isoperimetric inequality implies that if $u$ is a compactly supported function on $\mathbb{R}^n$, $\|u\|_{n'}\leq C\|du\|_1$, where $n'=\frac{n}{n-1}$. Equivalently, every compactly supported closed $1$-form $\omega$ admits a primitive $u$ such that $\|u\|_{n'}\leq C\|\omega\|_1$. More generally, if $\omega$ is a closed $1$-form on $\mathbb{R}^n$ which belongs to $L^1$, does it have a primitive in $L^{n'}$ ?
There is an obstruction. We observe that each component $a_i$ of $\omega=\sum_{i=1}^n a_i dx_i$ is again in $L^1$, the integral $\int_{\mathbb{R}^n}a_i \,dx_1\cdots dx_n$ is well defined and it is an obstruction for $\omega$ to be the differential of an $L^q$ function (for every finite $q$). Indeed, if $\omega=du$, $a_n=\frac{\partial u}{\partial x_n}$. For almost every $(x_1,\ldots,x_{n-1})$, the function $t\mapsto \frac{\partial u}{\partial x_n}(x_1,\ldots,x_{n-1},t)$ belongs to $L^1$ and $t\mapsto u(x_1,\ldots,x_{n-1},t)$ belongs to $L^q$. Since $u(x_1,\ldots,x_{n-1},t)$ tends to $0$ along subsequences tending to $+\infty$ or $-\infty$, $$ \int_{\mathbb{R}}\frac{\partial u}{\partial x_n}(x_1,\ldots,x_{n-1},t)\,dt=0, \quad\text{hence}\quad \int_{\mathbb{R}^n}\frac{\partial u}{\partial x_n}\,dx_1\cdots dx_n=0. $$ A similar argument applies to other coordinates. Note that $a_n \,dx_1\wedge\cdots\wedge dx_n=(-1)^{n-1}\omega\wedge(dx_1\wedge\cdots\wedge dx_{n-1})$.
More generally, if $G$ is a Lie group of dimension $n$, there is a pairing, the \emph{average pairing}, between closed $L^1$ $k$-forms $\omega$ and closed left-invariant $(n-k)$-forms $\beta$, defined by $$ (\omega,\beta)\mapsto \int_G \omega\wedge\beta. $$ The integral vanishes if either $\omega=d\phi$ where $\phi\in L^1$, or $\beta=d\alpha$ where $\alpha$ is left-invariant. Indeed, Stokes formula $\int_{M}d\gamma=0$ holds for every complete Riemannian manifold $M$ and every $L^1$ form $\gamma$ such that $d\gamma\in L^1$. Hence the pairing descends to quotients, the $L^{1,1}$-cohomology $$ L^{1,1}H^k(G)=\text{closed }L^1 \,k\text{-forms}/d(L^1 \,(k-1)\text{-forms with differential in }L^1), $$ and the Lie algebra cohomology $$ H^{n-k}(\mathfrak{g})=\text{closed, left-invariant }(n-k)\text{-forms}/d(\text{left-invariant }(n-k-1)\text{-forms}). $$
\subsection{$\ell^{q,1}$ cohomology}
It turns out that $L^{1,1}$-cohomology has a topological content. By definition, the $\ell^{q,p}$ cohomology of a bounded geometry Riemannian manifold is the $\ell^{q,p}$ cohomology of every bounded geometry simplicial complex quasiisometric to it. For instance, of a bounded geometry triangulation. Contractible Lie groups are examples of bounded geometry Riemannian manifolds for which $L^{1,1}$-cohomology is isomorphic to $\ell^{1,1}$-cohomology.
We do not need to define the $\ell^{q,p}$ cohomology of simplicial complexes here, since, according to Theorem 3.3 of \cite{Pansu-Rumin}, every $\ell^{q,p}$ cohomology class of a contractible Lie group can be represented by a form $\omega$ which belongs to $L^p$ as well as an arbitrary finite number of its derivatives. If the class vanishes, then there exists a primitive $\phi$ of $\omega$ which belongs to $L^q$ as well as an arbitrary finite number of its derivatives. This holds for all $1\leq p\leq q\leq\infty$.
Although the $\ell^p$ (with $p>1$), and especially the $\ell^2$, cohomology of Lie groups has been computed and used for large families of Lie groups, very little is known about $\ell^1$ cohomology.
\subsection{From $\ell^{1,1}$ to $\ell^{q,1}$ cohomology}
For instance, the averaging pairing is specific to $\ell^1$ cohomology and has not been studied so far. The first question we want to address is whether the averaging pairing provides information on $\ell^{q,1}$ cohomology for certain $q>1$.
\begin{que} Given a Lie group $G$, for which exponents $q$ and which degrees $k$ is the averaging pairing $\ell^{q,1}H^k(G)\otimes H^{n-k}(\mathfrak{g})\to\mathbb{R}$ well-defined? \end{que}
The question is whether there exists $q>1$ such that the pairing vanishes on all $L^1$ forms which are differentials of $L^q$ forms. We just saw that for abelian groups $\mathbb{R}^n$, the pairing is well defined for $k=1$ and all finite exponents $q$. Here is a more general result.
\begin{theorem} \label{defined} Let $G$ be a Carnot group. In each degree $1\leq k\leq n$, there is an explicit exponent $q(G,k)>1$ (see Definition \ref{jq}) such that the averaging pairing is defined on $\ell^{q,1}H^k(G)$ for $q\in[1,q(G,k)]$. \end{theorem} We shall see that $q(\mathbb{R}^n,k)=n'=\frac{n}{n-1}$ in all degrees. For Heisenberg groups, $q(\mathbb{H}^{2m+1},k)=\frac{2m+2}{2m+1}$ if $k\not=m+1$, and $q(\mathbb{H}^{2m+1},m+1)=\frac{2m+2}{2m}$.
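For concreteness, in the first Heisenberg group $\mathbb{H}^{3}$ (the case $m=1$, so $Q=2m+2=4$) these exponents read $q(\mathbb{H}^3,1)=q(\mathbb{H}^3,3)=\frac{4}{3}$ and $q(\mathbb{H}^3,2)=2$.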
\subsection{Vanishing of the averaging pairing}
The second question we want to address is whether the averaging pairing is trivial or not.
\begin{que} Given a Lie group $G$, for which exponents $q$ and which degrees $k$ does the averaging pairing $\ell^{q,1}H^k(G)\otimes H^{n-k}(\mathfrak{g})\to\mathbb{R}$ vanish? \end{que} The pairing is always nonzero in top degree $k=n$. Indeed, there exist $L^1$ $n$-forms (even compactly supported ones) with nonvanishing integral. However, this seems not to be the case in lower degrees.
\begin{theorem} \label{vanish} Let $G$ be an abelian group or a Heisenberg group of dimension $n$. In each degree $1\leq k< n$, the averaging pairing vanishes on $\ell^{q,1}H^k(G)$ for $q\in[1,q(G,k)]$. \end{theorem}
In combination with results of \cite{BFP3}, Theorem \ref{vanish} implies a vanishing theorem for $\ell^{q,1}$ cohomology.
\begin{cor} Let $G$ be an abelian group or a Heisenberg group of dimension $n$. In each degree $0\leq k< n$, $\ell^{q,1}H^k(G)=0$ for $q\geq q(G,k)$. \end{cor}
This is sharp. It is shown in \cite{Pansu-Rumin} that $\ell^{q,1}H^k(G)\not=0$ if $q<q(G,k)$. Also, in top degree, not only is $\ell^{q(G,n),1}H^n(G)\not=0$, but the kernel of the averaging map $\ell^{q(G,n),1}H^n(G)\to\mathbb{R}=H^0(\mathfrak{g})^*$ does not vanish. This is in contrast with the results of \cite{BFP2} concerning $\ell^{q,p}H^n(G)$ for $p>1$, where nothing special happens in top degree. The results of \cite{BFP3} rely in an essential manner on analysis of the Laplacian on $L^1$, inaugurated by J. Bourgain and H. Brezis, \cite{Bourgain2007}, adapted to homogeneous groups by S. Chanillo and J. van Schaftingen, \cite{chanillo2009subelliptic}.
\subsection{Methods}
The Euclidean space $\mathbb{R}^n$ is \emph{$n$-parabolic}, meaning that there exist smooth compactly supported functions $\xi$ on $\mathbb{R}^n$ taking value $1$ on arbitrarily large balls, and whose gradient has an arbitrarily small $L^n$ norm. If $\omega$ is a closed $L^1$ form and $\beta$ a constant coefficient form, and if $\omega=d\psi$, $\psi\in L^{n'}$, Stokes theorem gives $$
|\int\xi\omega\wedge\beta|=|\int\psi\wedge d\xi\wedge\beta|\leq \|\psi\|_{n'}\|d\xi\|_{n}\|\beta\|_\infty $$ which can be made arbitrarily small.
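To make the Euclidean case completely explicit, one possible choice of cut-off (a sketch, up to a routine smoothing of the function below) is $$ \xi(x)=\min\Big(1,\max\Big(0,\frac{\log(\lambda R/|x|)}{\log\lambda}\Big)\Big), $$ which equals $1$ on $B(R)$, vanishes outside $B(\lambda R)$, and satisfies $|\nabla\xi|=\frac{1}{|x|\log\lambda}$ on the shell $B(\lambda R)\setminus B(R)$, so that for $n\geq 2$ $$ \|\nabla\xi\|_{n}^{n}=\frac{1}{(\log\lambda)^{n}}\int_{R<|x|<\lambda R}\frac{dx}{|x|^{n}}=\frac{\sigma_{n-1}}{(\log\lambda)^{n-1}}\longrightarrow 0 \quad\text{as }\lambda\to\infty, $$ where $\sigma_{n-1}$ denotes the surface measure of the unit sphere.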
This argument extends to Carnot groups of homogeneous dimension $Q$, which are $Q$-parabolic. For this, one uses Rumin's complex, which has better homogeneity properties under Carnot dilations than de Rham's complex. When Rumin's complex is exactly homogeneous (e.g. for Heisenberg groups in all degrees, only for certain degrees in general), one gets a sharp exponent $q(G,k)$. This leads to Theorem \ref{defined}.
In Euclidean space, every constant coefficient form $\beta$ has a primitive $\alpha$ with linear coefficients (for instance, $dx_1\wedge\cdots\wedge dx_k=d(x_1\,dx_2\wedge\cdots\wedge dx_k)$). On the other hand, there exist cut-offs which decay like the inverse of the distance to the origin. Therefore $$
|\int\xi\omega\wedge\beta|=|\int\omega\wedge d\xi\wedge\alpha|\leq \|\omega\|_{L^1(\text{supp}(d\xi))} $$ which tends to $0$. This argument extends to Heisenberg groups in all but one degree. To complete the proof of Theorem \ref{vanish}, one performs the symmetric integration by parts, integrating $\omega$ instead of $\beta$. For this, one produces primitives of $\omega$ on annuli, of linear growth.
\subsection{Organization of the paper}
In section \ref{cut-off}, the needed cut-offs are constructed. Theorem \ref{defined} is proven in section \ref{descends}. In order to integrate $\omega\wedge\beta$ by parts, one needs to understand the behaviour of wedge products in Rumin's complex on Heisenberg groups; this is performed in section \ref{wedge}. Section \ref{primitives} exploits the linear growth primitives of left-invariant forms. In section \ref{poincare}, controlled primitives of $L^1$ forms are designed, completing the proof of Theorem \ref{vanish}.
\section{Cut-offs on Carnot groups} \label{cut-off}
\subsection{Annuli}
We first construct cut-offs with an $L^\infty$ control on derivatives. In a Carnot group $G$, we fix a subRiemannian metric and denote by $B(R)$ the ball with center $e$ and radius $R$. We fix an orthonormal basis of horizontal left-invariant vector fields $W_1,\ldots,W_{n_1}$. Given a smooth function $u$ on $G$, and an integer $m\in\mathbb{N}$, we denote by $\nabla^m u$ the collection of order $m$ horizontal derivatives $W_{i_1}\cdots W_{i_m}$, $(i_1,\ldots,i_m)\in \{1,\ldots,n_1\}^m$, and by $|\nabla^m u|^2$ the sum of their squares.
\begin{lemma} Let $G$ be a Carnot group. Let $\lambda>1$. There exists $C=C(\lambda)$ such that for all $R>0$, there exists a smooth function $\xi_R$ such that \begin{enumerate}
\item $\xi_R=1$ on $B(R)$.
\item $\xi_R=0$ outside $B(\lambda R)$.
\item For all $m\in\mathbb{N}$, $|\nabla^m\xi_R|\leq C/R^m$. \end{enumerate} \end{lemma}
\begin{proof} We achieve this first when $R=1$, choosing any smooth function $\xi_1$ equal to $1$ on $B(1)$ and to $0$ outside $B(\lambda)$, and then set $\xi_R=\xi_1\circ \delta_{1/R}$; since each horizontal derivative of $\xi_1\circ\delta_{1/R}$ produces a factor $1/R$, property (3) follows. \end{proof} \begin{lemma}\label{nabla} If $f$ is a (vector valued) function which is homogeneous of degree $d\in\mathbb{Z}$ under dilations, then $\nabla f$ is homogeneous of degree $d-1$. \end{lemma} \begin{proof} Given $f\colon G\to\mathbb{R}$ homogeneous of degree $d$ under dilations, we have that $f(\delta_\lambda p)=\lambda^df(p)$.
By applying a horizontal derivative to the left hand side of the equation, namely $\nabla=W_{j}$ with $j\in\lbrace 1,\ldots,n_1\rbrace$, we get \begin{align*}
\nabla [f(\delta_\lambda p)]=W_j[f(\delta_\lambda p)]=df\circ d\delta_\lambda(W_j)_p=df\big(\lambda (W_j)_{\delta_\lambda p}\big)=\lambda\cdot df\big(W_j\big)_{\delta_\lambda p}\,. \end{align*}
If we now apply $\nabla$ to the right hand side, we get \begin{align*}
\nabla[\lambda^df(p)]=W_j[\lambda^df(p)]=\lambda^d\cdot df(W_j)_p\,. \end{align*}
We have therefore proved that $df\big\vert_{\delta_\lambda p}=\lambda^{d-1}\cdot df\big\vert_p$ when restricted to horizontal derivatives, so we finally get our result \begin{align*}
\nabla f(\delta_\lambda p)=\lambda^{d-1}\nabla f(p)\,. \end{align*} \end{proof}
\subsection{Parabolicity}
Second, we construct cut-offs with a sharper $L^Q$ control on derivatives.
Let $r$ be a smooth, positive function on $G\setminus\{e\}$ that is homogeneous of degree 1 under dilations (one could think of a CC-distance from the origin, but smooth) and let us define the following function \begin{align}\label{chi function} \chi(r)=\frac{\log(\lambda R/r)}{\log(\lambda R/R)}=\frac{\log(\lambda R/r)}{\log(\lambda )}\,. \end{align}
One should notice that $\chi(\lambda R)=0$, $\chi(R)=1$, and that $\chi$ is smooth.
\begin{defin}\label{cutoff defin} Using the smooth function $\chi$ introduced in (\ref{chi function}), we can then define the cut-off function $\xi$ as follows \begin{align*} \xi(r)=\begin{cases}
1,& \text{on } B(R)\\
\chi(r), & \text{on } B(\lambda R)\setminus B(R)\\ 0,& \text{outside } B(\lambda R)\,. \end{cases} \end{align*} \end{defin}
\begin{lemma}\label{norms of derivative of xi} The cut-off function $\xi$ defined above has the following property: for every integer $m\in\mathbb{N}$, $\Vert\nabla^m\xi\Vert_{Q/m}\to 0$ as $\lambda\to\infty$. \end{lemma}
\begin{proof}
We compute \begin{align*} \nabla\xi=\begin{cases} -\frac{1}{\log\lambda}\frac{\nabla r}{r}&\text{if }R<r<\lambda R,\\ 0&\text{otherwise.} \end{cases} \end{align*} Let $f$ be the vector valued function $f=\frac{\nabla r}{r}$ on $G\setminus\{e\}$. Then $f$ is homogeneous of degree $-1$. According to Lemma \ref{nabla}, $\nabla^{m-1}f$ is homogeneous of degree $-m$. It follows that \begin{align*} \int_{B(\lambda R)\setminus B(R)}\bigg\vert \nabla^{m-1}f\bigg\vert^{Q/m}&\le C\int_R^{\lambda R}\bigg\vert\frac{1}{r^m}\bigg\vert^{Q/m} r^{Q-1}dr\\ &=C\int_R^{\lambda R}\frac{r^{Q-1}}{r^Q}dr=C\log(\lambda). \end{align*} Therefore $$
\|\nabla^m\xi\|_{Q/m}\leq C\,(\log\lambda)^{-1+(m/Q)} $$ tends to $0$ as $\lambda$ tends to infinity, provided $m<Q$. Let us stress that, in general, for values of $m$ greater than or equal to $Q$, the final limit will not be zero. However, the values of $m$ which we will be considering are very specific.
This estimate will in fact be used in the proof of Proposition \ref{w-j}, and in that setting the degrees $m$ that can arise will be all the possible degrees (or equivalently weights) of the differential $d_c$ on an arbitrary Rumin $k$-form $\phi$ of weight $w$. If we denote by $M$ the maximal $m$ that could arise in this situation, then one can show that $M<Q$.
Let us first assume that the maximal order $M$ for $d_c$ on $k$-forms (which is non-trivial for $0\le k<n$) is greater than or equal to $Q$. Then, given a Rumin $k$-form $\phi$ of weight $w$, we can write $d_c\phi=\sum_{i=1}^M\beta_{w+i}$, where each $\beta_{w+i}$ is a Rumin $(k+1)$-form of weight $w+i$. If we consider the Hodge duals of the $\beta_{w+i}$ with $Q\le i\le M$, these are Rumin $(n-k-1)$-forms of weight $Q-(w+i)=Q-w-i\le Q-w-Q=-w\le 0$, which is impossible.
Therefore $M<Q$, which means that indeed in all the cases that we will take into consideration, the $L^{Q/m}$ norm of $\nabla^m\xi$ will always go to zero as $\lambda\to \infty$.
\end{proof}
\begin{oss} One says that a Riemannian manifold $M$ is \emph{$p$-parabolic} (see \cite{Troyanov}) if for every compact set $K$, there exist smooth compactly supported functions on $M$ taking value $1$ on $K$ whose gradient has an arbitrarily small $L^p$ norm. The definition obviously extends to subRiemannian manifolds.
Lemma \ref{norms of derivative of xi} implies that a Carnot group of homogeneous dimension $Q$ is $Q$-parabolic. \end{oss}
\section{The averaging map in general Carnot groups descends to cohomology} \label{descends}
\begin{defin} Let $G$ be a Carnot group of dimension $n$ and homogeneous dimension $Q$. For $k=1,\ldots,n$, let $\mathcal{W}(k)$ denote the set of weights arising in Rumin's complex in degree $k$. For a Rumin $k$-form $\omega$, let $$ \omega=\sum_{w\in\mathcal{W}(k)} \omega_{w} $$ be its decomposition into components of weight $w$.
Let $d_c=\sum_j d_{c,j}$ be the decomposition of $d_c$ into weights/orders. Let $\mathcal{J}(k,w)$ denote the set of weights/orders $j$ such that $d_{c,j}$ on $k$-forms of weight $w$ is nonzero, in other words, \begin{align*}
\mathcal{J}(k,w):=\lbrace j\in\mathbb{N}\mid d_{c,j}\omega_w\neq 0 \text{ for some }\omega\text{ of degree }k\rbrace\,. \end{align*}
We will denote by $\mathcal{J}(k)$ the set of all the possible weights/orders, that is \begin{align*}
\mathcal{J}(k)=\bigcup_{w\in \mathcal{W}(k)}\mathcal{J}(k,w)\,. \end{align*}
Let us define
$L^{\chi(k)}$ as follows \begin{align*}
L^{\chi(k)}=\lbrace \phi=\sum_{w\in\mathcal{W}(k-1)}\phi_w\in E_0^{k-1}\mid \forall j\in\mathcal{J}(k-1,w)\,,\phi_{w}\in L^{Q/(Q-j)}\rbrace \end{align*} and if $\mathcal{J}(k-1,\hat{w})=\emptyset$ for some $\hat{w}$, then we don't require anything on $\phi_{\hat{w}}$. \end{defin}
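Anticipating the Heisenberg examples below: in $\mathbb{H}^{2m+1}$ (where $Q=2m+2$ and each $\mathcal{W}(k-1)$ is a singleton), the space $L^{\chi(k)}$ simply consists of the Rumin $(k-1)$-forms in $L^{Q/(Q-1)}$ when $k\neq m+1$, and of those in $L^{Q/(Q-2)}$ when $k=m+1$, since $\mathcal{J}(k-1)=\{1\}$ for $k-1\neq m$ and $\mathcal{J}(m)=\{2\}$.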
\begin{lemma} Let $G$ be a Carnot group. Fix a left-invariant subRiemannian metric making the direct sum $\mathfrak{g}=\bigoplus \mathfrak{g}_i$ orthogonal. The $L^2$-adjoint $d_c^*$ of $d_c$ is a differential operator. Fix a degree $k$. Let $$ d_c=\sum_{j\in\mathcal{J}(k)} d_{c,j} $$ be the decomposition of $d_c$ into weights ($d_{c,j}$ increases weights of Rumin forms by $j$, hence it has horizontal order $j$). Then the decomposition of $d_c^*$ on a $(k+1)$-form into weights/orders is $$ d_c^*=\sum_{j\in\mathcal{J}^\ast(k)} d_{c,j}^*. $$
In other words, the adjoint $d_{c,j}^*$ of $d_{c,j}$ decreases weights by $j$ and has horizontal order $j$. In fact, if we denote by $\mathcal{J}^*(k+1,\tilde{w})$ the set of weights/orders $j$ such that $d_{c,j}^*$ on $(k+1)$-forms of weight $\tilde{w}$ is non-zero, that is \begin{align*}
\mathcal{J}^\ast(k+1,\tilde{w})=\lbrace j\in\mathbb{N}\mid d_{c,j}^*\alpha_{\tilde{w}}\neq 0\text{ for some }\alpha\text{ of degree }k+1\rbrace\,, \end{align*} then there is a clear relationship between the sets of indices $\mathcal{J}(k,w)$ and $\mathcal{J}^*(k+1,\tilde{w})$, namely \begin{align*}
\mathcal{J}^\ast(k+1,\tilde{w})=\bigcup_{w\in\mathcal{W}(k)}\lbrace j\in\mathcal{J}(k,w)\mid w+j=\tilde{w}\rbrace\,. \end{align*}
And from this relationship, we get directly the following identity: \begin{align*}
\mathcal{J}^*(k+1)=\bigcup_{\tilde{w}\in\mathcal{W}(k+1)}\mathcal{J}^*(k+1,\tilde{w})=\mathcal{J}(k)\,. \end{align*}
Moreover, since the formula $d_c^\ast=(-1)^{n(k+1)+1}\ast d_c\ast$ applies to any Rumin $k$-form, we also have the equality $\mathcal{J}^\ast(n-k,Q-w)=\mathcal{J}(k,w)$.
{Let us stress that this also implies $\mathcal{J}(k)=\mathcal{J}^\ast(n-k)=\mathcal{J}(n-k-1)$}.
\end{lemma}
\begin{prop} \label{w-j} If $\omega$, $\phi$, $\beta$ are Rumin forms with $\omega\in L^1$ of degree $k$, $\beta$ left-invariant of complementary degree $n-k$, $\phi\in L^{\chi(k)}$ and $d_c \phi=\omega$, then $$ \int_G \omega\wedge\beta=0. $$ \end{prop}
\textbf{Proof}. Without loss of generality, one can assume that $\beta$ has pure weight $Q-w$ for some $w\in\mathcal{W}(k)$. Then its Hodge-star $\ast\beta$ has pure weight $w$. Let $\xi$ be a smooth cut-off. By definition of the $L^2$-adjoint, we have $$ \int_{G}(d_c\phi)\wedge\xi\beta=\int_{G}\langle d_c\phi ,\ast\xi\beta\rangle\,dvol=\int_{G}\langle\phi,d_c^*(\ast\xi\beta)\rangle\,dvol=\sum_{j\in\mathcal{J}^*(k,w)}\int_{G}\langle\phi_{w-j},d_{c,j}^*(\ast\xi\beta)\rangle\,dvol. $$
Since $\phi\in L^{\chi(k)}$, for any $j\in\mathcal{J}^\ast(k,w)$ we have $\phi_{w-j}\in L^{Q/(Q-j)}$, by definition of $L^{\chi(k)}$. Hence, applying H\"older's inequality, we obtain $$
|\int_{G}\omega\wedge\xi\beta|\leq \sum_{j\in\mathcal{J}^*(k,w)}\|\phi_{w-j}\|_{Q/(Q-j)}\|\nabla^j\xi\|_{Q/j}\|\beta\|_{\infty}. $$
It is therefore sufficient to take $\xi$ as the cut-off function introduced in Definition \ref{cutoff defin}, so that by Lemma \ref{norms of derivative of xi} we get that $\int_{G}\omega\wedge\beta=0$.
\begin{ese} Euclidean space $\mathbb{R}^n$. Then $\mathcal{W}(k)=\{k\}$ and $\mathcal{J}(k)=\{1\}$ in all degrees. Theorem \ref{w-j} states that the averaging map descends to $L^{q,1}$-cohomology, where $q=\frac{n}{n-1}$. \end{ese}
\begin{ese} Heisenberg groups $\mathbb{H}^{2m+1}$. We have that $\mathcal{W}(k)=\{k\}$ for $k\le m$, and $\mathcal{W}(k)=\{k+1\}$ when $k\geq m+1$. $\mathcal{J}(k)=\{1\}$ in all degrees but $k=m$, where $\mathcal{J}(m)=\lbrace 2\rbrace$, so that Theorem \ref{w-j} states that the averaging map descends to $L^{q,1}$-cohomology, where $q=\frac{Q}{Q-2}$ in degree $m+1$ and $q=\frac{Q}{Q-1}$ in all other degrees. \end{ese}
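As a consistency check, these weights fit with Hodge duality: a Rumin $k$-form with $k\le m$ has weight $k$, and its Hodge dual has degree $2m+1-k\ge m+1$ and weight $(2m+1-k)+1=Q-k$, in agreement with the rule that the Hodge star exchanges weights $w$ and $Q-w$ used in the proof of Proposition \ref{w-j}.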
\subsection{Link with $\ell^{q,1}$ cohomology}
Let $1\leq p\leq q\leq\infty$. According to Theorem 3.3 of \cite{Pansu-Rumin}, every $\ell^{q,p}$ cohomology class of a Carnot group contains a form $\omega$ which belongs to $L^p$ as well as an arbitrary finite number of its derivatives. If the class vanishes, then there exists a primitive $\phi$ of $\omega$ which belongs to $L^q$ as well as an arbitrary finite number of its derivatives. There exists a homotopy between de Rham and Rumin's complex given by differential operators, therefore the same statement applies to Rumin's complex. In particular, Rumin forms can be used to compute $\ell^{q,p}$ cohomology.
Let $\omega$ be a Rumin $k$-form which belongs to $L^1$ as well as a large number of its horizontal derivatives. Assume that $\omega$ represents the trivial cohomology class. Then there exists a Rumin $(k-1)$-form $\phi$ which belongs to $L^q$ as well as its horizontal derivatives up to order $Q$, and such that $d_c\phi=\omega$. By Sobolev's embedding theorem, $\phi$ belongs to $L^\infty$, hence to $L^{q'}$ for all $q'\geq q$. This suggests the following notation.
\begin{defin} \label{jq} Let $G$ be a Carnot group of dimension $n$ and homogeneous dimension $Q$. Let $1\leq k\leq n$. Define $$ j(k)=\min\bigcup_{w\in \mathcal{W}(k)} \mathcal{J}(k-1,w). $$ and $$ q(G,k):=\frac{Q}{Q-j(k)}. $$ \end{defin}
\textbf{Proof of Theorem \ref{defined}}. \begin{proof}
Let $G$ be a Carnot group. Let $\omega$ be a Rumin $k$-form on $G$, which belongs to $L^1$ as well as a large number of its derivatives. Assume that $\omega=d_c\phi$ with $\phi\in L^{q(G,k)}$. Then $$ \forall w\in \mathcal{W}(k),\,\forall j\in\mathcal{J}(k-1,w),\quad \phi_{w-j} \in L^{Q/(Q-j)}, $$ therefore $\phi\in L^{\chi(k)}$. Proposition \ref{w-j} implies that averages $\int\omega\wedge\beta$ vanish. This completes the proof of Theorem \ref{defined}.
\end{proof} \begin{ese} Euclidean space. Then $j(k)=1$ in all degrees. \end{ese}
\begin{ese} Heisenberg groups $\mathbb{H}^{2m+1}$. Then $j(k)=1$ in all degrees but $k=m+1$, where $j(m+1)=2$. \end{ese} For these examples, as we saw before, one need not invoke \cite{Pansu-Rumin} since $\mathcal{J}(k)$ has only one element in each degree.
\begin{ese} Engel group $E^4$. Then $j(k)=1$ in degrees $1$ and $4$, $j(k)=2$ in degrees $2$ and $3$. One concludes that the averaging map is well-defined in $\ell^{q,1}$ cohomology for $q\leq\frac{Q}{Q-1}$ in degrees $1$ and $4$, and for $q\leq\frac{Q}{Q-2}$ in degrees $2$ and $3$. Here, $Q=7$. \end{ese}
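Explicitly, plugging $Q=7$ into Definition \ref{jq} gives for the Engel group $q(E^4,1)=q(E^4,4)=\frac{7}{6}$ and $q(E^4,2)=q(E^4,3)=\frac{7}{5}$.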
\subsection{Results for Heisenberg groups $\mathbb{H}^{2m+1}$}
In \cite{BFP3}, it is proven that every closed $L^1$ $k$-form, $k\leq 2m$, whose averages $\int\omega\wedge \beta$ vanish, is the differential of a form in $L^q$, where $q=q(k)=\frac{Q}{Q-1}$, except when $k=m+1$, in which case $q(m+1)=\frac{Q}{Q-2}$. In other words,
\begin{theorem}[\cite{BFP3}] Let $G=\mathbb{H}^{2m+1}$ and let $k=1,\ldots,2m$. The averaging map $L^{q(k),1}H^k(G)\to H^{2m+1-k}(\mathfrak{g})^*$ is injective. \end{theorem}
The goal of subsequent sections is to prove that the image of averaging map is $0$ in all degrees $k\leq 2m$. This will prove that $L^{q(k),1}H^k(G)=0$. Note that for $k=2m+1$, both facts fail: the averaging map is not zero (one can check with compactly supported forms) and it is not injective either (see \cite{BFP3}).
\section{Wedge products between Rumin forms in Heisenberg groups} \label{wedge}
We shall rely on Stokes formula on Heisenberg groups $\mathbb{H}^{2m+1}$. We need a formula of the form $d(\phi\wedge\beta)=(d_c\phi)\wedge\beta\pm \phi\wedge d_c\beta$. Such a formula does not hold in general for Carnot groups. In fact, the complex of Rumin forms $E_0^\bullet$ can be identified with the Lie algebra cohomology $H^\bullet(\mathfrak{g})$, and therefore carries a natural cup product induced by the wedge product, which in general differs from the wedge product itself.
Let us take into consideration the original construction of the Rumin complex in the $(2m+1)$-dimensional Heisenberg group $\mathbb{H}^{2m+1}$ as appears in \cite{rumin1994}.
Given $\Omega^\bullet$ the algebra of smooth differential forms, one can define the following two differential ideals: \begin{itemize} \item $\mathcal{I}^\bullet:=\lbrace \alpha=\gamma_1\wedge\tau+\gamma_2\wedge d\tau\rbrace$, the differential ideal generated by the contact form $\tau$, and \item $\mathcal{J}^\bullet:=\lbrace\beta\in\Omega^\bullet\mid\beta\wedge\tau=\beta\wedge d\tau=0\rbrace$. \end{itemize}
\begin{oss}\label{wedge between ideals} By construction, the ideal $\mathcal{J}^\bullet$ is in fact the annihilator of $\mathcal{I}^\bullet$. In other words, given two arbitrary forms $\alpha\in\mathcal{J}^\bullet$ and $\beta\in\mathcal{I}^\bullet$, we have $\alpha\wedge\beta$=0. \end{oss}
One can quickly check that the subspaces $\mathcal{J}^h=\mathcal{J}^\bullet\cap\Omega^h$ are non-trivial for $h\ge m+1$, whereas the quotients $\Omega^h/\mathcal{I}^h$ are non-trivial for $h\le m$, where $\mathcal{I}^h=\mathcal{I}^\bullet\cap\Omega^h$.
Moreover, the usual exterior differential descends to the quotients $\Omega^\bullet/\mathcal{I}^\bullet$ and restricts to the subspaces $\mathcal{J}^\bullet$ as first order differential operators: \begin{align*} d_c:\Omega^\bullet/\mathcal{I}^\bullet\to\Omega^\bullet/\mathcal{I}^\bullet\;\text{ and }\;d_c:\mathcal{J}^\bullet\to\mathcal{J}^\bullet\,. \end{align*}
In \cite{rumin1994} Rumin then defines a second order linear differential operator \begin{align*} d_c:\Omega^m/\mathcal{I}^m\to\mathcal{J}^{m+1} \end{align*} which connects the non-trivial quotients $\Omega^\bullet/\mathcal{I}^\bullet$ with the non-trivial subspaces $\mathcal{J}^\bullet$ into a complex, that is $d_c\circ d_c=0$, \begin{align*} \Omega^0/\mathcal{I}^0\xrightarrow{d_c}\Omega^1/\mathcal{I}^1\xrightarrow{d_c}\cdots\xrightarrow{d_c}\Omega^m/\mathcal{I}^m\xrightarrow{d_c}\mathcal{J}^{m+1}\xrightarrow{d_c}\mathcal{J}^{m+2}\xrightarrow{d_c}\cdots\xrightarrow{d_c}\mathcal{J}^{2m+1}\,. \end{align*}
\begin{prop} \label{dc of Wedge Product} In $\mathbb{H}^{2m+1}$, the wedge product of Rumin forms is well-defined and satisfies the Leibniz rule \begin{align*} d_c(\alpha\wedge\beta)=d_c\alpha\wedge\beta+(-1)^h\alpha\wedge d_c\beta \end{align*} if either $h\ge m+1$ or $k\ge m+1$ or $h+k< m$, where $h=\deg(\alpha)$ and $k=\deg(\beta)$. \end{prop}
\begin{proof} In order to study whether the wedge product between Rumin forms is well-defined, we will consider this differential operator $d_c$ in the following two cases: \begin{itemize} \item [i.] $d_c:\Omega^h/\mathcal{I}^h\to\Omega^{h+1}/\mathcal{I}^{h+1}$ where $h <m$, \item[ii.] $d_c:\mathcal{J}^h\to\mathcal{J}^{h+1}$ where $h>m$. \end{itemize} Let us first stress that in the first case, given $\alpha\in\Omega^h/\mathcal{I}^h$, we have that \begin{align*} d_c\alpha=d\alpha \,\mod\;\mathcal{I}^{h+1}\;\text{ for }\;h<m\,. \end{align*} Since $\mathcal{I}$ is an ideal, if $h+k\leq m$, $\alpha\wedge\beta\in \Omega^{h+k}/\mathcal{I}^{h+k}$ is well defined.
If $h+k<m$, the identity $d(\alpha\wedge\beta)=(d\alpha)\wedge\beta+(-1)^h\alpha\wedge d\beta$ passes to the quotient.
It is important to notice that, however, if $h+k=m$, $h>0$, $k>0$, $d_c\alpha\wedge\beta+(-1)^h\alpha\wedge d_c\beta$ involves only first derivatives of $\alpha$ and $\beta$, and thus cannot be equal to $d_c(\alpha\wedge\beta)$. If $h=0$ and $k=m$, $d_c(\alpha\wedge\beta)$ involves second derivatives of $\alpha$, and therefore cannot be expressed in terms of $d_c\alpha$.
On the other hand, in the second case, given $\beta\in\mathcal{J}^h$, the Rumin differential coincides with the usual exterior differential, \begin{align*} d_c\beta=d\beta\;\text{ for }\;h>m\,. \end{align*}
Therefore, given $\alpha\in\Omega^{h}/\mathcal{I}^{h}$ and $\beta\in\mathcal{J}^{k}$ with $h<m$ and $k>m$, the wedge product $\alpha\wedge\beta$ is well-defined and belongs to $\mathcal{J}^{h+k}$, and the usual Leibniz rule also applies: \begin{align*} d_c(\alpha\wedge\beta)=d(\alpha\wedge\beta)=(d\alpha)\wedge\beta+(-1)^h\alpha\wedge (d\beta)=d_c\alpha\wedge\beta+(-1)^h\alpha\wedge d_c\beta\,. \end{align*} If $h=m$ and $k\geq m+1$, $h+k\geq 2m+1$, so the identity between differentials holds trivially.
To conclude, the wedge product of Rumin forms is well defined and satisfies the Leibniz rule $d_c(\alpha\wedge\beta)=d_c\alpha\wedge\beta+(-1)^h\alpha\wedge d_c\beta$ if either $h\ge m+1$, or $k\ge m+1$, or $h+k<m$.
\end{proof}
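A special case worth recording, since it is the one used with cut-off functions in the following sections: for $h=0$, i.e. $\alpha=\xi$ a smooth function, and $\beta\in\mathcal{J}^{k}$ with $k\ge m+1$, the rule reads $d_c(\xi\beta)=d_c\xi\wedge\beta+\xi\, d_c\beta$; in particular, if $\beta$ is $d_c$-closed, then $d_c(\xi\beta)=d_c\xi\wedge\beta$.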
\section{Averages on Heisenberg group: generic case} \label{primitives}
\subsection{Primitives of linear growth}
\begin{lemma}
Let $\beta$ be a left-invariant Rumin $h$-form in the Heisenberg group $\mathbb{H}^{2m+1}$. If $h\not=m+1$, $\beta$ admits a primitive $\alpha$ of linear growth, i.e. at Carnot-Carath\'eodory distance $r$ from the origin, $|\alpha|\leq C\,r$. \end{lemma}
\begin{proof} Let $\beta\in E_0^{h}$ be a left-invariant form. Then $d_c\beta=0$, and $\beta$ has weight $w=h$ (if $h\leq m$) or $h+1$ (if $h>m+1$). We know that the Rumin complex is locally exact, that is $\exists\, \alpha\in E_0^{h-1}$ such that $d_c\alpha=\beta$.
Let us consider the Taylor expansion of $\alpha$ at the origin in exponential coordinates, and let us group terms according to their homogeneity under dilations $\delta_t$: \begin{align*}
\alpha=\alpha_0+\cdots+\alpha_{w-1}+\alpha_w+\alpha_{w+1}+\cdots\, \end{align*} where we denote by $\alpha_d$ the term with homogeneous degree d, i.e. $\delta_t^\ast\alpha_d=t^d\alpha_d$.
Since $d_c$ commutes with the dilations $\delta_\lambda$, the expansion of $d_c\alpha$ is therefore \begin{align*}
d_c\alpha=d_c\alpha_0+\cdots+d_c\alpha_{w-1}+d_c\alpha_w+d_c\alpha_{w+1}+\cdots\,. \end{align*}
Since $\beta$ is left-invariant, it is homogeneous of degree $w$ under dilations, so its expansion reduces to the single term of degree $w$; comparing the two expansions term by term gives $\beta=d_c\alpha_w$.
Let us notice that $\alpha_w$ has degree $h-1$ and $h\not=m+1$, so it has weight $w-1$. Since it is homogeneous of degree $w$ under $\delta_\lambda$, its coefficients are homogeneous of degree $1$, that is, they are linear in the horizontal coordinates; hence $\alpha_w$ has linear growth, that is, $\vert\alpha_w\vert\le C\,r$. \end{proof}
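A minimal example in $\mathbb{H}^{3}$ (written in exponential coordinates $(x,y,t)$ of weights $1,1,2$): the left-invariant Rumin $1$-form $\beta=dx$ has weight $1$ and admits the primitive $\alpha=x$, and since the horizontal coordinate satisfies $|x|\le C\,r$, this primitive indeed has linear growth.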
\begin{prop} Given $\omega\in E_0^{k}$ an $L^{1}$, $d_c$-closed Rumin form in $\mathbb{H}^{2m+1}$, then the integral \begin{align*} \int_{\mathbb{H}^{2m+1}}\omega\wedge\beta \end{align*} vanishes for all left-invariant Rumin forms $\beta$ of complementary degree, $\beta\in E_0^{2m+1-k}$, provided $k\not=m$. \end{prop}
\begin{proof}
Let $\omega$ be an $L^1$, $d_c$-closed Rumin $k$-form, $k\not=m$. Let $\beta$ be a left-invariant Rumin $h$-form, with $h=2m+1-k\neq m+1$. Let $\alpha$ be a linear growth primitive of $\beta$, $\vert\alpha\vert\le {C}\,{R}$. Let $\xi$ be a smooth cut-off such that $\xi=1$ on $B(R)$, $\xi=0$ outside $B(\lambda R)$ and $|d_c\xi|\leq C'/R$. Since, according to Proposition \ref{dc of Wedge Product}, \begin{align*}
d(\xi\omega\wedge\alpha)=d_c(\xi\omega\wedge\alpha)
&= d_c(\xi\omega)\wedge\alpha+(-1)^{k}\xi\omega\wedge d_c\alpha\\
&=d_c\xi\wedge\omega\wedge\alpha+(-1)^{k}\xi\omega\wedge\beta, \end{align*} Stokes formula gives \begin{align*} \bigg\vert\int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta\bigg\vert &=\bigg\vert\int_{B(\lambda R)\setminus B(R)}d_c\xi\wedge\omega\wedge\alpha\bigg\vert\\ &\le\int_{B(\lambda R)\setminus B(R)}\vert d_c\xi\vert\vert\alpha\vert\vert\omega\vert\\ &\le \lambda CC'\Vert\omega\Vert_{L^1(\mathbb{H}^{2m+1}\setminus B(R))}. \end{align*} On the other hand, \begin{align*} \bigg\vert\int_{\mathbb{H}^{2m+1}}(1-\xi)\omega\wedge\beta\bigg\vert &\le \Vert \beta\Vert_\infty \Vert\omega\Vert_{L^1(\mathbb{H}^{2m+1}\setminus B(R))}. \end{align*} Both terms tend to $0$ as $R$ tends to infinity, while their sum equals $\int_{\mathbb{H}^{2m+1}}\omega\wedge\beta$, which does not depend on $R$; thus $\int_{\mathbb{H}^{2m+1}}\omega\wedge\beta=0$.
This proves Theorem \ref{vanish} except in degree $k=m$. The argument collapses in this case, since primitives of left-invariant $(m+1)$-forms have at least quadratic growth.
\section{Averages on Heisenberg group: special case} \label{poincare}
We now describe a symmetric argument: produce a primitive of the $L^1$ form $\omega$ with linear growth. It applies for all degrees but $m+1$, and so covers the special case $k=m$.
Since $\omega$ is not in $L^\infty$ but is $L^1$, linear growth needs to be taken in the $L^1$ sense: the $L^1$ norm of the primitive in a shell of radius $R$ is $O(R)$. It is not necessary to produce a global primitive with this property. It is sufficient to produce such a primitive $\phi_R$ in the $R$-shell $B(\lambda R)\setminus B(R)$. Indeed, Stokes formula leads to an integral $$ \int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta=\pm\int_{B(\lambda R)\setminus B(R)}d_c\xi\wedge\phi\wedge\beta $$ which does not depend on the choice of primitive $\phi$.
\subsection{$\ell^{q,1}$ cohomology of bounded geometry Riemannian and subRiemannian manifolds} \label{discretization}
By definition, the $\ell^{q,p}$ cohomology of a bounded geometry Riemannian manifold is the $\ell^{q,p}$ cohomology of every bounded geometry simplicial complex quasiisometric to it. For instance, of a bounded geometry triangulation.
Combining results of \cite{BFP3} and Leray's acyclic covering theorem (in the form described in \cite{pansu2017cup}), one gets that for $q=\frac{n}{n-1}$, the $\ell^{q,1}$-cohomology of a bounded geometry Riemannian $n$-manifold $M$ is isomorphic to the quotient $$ L^{q,1}H^\cdot(M)=L^1(M)\cap ker(d)/(L^1\cap dL^q(M)) $$ of closed forms in $L^1$ by differentials of forms in $L^q$. In particular, if $M$ is compact, for all $p\leq \frac{n}{n-1}$, $L^{p,1}H^\cdot(M)$ is isomorphic to the usual (topological) cohomology of $M$.
Similarly, if $M$ is a bounded geometry contact subRiemannian manifold of dimension $2m+1$ (hence Hausdorff dimension $Q=2m+2$), for $q=Q/(Q-1)$ (respectively $q=Q/(Q-2)$ in degree $m+1$), the $\ell^{q,1}$ cohomology of $M$ is isomorphic to the quotient $$ L_c^{q,1}H^\cdot(M)=L^1(M)\cap ker(d_c)/(L^1\cap d_c L^q(M)) $$ of $d_c$-closed Rumin forms by $d_c$'s of Rumin forms in $L^q$.
This applies in particular to Heisenberg groups $\mathbb{H}^{2m+1}$, and also to shells in Heisenberg groups, but with a loss on the width of the shell.
\subsection{$L^1$-Poincar\'e inequality in shell $B(\lambda )\setminus B(1)$}
\begin{lemma}\label{shell} There exist radii $0<\mu<1<\lambda<\mu'$ such that every $d_c$-exact $L^1$ Rumin $k$-form $\omega$ on $B(\mu')\setminus B(\mu)$ admits a primitive $\phi$ on $B(\lambda)\setminus B(1)$ such that \begin{align*} \Vert\phi\Vert_{L^1(B(\lambda)\setminus B(1))}\le C\cdot\Vert \omega\Vert_{L^1(B(\mu')\setminus B(\mu))}. \end{align*} \end{lemma}
In Euclidean space, the analogous statement can be proved as follows. Up to a biLipschitz change of coordinates, one replaces shells with products $[0,1]\times S^{n-1}$. On such a product, a differential form writes $\omega=a_t+dt\wedge b_t$ where $a_t$ and $b_t$ are differential forms on $S^{n-1}$. $\omega$ is closed if and only if each $a_t$ is closed and $$ \frac{\partial a_t}{\partial t}=d_{S^{n-1}}b_t. $$
Given $r\in[0,1]$, define $$ \phi_r=e_t +dt\wedge f_t\quad \mbox{ where }\quad e_t=\int_r^t b_s\,ds,\quad f_t=0. $$ Then $d\phi_r=\omega-a_r$. Set $$ \phi=\int_0^1 \phi_r \,dr, \quad \mbox{ so that }\quad d\phi=\omega-\bar\omega\quad \mbox{ where }\quad \bar\omega=\int_0^1 a_r\,dr. $$ Note that each $a_r$, and hence $\bar\omega$, is an exact form on $S^{n-1}$. Since $$ \Vert\bar\omega\Vert_{L^1(S^{n-1})}\leq \Vert\omega\Vert_{L^1([0,1]\times S^{n-1})}, $$ according to Subsection \ref{discretization}, there exists a form $\bar\phi$ on $S^{n-1}$ such that $d\bar\phi=\bar\omega$ and $$ \Vert\bar\phi\Vert_{L^1(S^{n-1})}\leq C\,\Vert\bar\omega\Vert_{L^1(S^{n-1})}. $$ Hence $\phi+\bar\phi$ is the required primitive.
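For completeness, here is the short verification of the identity $d\phi_r=\omega-a_r$, using the closedness relation $\partial a_t/\partial t=d_{S^{n-1}}b_t$: $$ d\phi_r=d_{S^{n-1}}e_t+dt\wedge\frac{\partial e_t}{\partial t}=\int_r^t d_{S^{n-1}}b_s\,ds+dt\wedge b_t=\int_r^t \frac{\partial a_s}{\partial s}\,ds+dt\wedge b_t=a_t-a_r+dt\wedge b_t=\omega-a_r. $$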
The Heisenberg group case reduces to the Euclidean case thanks to a smoothing homotopy constructed in \cite{BFP3}. In fact, since $\phi$ merely needs be estimated in $L^1$ norm (and not in the sharp $L^q$ norm), only the first, elementary, steps of \cite{BFP3} are required, resulting in the following result.
\begin{lemma}\label{homotopyshell} For every radii $\mu<1<\lambda<\mu'$, there exists a constant $C$ with the following property. For every $d_c$-exact $L^1$ Rumin form $\omega$ on the large shell $B(\mu')\setminus B(\mu)$ of $\mathbb{H}^{2m +1}$, there exist $L^1$ Rumin forms $T\omega$ and $S\omega$ on the smaller shell $B(\lambda)\setminus B(1)$ such that $\omega=d_c T\omega+S\omega$ on the smaller shell, $$ \Vert T\omega\Vert_{L^1(B(\lambda)\setminus B(1))}+\Vert S\omega\Vert_{W^{1,1}(B(\lambda)\setminus B(1))} \leq C\,\Vert\omega\Vert_{L^1(B(\mu')\setminus B(\mu))}. $$ Here, the $W^{1,1}$ norm refers to the $L^1$ norms of the first horizontal derivatives. \end{lemma} \begin{proof} Pick a smooth function $\chi_1$ with compact support in the large shell $A$. According to Lemma 6.2 of \cite{BFP3}, there exists a left-invariant pseudodifferential operator $K$ such that the identity $$ \chi_1=d_c K\chi_1+Kd_c\chi_1 $$ holds on the space of Rumin forms $$ L^1\cap d_c^{-1}L^1:=\{\alpha\in L^1(A)\,;\,d_c\alpha\in L^1(A)\}. $$ $K$ is the operator of convolution with a kernel $k$ of type $1$ (resp. $2$ in degree $n+1$). Using a cut-off, write $k=k_1+k_2$ where $k_1$ has support in an $\epsilon$-ball and $k_2$ is smooth. Since $k_1=O(r^{1-Q})$ or $O(r^{2-Q})\in L^1$, the operator $K_1$ of convolution with $k_1$ is bounded on $L^1$. Hence $T=K_1 \chi$ is bounded on $L^1$ forms defined on $A$. Whereas $S=d_c K_2\chi_1$ is bounded from $L^1$ to $W^{s,1}$ for every integer $s$. If $\mu'>\lambda+2\epsilon$ and $\mu<1-2\epsilon$, the multiplication of $\omega$ by $\chi_1$ has no effect on the restriction of $d_c K_1\omega$ or $K_1 d_c \omega$ to the smaller shell, hence, in restriction to the smaller shell, $$ d_c T\omega=d_c K_1\chi_1\omega=(d_c K_1+K_1 d_c) \omega. $$ It follows that $$ S\omega =d_c K_2\chi_1\omega=(d_c K_2+K_2 d_c) \omega, $$ and finally, in restriction to the smaller shell, $$ \omega=d_c T\omega+S\omega. $$ \end{proof} \textbf{Proof of Lemma \ref{shell}}.
\begin{proof} Using the exponential map, one can use simultaneously Heisenberg and Euclidean tools. Pick $\lambda,\mu,\mu'$ such that the medium Heisenberg shell $B(\mu'-2\epsilon)\setminus B(\mu+2\epsilon)$ contains the Euclidean shell $A_{eucl}=B_{eucl}(2)\setminus B_{eucl}(1)$, which in turn contains the smaller Heisenberg shell $B(\lambda)\setminus B(1)$. Apply Lemma \ref{homotopyshell} to a $d_c$-closed $L^1$ form $\omega$ defined on the larger Heisenberg shell. Up to $d_c T\omega$, and up to restricting to the medium shell, one can replace $\omega$ with $S\omega$ which has its first horizontal derivatives in $L^1$. Apply Rumin's homotopy $\Pi_E=1-dd_0^{-1}-d_0^{-1}d$ to get a usual $d$-closed differential form $\beta=\Pi_E S\omega$ belonging to $L^1$. Use the Euclidean version of Lemma \ref{shell} to get an $L^1$ primitive $\gamma$, $d\gamma=\beta$, on the Euclidean shell $A_{eucl}$. Apply the order zero homotopy $\Pi_{ E_0}=1-d_0 d_0^{-1}-d_0^{-1}d_0$ to get a Rumin form $\phi=\Pi_{ E_0}\gamma$. Its restriction to the smaller Heisenberg shell satisfies $d_c\phi=\omega$ and its $L^1$ norm is controlled by $\Vert\omega\Vert_1$. \end{proof} \subsection{$L^1$-Poincar\'e inequality in scaled shell $B(\lambda R)\setminus B(R)$}\label{Exact omega on the shell}
Let $0<\mu<1<\lambda<\mu'$. Let $\omega$ be a Rumin $k$-form on the scaled annulus $B(\mu' R)\setminus B(\mu R)$. Assume that there exists a Rumin $(k-1)$-form $\phi$ on the thinner shell on $B(\lambda R)\setminus B(R)$ such that $\omega=d_c\phi$ on that shell.
Let's denote the dilation by $R$ as \begin{align*} \delta_R:B(\lambda)\setminus B(1)\to B(\lambda R)\setminus B(R) \end{align*} then we can consider the pull-back of both forms: \begin{itemize} \item $\omega_R:=\delta^\ast_R(\omega)$ on $B(\lambda)\setminus B(1)$, and \item $\phi_R:=\delta^\ast_R(\phi)$ on $B(\lambda)\setminus B(1)$. \end{itemize}
Since $\delta_R^\ast$ commutes with the Rumin differential $d_c$, we have \begin{align*} \omega_R=\delta_R^\ast(\omega)=\delta_R^\ast(d_c\phi)=d_c(\delta_R^\ast\phi)=d_c\phi_R\,. \end{align*} Then, for $\omega_R$ we have \begin{align*} \Vert\delta_R^\ast\omega\Vert_{L^1(B(\lambda)\setminus B(1))}=&\int_{B(\lambda)\setminus B(1)}\vert\omega(\delta_R(x))\vert\cdot R^wdx\underbrace{=}_{y=\delta_R(x)}R^w\int_{B(\lambda R)\setminus B(R)}\vert\omega\vert(y)\cdot\frac{1}{R^{Q}}dy\\=&R^{w-Q}\int_{B(\lambda R)\setminus B(R)}\vert\omega\vert(y)dy=R^{w-Q}\cdot \Vert\omega\Vert_{L^1(B(\lambda R)\setminus B(R))}\,, \end{align*} so that \begin{align}\label{omegaL1norm} \Vert\omega_R\Vert_{L^1(B(\lambda)\setminus B(1))}=R^{w-Q}\Vert\omega\Vert_{L^1(B(\lambda R)\setminus B(R))} \end{align} where $w$ is the weight of the $k$-form $\omega$, and the factor $R^{-Q}$ comes from the Jacobian of the dilation $\delta_R$ (the Haar measure scales by $R^Q$).
Likewise, for the $(k-1)$-form $\phi$ we get \begin{align}\label{phiL1norm} \Vert\delta_R^\ast\phi\Vert_{L^1(B(\lambda)\setminus B(1))}=R^{\tilde{w}-Q}\Vert\phi\Vert_{L^1(B(\lambda R)\setminus B(R))}\,, \end{align} where in this case $\tilde{w}$ is the weight of the form $\phi$.
Since we are working on $\mathbb{H}^{2m+1}$ and $\omega=d_c\phi$ (and likewise $\omega_R=d_c\phi_R$), we have \begin{itemize} \item $\tilde{w}=w-1$, if $k\neq m+1$, and \item $\tilde{w}=w-2$, if $k=m+1$. \end{itemize}
According to Lemma \ref{shell}, one can find a $(k-1)$-form $\phi_R$ on $B(\lambda)\setminus B(1)$ such that \begin{align*} \Vert\phi_R\Vert_{L^1(B(\lambda)\setminus B(1))}\le C\cdot\Vert \omega_R\Vert_{L^1(B(\mu')\setminus B(\mu))} \end{align*} so, using the equalities (\ref{omegaL1norm}) and (\ref{phiL1norm}) we get the following inequality: \begin{align*} \Vert\phi\Vert_{L^1(B(\lambda R)\setminus B(R))}\le C\cdot R^{w-\tilde{w}}\Vert \omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))} \end{align*} which divides into the following two cases \begin{itemize} \item $\Vert\phi\Vert_{L^1(B(\lambda R)\setminus B(R))}\le C\cdot R\cdot\Vert \omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}$ if $k\neq m+1$, and \item $\Vert\phi\Vert_{L^1(B(\lambda R)\setminus B(R))}\le C\cdot R^2\cdot\Vert \omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}$ if $k=m+1$. \end{itemize} Only the first case is useful for our purpose.
\subsection{Independence on the choice of primitive}
Let us consider the exact Rumin form $\omega\in E_0^{k}$ and let $\phi,\,\psi\in E_0^{k-1}$ be two primitives of $\omega$ on $B(\lambda R)\setminus B(R)$, i.e. $d_c\psi=d_c\phi=\omega$. Let $\beta$ be an arbitrary left-invariant Rumin form $\beta$ of complementary degree $2m+1-k$.
If $k\not=2m$, then $H^{k}(B(\lambda R)\setminus B(R))=0$, which means that there exists a Rumin $(k-2)$-form $\alpha$ such that $d_c\alpha=\psi-\phi$.
If $k\not=m+1$, the degree of $\beta$ is $2m+1-k\not=m$, so $d_c(\xi\beta)=(d_c\xi)\wedge\beta+\xi d_c\beta=(d_c\xi)\wedge\beta$, thus $\gamma=(d_c\xi)\wedge\beta$ is a well-defined $d_c$-closed Rumin form (this is a special case of Proposition \ref{dc of Wedge Product}).
If on top we also assume $k\not=m+2$, given $\gamma=d_c(\xi\beta)=d_c\xi\wedge\beta$, we have that the form $\alpha\wedge\gamma$ has degree $2m\ge m+1$, so we can apply Proposition \ref{dc of Wedge Product} and obtain the following equality
\begin{align*} d(\alpha\wedge\gamma) =d_c(\alpha\wedge\gamma)\pm\alpha\wedge d_c\gamma =(d_c\alpha)\wedge \gamma =(\psi-\phi)\wedge (d_c\xi)\wedge\beta. \end{align*}
Since by construction $d_c\xi$ has compact support in $B(\lambda R)\setminus B(R)$, \begin{align*} \int_{\mathbb{H}^{2m+1}}d_c\xi\wedge(\psi-\phi)\wedge\beta=0. \end{align*} Therefore, when $k\not=m+1,m+2,2m$, we can replace a given primitive $\psi$ with any other arbitrary primitive $\phi$ of $\omega$ on the scaled shell $B(\lambda R)\setminus B(R)$.
\subsection{Vanishing of averages}
\begin{prop} Given $\omega\in E_0^{k}$ an $L^{1}$, $d_c$-closed Rumin form in $\mathbb{H}^{2m+1}$, then the integral \begin{align*} \int_{\mathbb{H}^{2m+1}}\omega\wedge\beta \end{align*} vanishes for all left-invariant Rumin forms $\beta$ of complementary degree, $\beta\in E_0^{2m+1-k}$, provided $k\not=m+1,m+2,2m$. \end{prop}
\begin{proof} We assume that $k\neq m+1, m+2, 2m$.
Let $\psi$ be a global primitive of $\omega$ on $\mathbb{H}^{2m+1}$. Let us first analyse the following identity \begin{align*} \xi\omega\wedge\beta&=\xi(d_c\psi)\wedge\beta=-d_c\xi\wedge\psi\wedge\beta+d(\xi\psi\wedge\beta).
\end{align*}
Let $\phi$ be the primitive of $\omega$ on $B(\lambda R)\setminus B(R)$ introduced in Section \ref{Exact omega on the shell}. By Stokes' theorem the term $d(\xi\psi\wedge\beta)$ integrates to zero and, since $d_c\xi$ is supported in $B(\lambda R)\setminus B(R)$, the previous subsection allows us to replace $\int_{\mathbb{H}^{2m+1}}d_c\xi\wedge\psi\wedge\beta$ with $\int_{\mathbb{H}^{2m+1}}d_c\xi\wedge\phi\wedge\beta$, so that \begin{align*} \bigg\vert\int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta\bigg\vert &=\bigg\vert\int_{B(\lambda R)\setminus B(R)}d_c\xi\wedge\psi\wedge\beta\bigg\vert\\ &=\bigg\vert\int_{B(\lambda R)\setminus B(R)}d_c\xi\wedge\phi\wedge\beta\bigg\vert\\ &\le\Vert d_c\xi\Vert_{\infty}\Vert\beta\Vert_{\infty}\Vert\phi\Vert_{L^{1}(B(\lambda R)\setminus B(R))}. \end{align*}
Finally, knowing that $\Vert d_c\xi\Vert_\infty\le C'/R$ and applying the Poincar\'e inequality on the shell \begin{align*} \Vert\phi\Vert_{L^1(B(\lambda R)\setminus B(R))}\le C\cdot R\cdot\Vert \omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))} \end{align*} established in Subsection \ref{Exact omega on the shell}, we get \begin{align*}
\bigg\vert\int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta\bigg\vert &\le CC' \Vert\omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}\,. \end{align*}
Using the cut-off function $\xi$ introduced in Definition \ref{cutoff defin}, we have \begin{align*} \int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta=\int_{B(R)}\omega\wedge\beta+\int_{B(\lambda R)\setminus B(R)}\xi\omega\wedge\beta\,. \end{align*}
Hence \begin{align*} \bigg\vert\int_{B(R)}\omega\wedge\beta\bigg\vert&\le\bigg\vert\int_{\mathbb{H}^{2m+1}}\xi\omega\wedge\beta\bigg\vert+\bigg\vert\int_{B(\lambda R)\setminus B(R)}\xi\omega\wedge\beta\bigg\vert\\ &\le CC'\Vert\omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}+\int_{B(\lambda R)\setminus B(R)}\Vert\beta\Vert_\infty\cdot\vert \omega\vert\\ &\le {C''}\Vert\omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}. \end{align*} Since $\omega\in L^1(\mathbb{H}^{2m+1})$, we have $\Vert\omega\Vert_{L^1(B(\mu' R)\setminus B(\mu R))}\to 0$ as $R\to\infty$, while $\int_{B(R)}\omega\wedge\beta\to\int_{\mathbb{H}^{2m+1}}\omega\wedge\beta$; we therefore get our result \begin{align*} \int_{\mathbb{H}^{2m+1}}\omega\wedge\beta=0\,. \end{align*}
\end{proof}
This completes the proof of Theorem \ref{vanish}.
\begin{oss} Let us notice that this method would not work in the case $k=m+1$, since we would only obtain the inequality \begin{align*} \bigg\vert\int_{B(R)}\omega\wedge\beta\bigg\vert\le C\cdot R\cdot\Vert\omega\Vert_{L^1(B(\lambda R)\setminus B(R))}\,, \end{align*} which is not conclusive. \end{oss}
\section*{Acknowledgments}
P.P. is supported by Agence Nationale de la Recherche, ANR-15-CE40-0018 SRGI. F.T. is partially supported by the Academy of Finland grant 288501 and by the ERC Starting Grant 713998 GeoMeG.
\tiny{ \noindent Pierre Pansu \par\noindent Laboratoire de Math\'ematiques d'Orsay, \par\noindent Universit\'e Paris-Sud, CNRS, \par\noindent Universit\'e Paris-Saclay, 91405 Orsay, France. \par\noindent e-mail: [email protected]\newline }
\tiny{ \par\noindent Francesca Tripaldi \par\noindent Department of Mathematics and Statistics, \par\noindent University of Jyv\"askyl\"a, \par\noindent 40014, Jyv\"askyl\"a, Finland. \par\noindent email: [email protected] }
\end{document} | arXiv |
Sensorimotor recalibration of postural control strategies occurs after whole body vibration
Isotta Rigoni ORCID: orcid.org/0000-0002-5804-21371,2,
Giulio Degano ORCID: orcid.org/0000-0003-1540-780X3,
Mahmoud Hassan ORCID: orcid.org/0000-0003-0307-50864 &
Antonio Fratini ORCID: orcid.org/0000-0001-8894-461X1
Scientific Reports volume 13, Article number: 522 (2023)
Efficient postural control results from an effective interplay between sensory feedback integration and muscle modulation and can be affected by ageing and neuromuscular injuries. With this study, we investigated the effect of whole-body vibratory stimulation on the postural control strategies employed to maintain an upright posture. We explored both physiological and posturographic metrics, through corticomuscular and intermuscular coherence, and muscle network analyses. The stimulation disrupts balance in the short term, but leads to a greater contribution of cortical activity, necessary to modulate muscle activation via the formation of (new) synergies. We also observed a reconfiguration of muscle recruitment patterns that returned to pre-stimulation levels after a few minutes, accompanied by a slight improvement of balance in the anterior–posterior direction. Our results suggest that, in the context of postural control, appropriate mechanical stimulation is capable of triggering a recalibration of the sensorimotor set and might offer new perspectives for motor re-education.
For a few decades, it has been assumed that balance was achieved by merely passive local reflexes, as mammals were found to be able to stand still solely by tonic muscle contractions1. This view has been recently challenged and a new working hypothesis has been proposed suggesting that maintaining balance—as well as any other human movement—requires active planning, and therefore involves global multisensory integration and processing of a higher level2,3. According to this perspective, as we stand, inputs from the external environment are processed online by the central nervous system (CNS) to maintain a stable upright stance4. Vestibular, visual and somatosensory cues are integrated to regulate muscle contraction over time, with the ultimate goal of adjusting the centre of mass position, preventing falls4. Efficient postural control results from an effective interplay between sensory feedback integration and cortical drive to the muscles, which allows to detect, react to and correct the perturbations that normally occur while standing upright5,6. Two effective tools are available to investigate such communication occurring between muscles and the brain: intermuscular coherence (IMC) and corticomuscular coherence (CMC)7.
IMC is thought to represent the common neural input that, originating from efferent and afferent pathways, is fed to motor units (MUs) of different muscles8. To facilitate motor control, the CNS is thought to deal with the several degrees of freedom of the musculoskeletal system by coordinating muscle activation, i.e. recruiting muscles in groups9. The state-of-the-art approach to IMC analyses is to model muscle co-activation via complex network analysis, which has largely been employed to understand the organisation of cortical activity10. Studying muscle-networks across tasks revealed the synchronous modulation of muscles to be functional, showing the potential of such analyses to uncover mechanisms of interest11.
Similarly, CMC has been historically thought to represent signalling from pyramidal neurons to spinal motor neurons, which subsequently control the corresponding muscle fibres12. Recent findings nevertheless suggest that CMC also reflects afferent couplings, i.e., the ascending flow of information that from muscle spindles reaches somatosensory areas in the brain13. This is further supported by the detection of oscillations in beta band (15–30 Hz) both in the motor (M1) and somatosensory (S1) cortex14, that are coherent with the activity recorded from contralateral contracting muscles15. The presence of activity in beta frequency range has been vastly documented in the human motor system and it is the reason why CMC is mostly investigated in beta band13,16. Studies looking at CMC are frequently designed to involve steady-state motor output, such as precision grip and isometric contractions17,18,19,20,21,22, or dynamic voluntary movements23,24,25,26, such as walking. Very few studies evaluated CMC during standing balance27,28,29 and even less highlighted any CMC during postural control30,31,32.
It is however sensible to investigate postural control mechanisms by inspecting couplings in such frequency range. Oscillations between 15 and 30 Hz have in fact been linked to the attempt of preserving the ongoing sensorimotor state. More in detail, Engel and Fries suggest that beta band activity (BBA) acts as a facilitator of proprioceptive feedback processing33. In the intended attempt of preserving the status quo and regaining balance after a disturbance, the enhanced activity observed in beta band is thought to promote the handling of proprioceptive signals coming from the periphery, on which we rely for the perception and recalibration of the sensorimotor system34.
Among all the inputs that we use online to modify our posture, the somatosensory ones play perhaps the most important role: since proprioceptive thresholds are smaller than visual and vestibular ones, proprioceptive inputs are the most sensitive in detecting centre of pressure (COP) velocity35. Among the proprioceptors, muscle spindles, being stretch-sensitive and yielding information about the length and contraction of muscles36, are responsive to mechanical vibrations applied to muscle bellies or tendons.
When vibrations are applied, reflex responses arise (the tonic vibration reflex, TVR) and translate to an increased MU firing rate and a bigger electromyographic (EMG) response37,38,39. Conflicting studies are also present in the literature, including results suggesting that motoneuron excitability of spastic limbs is decreased after WBV, while that of the unaffected limb is unchanged40. TVR seems to play a role even when vibrations are delivered to the whole body via an oscillating platform, as occurs in whole body vibration (WBV) stimulation. WBV is in fact included in training and rehabilitative programmes as a means to evoke neuromuscular responses from various muscles and therefore enhance muscle contractions41. Because TVR has been identified as the main mechanism responsible for the enhanced sensitivity of muscle spindle primary endings41,42,43, WBV represents a reasonable candidate for the stimulation of proprioceptive structures. The goal of this study is to investigate whether an appropriate use of mechanical vibrations, intended as a proprioceptive stimulation, can recalibrate the sensorimotor set employed to maintain balance in upright posture. Vibrations are delivered to specifically target the muscles, soleus (SOL) and gastrocnemius lateralis (GL), that contribute the most to the primary strategy put into place during undisturbed human standing: the ankle strategy.
Our results confirm our hypothesis: postural control strategies are affected by vibratory stimulation. In particular, a greater interplay between the CNS and the periphery is observed in parallel to a recalibration of the sensorimotor set, appreciable not only as an observed increase of the cortical activity measured in beta band but also as a different set of muscle synergies employed to maintain balance.
To study the changes in balance control strategies, we recorded four one-minute trials of undisturbed balance before (baseline balance—BB) and after (post-stimulation balance—PSB) the delivery of WBV stimulation at 30 Hz, which 17 participants received while standing on their fore-feet, a configuration shown to effectively target calf muscles44. Muscle activity was recorded from GL, SOL, tibialis anterior (TA), biceps femoris (BF) and rectus femoris (RF) of both legs; electroencephalographic (EEG) activity was recorded from 64 channels (10–20 arrangement), and centre of pressure (COP) trajectories via a force platform. All analyses were run both on the first minute and on all four trials, to evaluate possible differences in the acute- and long-term response to the stimulation. Not all participants were able to reposition on the force platform within the given time; consequently, the first trials after the WBVs were realigned and cropped appropriately across participants. This resulted in two subjects being removed from the acute-term analyses, as the time that elapsed between the vibratory stimulation and the balance trials was either too short or too long (11.6 s and 40.4 s). The alignment of balance recordings resulted in trials of the length of 44.8 s. The average time that elapsed between the WBV stimulation and the first balance trial was 27.9 ± 4.1 s. For the analyses on the longer-term effect, all 17 subjects were retained, and the concatenation of balance trials resulted in two 4-min long epochs.
Dominant calf-muscles are targeted by the WBVs
To confirm that WBVs stimulated the targeted muscles (GL, SOL and TA), the root mean square (RMS) value of their EMG signals was analysed. Trials before and after the stimulation were compared to rule out any potential confounding factor given by motion artefacts during WBVs45,46,47,48. The median frequency (MF) of the EMG power spectrum was also analysed to exclude muscle fatigue49.
The acute-term analyses revealed that muscular activation increased especially among the muscles on the dominant side (the right for most subjects, 11 out of 15): GL (p < 0.01) and SOL (p < 0.01) measured a bigger contraction after the vibrations were delivered. Similarly, a RMSPSB bigger than RMSBB was found also for the left GL (p < 0.05) (Fig. 1). The right GL and SOL (dominant muscles for 12 out of 17 subjects) showed an increased RMSPSB even across the 4 min following the WBVs (p < 0.05). The only muscle that showed signs of fatigue was the left GL: its MF decreased significantly after the WBVs both when computed over the 45 s-long sEMG signals and over the 4 min-long ones (p < 0.05). Otherwise, the power spectral density (PSD) of the other muscles did not shift towards lower frequencies after the stimulation.
sEMG RMS before and after the WBVs. RMS values of lower limb muscles and their difference between the 45 s-long baseline (BB) and post-stimulation (PSB) trials: (*) and (**) indicate p < 0.05 and p < 0.01, respectively.
Acute-term increase of CNS-periphery interplay
Proprioceptors detect the muscle length changes during the natural back-and-forth body oscillation while standing still35. The information originating at these sensorial structures is then processed and integrated at higher levels to shape appropriate corrective postural responses4. Therefore, we hypothesised that stimulating these structures would increase such exchange of information, with a measurable impact on the balance control sub-systems. To investigate how the interplay between the CNS and the periphery was affected by WBVs, CMC was studied between those muscles that play the biggest role in actuating postural responses during undisturbed balance (the calf muscles: SOL, GL and TA) and the EEG channels. The permutation test run on 45 s-long epochs detected two significant clusters corresponding to a bigger CMC after the WBVs. Specifically, a stronger coupling was observed around 22 Hz between the right GL and electrodes situated mostly in the contralateral hemisphere (p < 0.05, Fig. 2a.1–b.1). The effect had the same directionality for the right SOL and the same central contralateral electrodes but was centred around 18 Hz (p < 0.05, Fig. 2a.2–b.2). No significant difference was identified in the CMC vectors obtained from the concatenated EEG and sEMG epochs.
Corticomuscular coherence topographic maps and spectra. On the left-side, voltage topographic maps of the difference between EMG-EEG couplings before and EMG-EEG couplings after the WBVs (CMCBB and CMCPSB, respectively), averaged across subjects, are reported for the right GL (a.1) and the right SOL (a.2). The asterisks depict electrodes that showed a significantly bigger CMC around 23 Hz (a.1) and 19 Hz (a.2). On the right-side, CMC spectra (mean ± standard error) of EMG-EEG couplings before and after the stimulation are reported for the right GL (b.1) and the right SOL (b.2). The asterisks depict the frequency bin at which the CMC signals differed between conditions, as identified by the cluster-based permutation test. Results are shown for CMC vectors obtained from the 45 s-long baseline (BB) and post-stimulation (PSB) trials.
Acute-term increase of cortical power in beta band
Since the activity recorded in the beta frequency range—mostly confined to S1—is suggested to facilitate proprioceptive feedback integration33, we expected a greater extent of information processing, observable as an increase of cortical activity in this frequency range, when stimulating proprioceptive structures via WBV. Further motivated by the higher CMC observed after the WBV, we used a one-sided Wilcoxon test to test the hypothesis that S1 beta-band activity would also increase after the stimulation. BBA was quantified as the average power in beta band (15–30 Hz) recorded from sensors overlying S150, and the test did indeed show that it increased in the acute term after the WBVs (p < 0.01, Fig. 3), but not in the longer term.
Changes in beta band activity. Box plot of the average power estimated in beta band (area under the curve, averaged over S1 channels) for EEG epochs recorded at C3, C4, CP3 and CP4 during balance trials before (purple) and after (orange) the vibratory stimulation. Significance (p < 0.01) is indicated by the asterisk and results are reported for 45-s long epochs.
Acute-term recalibration of muscle networks
IMC was computed between the 45 pairs of muscles (5 muscles per leg) as the coherence between the two EMG signals, both in the acute- and long-term, to investigate changes in the recruitment pattern of muscles during the balance task. The IMC spectra were factorised into a matrix of weights and a matrix of basis vectors. The latter yielded the six frequency bands over which the IMC was decomposed. Its elements were reorganised, for each frequency component, into two adjacency matrices (10 × 10) that reflected the coherence between each muscle pair before and after the WBVs11. IMC reflects the common neural input to two different muscles and therefore represents the extent to which the muscles are jointly recruited7. Network analyses were employed to investigate the level of changes induced by WBV stimulation. We quantified changes in the clustering coefficient (CC, reflecting network segregation) and participation coefficient (PC, reflecting network integration), as well as changes in the connection strength between the nodes of the networks (muscles). The six components obtained from the factorization of the coherence spectra of 45-s long epochs are displayed in Fig. 4. The first 4 components represent spectra peaking at specific frequencies: 0 Hz; 2 Hz; 8 Hz and 14 Hz. The fifth component is characterized by three frequency peaks at 10, 20 and 30 Hz. The last component instead does not show any predominant frequency. The edge-wise analysis yielded the following results (Fig. 4):
The connection strength between the two SOL muscles decreased at 2 Hz after the stimulation (p < 0.05, FDR corrected). In addition, the connections between left GL-right SOL and between right SOL-right GL decreased after the WBVs (p < 0.01);
The connection between left RF and left BF at 8 Hz increased after the stimulation (p < 0.05, FDR corrected);
The strength of the connection between left RF and the left SOL increased after the WBVs (p < 0.05, FDR corrected), for the broad frequency component peaking at 10, 20 and 30 Hz;
The connection strength between left GL-right RF and left GL-right BF increased after the WBVs (p < 0.01) in the broad frequency component.
Edge-wise analysis of muscle networks. The left column depicts the six frequency components in which the coherence spectra were decomposed via NNMF (one for each row). The second and third columns show the adjacency matrixes, averaged across subjects, obtained for each frequency component from the 45-s long trials before and after the stimulation, respectively. These matrixes give the connection strengths, at different frequencies, between the 10 muscles (nodes of the network). The fourth column depicts the muscle networks averaged across subjects and conditions: the weight of the edges is obtained by averaging the two adjacency matrixes (pre and post stimulation) for each frequency component. The widths of the edges in column 4 reflect the strength of the connection. The fifth column depicts the statistical results of the edge-wise analysis. For each frequency component, only those edges that were significantly different (p < 0.05) between conditions (pre vs post) are depicted. Comparisons that resulted particularly significant (p < 0.01) are depicted via a dotted line; comparisons that resulted significant after FDR correction are depicted via a dashed line. The colours of the edges reflect the direction of the effect, with red lines indicating that the connection strength decreased after the stimulation and green lines indicating an increase after the stimulation. The width of green and red lines indicates the size of the effect.
The node-wise analysis showed a significant decrease in segregation of the calf muscles at 2 Hz. Specifically, the CC of right SOL, left SOL, right GL and left GL decreased significantly after the stimulation (p < 0.05, FDR corrected) (Fig. 5).
Node-wise analyses of muscle networks. Clustering coefficient results are reported for the frequency component where at least one node changed significantly between conditions. (a) Depicts the frequency component (2 Hz). (b,c) show the graph measure values for each node before and after the stimulation, respectively. (d) represents the statistical results of the node-wise analysis. The nodes that were significantly different (p < 0.05) between conditions (pre vs post) are depicted and those that survived FDR correction are contoured by a yellow line. The colours of the nodes reflect the direction of the effect, with red nodes indicating that the specific metric decreased after the stimulation. The size of the nodes indicates the size of the effect.
Similar frequency components were obtained from the concatenated epochs. However, although a decrease of CC was measured at 2 Hz for both the left and right soleus, these results did not pass the FDR correction. Similarly, a decrease in the connection strength between left SOL-right SOL and between left SOL-left GL was significant, but did not pass the FDR correction.
Acute-term instability is restored in the long-term
Finally, to fully characterise the changes occurred in the postural mechanisms due to the WBVs, we analysed the COP trajectories in the anterior–posterior (AP) and medial–lateral (ML) directions. We extracted common metrics such as the mean distance (MD) and mean velocity (MV) values, and more novel ones, such as the complexity index (CI) that quantifies the level of regularity of the signal. An efficient postural control is the one that does not require particular attention and is reflected by a greater irregularity of the COP trajectories and therefore a greater CI51,52. Wilcoxon tests run on 45-s long trials revealed that MDML, MVML and MVAP increased immediately after the WBV (p < 0.001, p < 0.01, p < 0.001 respectively), while the mean distance covered by the COP in the anterior–posterior direction was not affected by the WBV stimulation (p > 0.025). Moreover, CIML decreased significantly in the minute following the mechanical vibrations (see Fig. 6). When the analyses were run on 4-min long trials, the postural control in the ML direction appeared recovered and the only significant results concerned metrics in the AP direction, registering a significant increase in MVAP and CIAP after the WBVs (p < 0.001 and p < 0.05 respectively).
Multiscale sample entropy and complexity index. Plots of sample entropy averaged across subjects in the anterior–posterior (1) and in the medial–lateral direction (2) before and after the WBVs (round and cross marker, respectively). In (2), the area under the curve is displayed to represent the calculation of the CI for the ML direction, post stimulation. In (3), boxplots of the complexity index are displayed for the ML and AP directions (CIML and CIAP) and significant differences are depicted by the asterisk (p < 0.05).
While it is renowned that during WBVs muscle contraction is boosted, this is the first study that demonstrates that an increased muscle activation persists even after the stimulation, during upright standing. Although there is no universal consensus on whether WBV increases or decreases motoneuron excitability of healthy limbs40,53, this increased and sustained muscle activation—measured as a bigger EMG RMS across both GLs and the right SOL—is likely a consequence of an enhancement in the sensitivity of the spindles of the related muscles41. As EMG RMS is not a direct measurement of muscle spindle activity, to confirm our results we also assessed whether the increase was caused by muscle fatigue (median frequency shift). Our data confirmed that the median frequency of the dominant GL and SOL did not change, indicating that their sEMG increase was not due to fatigue but rather reflected a sustained proprioceptor sensitivity, maintained even once the stimulation was off.
This inference relates well to our results on corticomuscular coherence supporting the hypothesis of a greater information exchange between the muscles and the cortex occurring after the WBVs16. CMC has been historically utilised to describe the cortical drive to the muscles12, but recent findings support the idea that it also reflects afferent couplings from the periphery to the cortex itself13. We may therefore speculate that more sensitive muscle proprioceptors translated into a more abundant flux of proprioceptive feedbacks from the relevant muscles—right SOL and GL. However, considering also the original interpretation, the increased CMC measured after the WBVs points to an overall greater interplay between afferent and efferent signals that from the periphery reached the brain—and vice versa—with the purpose of detecting perturbations and correcting them via muscle modulation.
The enhanced activity observed in beta band after the stimulation supports the second and more recent interpretation. An increased BBA in the somatosensory areas of the brain—measured as a greater spectral power over C3, CP3, C4 and CP4—indicates a greater attempt of the sensorimotor system to preserve the status quo33,54. More specifically, a bigger BBA allows the system to more efficiently process proprioceptive signals from the periphery with the ultimate goal of recalibrating the sensorimotor set33 or, in this case, adjusting the COP position after an unexpected loss of balance.
The increased need for recalibration and the increased CMC observed after the WBVs are well justified by the fact that the ability of balancing declined immediately after the stimulation. In fact, we observe a clear degradation of stability in the medial–lateral direction after the mechanical stimulation, as shown by a larger COP displacement and velocity in the ML plane55. In addition to these unambiguous results, a reduced complexity of postural sway in the ML direction further confirms the reduction of the ability to maintain balance and the increased attempt of the system to maintain an upright stance. A smaller CI does in fact indicate a bigger regularity of COP paths, which is positively correlated to the attention needed to balance and reflects a less automatic postural control51,52. From our results it could therefore be inferred that, being the ML COP trajectories more regular after the WBVs, participants employed a greater amount of effort to maintain the same upright stance, as BBA results indicate too. The observed balance disruption in the ML direction is reasonable as this is the one that is mostly affected by the WBVs. In the AP direction instead, we observe only an increased COP velocity, which—if not associated with a bigger COP displacement—is not easily associated to worsening balance55,56.
To link the posturographic results with the electrophysiological ones: when postural instability grows, larger COP movements lead to greater uncertainty in the periphery, which triggers a bigger demand for oscillatory recalibration34.
The increased need for cortical recalibration is reflected in the modified muscular networks that are observed after the WBVs (Figs. 4 and 5), where the weights of the networks are modified—or recalibrated—across different frequency ranges. Specifically, the greatest changes are observed in correspondence of those muscles from which the affluence of afferent information increased, namely the SOL and GL. Changes in connection strength are observable across these muscles—bilaterally—especially at very low frequencies (< 5 Hz), which resemble the frequency ranges reported by previous studies on IMC during balance11,23. Moreover, significant differences are detectable for the clustering coefficient at the very same frequency, where it decreases after the stimulation for the bilateral SOL and GL.
This could lead to the speculation—well aligned to the current literature (see next paragraph)—that these muscles were less synchronised with the rest of the network and therefore modulated more individually.
It is in fact suggested that IMC at frequencies below 6 Hz reflects subcortical inputs or reflexes, while IMC at higher frequencies reflects cortical ones57. In detail, the former underpins stiffness control, while the latter supports muscle control via synergy formation. In our case, the reduced CC found for the plantarflexors indicates a reduction of synchronisation of these muscles with other neighbouring muscles—or reduced IMC—for frequencies below 5 Hz. This might suggest that subcortical inputs were less employed for the modulation of the plantarflexors after the WBVs. Because the plantarflexors were significantly more active after the stimulation and their corticomuscular coherence at higher frequencies increased while less spinal modulation was present, it is possible that more cortical control was employed for these muscles. This possible shift toward cortical control—observed during the more challenging task of balancing after the stimulation—is in line with recent findings suggesting that postural threat leads to a shift toward more supraspinal control of balance58. Different views are present in the literature regarding whether IMC at low frequencies reflects modulation via subcortical and spinal inputs59,60, but regardless of the origin of such modulation, the network reconfiguration observed after the WBVs suggests that a change in the modulation of muscle activation occurred after the stimulation, and a cortico-muscular loop is likely to have played an important role in this recalibration.
As for the long-term analyses, significant results are found only for the sEMG and COP data. The sEMG results indicate that the WBVs' effect on the enhanced activation of the dominant calf muscles, unlike the recalibration effect, persisted for 4 min. The interpretation of the posturography results is instead less straightforward. When computed across the four minutes after the WBVs, the COP parameters seem to stabilise in the ML plane and only an increase of COP velocity in the AP direction is found, which, if not associated with an increase of COP displacement, is harder to interpret55. Davids et al. did in fact report a smaller mean COP velocity in participants who had a complete anterior cruciate ligament rupture than in the control group56, contradicting the common belief that a higher velocity reflects a worse ability to control posture. The simultaneous recording of a significant increase of mean velocity and no significant change of COP mean distance might suggest that the average amount of time necessary to cover a fixed distance decreases after vibrations are applied. Although not straightforward, it is reasonable to infer that the decrease of time between one COP position and the next might be linked to an increased ability of the muscles to adjust their length to counterbalance a body sway. This interpretation is supported by what the AP CI results suggest: that balance was not only restored in the long term but possibly improved, as the increased AP complexity index suggests51.
Our results indicate that WBV undermines balance at first, triggering the need for a bigger effort to control the upright stance and shifting muscle modulation toward supraspinal control, resulting in a recalibration of muscle recruitment. However, the system seems to recover from such disruption and regain control over a longer time interval. Indeed, while muscle recruitment and cortical effort appear unaltered over the long term, balance seems not only restored but also improved, apart from the calf muscles, which remain clearly affected. Our work suggests that WBV stimulation is worth further investigation as a means to recalibrate postural control mechanisms and potentially restore balance. Moreover, our results provide new perspectives on WBV applications, paving the way for future research on the interaction between the CNS and the peripheral muscles.
Subjects and experimental design
Eleven females and six males (age: 23.06 ± 2.51 years; height: 167.22 ± 9.74 cm; mass: 61.32 ± 10.69 kg) volunteered for the study after giving written consent. A history of neuromuscular or balance disorders and recent injuries to the lower limbs were the exclusion criteria. Among the inclusion criteria, participants were required to do more than 5 h per week of physical activity to ensure a minimum of athleticism. Even though WBV is a mild vibratory stimulation, the 4-min-long postural task requires a certain level of muscular engagement. Participants were also asked not to consume alcoholic beverages and not to take medication over the 24 h prior to the experiment. The protocol of the study received approval from the Ethics Committee on Life and Health Sciences of Aston University.
A single-group, repeated-measure design was used and the data were collected at the Aston Laboratory for Immersive Virtual Environments. Electroencephalographic (EEG), COP and surface EMG (sEMG) signals of calf muscles were collected while participants underwent a balance task, performed before and after WBV mechanical stimulation.
Experimental protocol and data recording
A familiarisation session was run for participants to get acquainted with the WBV device before the study began. After the recording equipment was set up, the first part of the study consisted in recording four baseline balance (BB) trials of 60 s. Participants were instructed to "stand as still as possible"61 with feet shoulder-width apart and arm along the trunk while fixating their gaze at a tape cross placed on the wall in front of them, at approximately 2.5 m distance. After the baseline trials were collected, the second part of the study took place: a one-minute vibratory stimulation was delivered with a side-alternating platform (Galileo® Med, Novotec GmbH, Pforzheim, Germany) that operated at 30 Hz, peak-to-peak amplitude of 4 mm. While undergoing the WBVs, participants stood on their forefeet, knees unlocked, keeping contact between heels and a 4 cm tall foam parallelepiped glued to the platform62. This combination of WBV settings (stimulation frequency and subjects' posture) was chosen as it triggers the greatest response of the plantarflexors muscles—SOL and GL44. The WBV stimulation was followed by the recording of four balance trials of 60 s each, which will be referred to as Post-Stimulation balance (PSB) trials. After the WBVs, participants were asked to position themselves on the platform as soon as they felt ready to undergo the postural task. We will refer to BB and PSB as the two conditions tested in this study.
EEG and sEMG data were acquired using EEGO sports ES-232 (ANT neuro, Enschede Netherlands). To collect the electrical brain activity (EEG), a 64-channel Ag/AgCl wet-electrode waveguard cap was used in connection to a portable amplifier fixated on the participants' backs. Data were continuously collected with a standardised 10–20 system montage and were sampled at 1000 Hz. Caps of different sizes were fitted on the head of participants; the correct placement was obtained by checking that Cz was placed at mid-distance between the nasion and inion anatomical points and at mid-distance between the left and right lobes. Muscle signals (sEMG) were collected via bipolar Ag/AgCl electrodes (Arbo Solid Gel, KendallTM, CovidienTM 30 mm × 24 mm, centre-to-centre distance 24 mm) that were connected to the portable amplifier via cascaded bipolar adaptors XS-271.A, XS270.B, XS-270.C. Electrodes were placed over the TA, GL, SOL, RF and BF muscles of both legs and arranged along the presumed direction of muscle fibres, as recommended by the SENIAM guidelines63,64. The reference electrode was placed over the tuberosity of the right tibia. To reduce inter-electrode resistance, the skin area was shaved and degreased by mean of light abrasion with a disinfectant. To quantify balance, COP trajectories were recorded and sampled at 1000 Hz via an AMTI OR 6–7 force platform (Advanced Mechanical Technology, Watertown, MA, USA) connected to a motion capture system (Vicon Nexus, Vicon Motion Systems Limited, Oxford, UK). A LabJack U3-HV acquisition unit was programmed in Matlab®R2019a (The Mathworks, Inc., Natick, MA) with a custom-made script to synchronise electrophysiological acquisitions and posturography data. The trigger signal was sent as a 5 V TTL signal to the DB25 port of the EEGO sports master amplifier and as a 1.25 V TTL signal to the rear of the Vicon Lock unit.
Since participants reacted differently to the stimulation, different delays were recorded between the WBV stimulation and the first PSB trial.
To allow a consistent comparison while evaluating the acute effect of WBV on balance, the first BB and PSB trials were preprocessed to match the delay between the WBVs and the first PSB across participants. The distribution of the time employed by each participant to reposition on the force platform (delays) was analysed, and the subjects whose delays were outliers were removed from the dataset. The recordings (EEG, sEMG and COP) of the retained subjects were then synchronised to those of the subject that scored the greatest delay and were further cropped to match the duration of the shortest recording. The same cropping procedure was applied to the first BB trials, which were cropped at the same time point as the corresponding PSB trial to make the trials collected before and after the WBVs comparable. These preprocessed signals will be hereafter referred to as the cropped ones.
The long-term effect of WBV stimulation on balance was evaluated by concatenating the four BB trials and the four PSB trials into two 240 s-long epochs, which were then used for analyses. These signals will be hereafter referred to as the concatenated ones.
Data preprocessing: EEG and sEMG
The cropped and the concatenated EEG epochs were preprocessed in Matlab®R2019a (The Mathworks, Inc., Natick, MA) in a semi-automatic fashion, following the steps proposed in the software Cartool65. The EEG signals were de-trended, mirrored to avoid edge-artefacts, filtered in beta band (13–32 Hz) with a zero-phase FIR filter, cropped and de-trended again. Eye-blinks were removed automatically by subtracting the signals recorded at the FP1, FPz and FP2 sites from the remaining 60 EEG channel data, proportionally to their contribution to each channel66,67. Channels with standard deviation (SD) exceeding the average SD by a factor of 1.7 were replaced by the average of the signals from neighbouring channels10. The factor was set at 1.7 after a visual inspection of the preprocessed channels (also confirmed by a second researcher) showed that this threshold was not too conservative, leading to the interpolation of channels exceeding the physiological range of ± 80 μV10. Channels were finally re-referenced to the common average.
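The authors implemented this pipeline in Matlab/Cartool; the following Python sketch only illustrates the chain of steps on a hypothetical (channels × samples) array. The FIR length, the least-squares blink regression and the replacement of noisy channels by the grand average (rather than by true spatial neighbours) are simplifications of this sketch, not the original implementation.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess_eeg(eeg, fs=1000, blink_chs=(0, 1, 2), sd_factor=1.7):
    """eeg: (n_channels, n_samples); blink_chs: indices of FP1, FPz, FP2 (assumed)."""
    # De-trend (remove the mean) and mirror-pad to limit edge artefacts
    eeg = eeg - eeg.mean(axis=1, keepdims=True)
    n = eeg.shape[1]
    padded = np.concatenate([eeg[:, ::-1], eeg, eeg[:, ::-1]], axis=1)

    # Zero-phase FIR band-pass in beta band (13-32 Hz); filter length is an assumption
    b = firwin(numtaps=fs + 1, cutoff=[13, 32], pass_zero=False, fs=fs)
    filtered = filtfilt(b, [1.0], padded, axis=1)[:, n:2 * n]
    filtered -= filtered.mean(axis=1, keepdims=True)

    # Remove the frontal (blink) channels' contribution from every other channel
    frontal = filtered[list(blink_chs)]
    clean = filtered.copy()
    for ch in range(filtered.shape[0]):
        if ch in blink_chs:
            continue
        coeffs, *_ = np.linalg.lstsq(frontal.T, filtered[ch], rcond=None)
        clean[ch] = filtered[ch] - coeffs @ frontal

    # Replace noisy channels (SD > sd_factor * mean SD) by the average of the remaining ones
    sd = clean.std(axis=1)
    bad = sd > sd_factor * sd.mean()
    if bad.any():
        clean[bad] = clean[~bad].mean(axis=0)

    # Common average reference
    return clean - clean.mean(axis=0, keepdims=True)
```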
The cropped and the concatenated sEMG epochs were preprocessed and analysed in Matlab®R2019a (The Mathworks, Inc., Natick, MA), using custom-made scripts. The power line noise was occasionally present not only at 50 Hz, but also at superior and inferior harmonics. To suppress such noise, a comb stop-band filter was used at harmonics between 25 and 150 Hz, using a stop band of 1 Hz. The signals were band-pass filtered between 20 and 260 Hz with a zero-phase Butterworth filter, after padding was applied to avoid edge-artefacts. Filtered sEMG epochs were then rectified12,68 with a Hilbert transform, which provides results similar to a full-wave rectification28,68,69. The preprocessing was done using Fieldtrip ft_preprocessing function70.
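For illustration, a possible Python equivalent of this sEMG chain is sketched below (the study used custom Matlab scripts and Fieldtrip). The 4th-order Butterworth and the notch quality factor, chosen to give roughly the 1 Hz stop band mentioned in the text, are assumptions of the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, iirnotch

def preprocess_emg(emg, fs=1000):
    """emg: 1-D sEMG trace. Returns the band-passed, Hilbert-rectified signal."""
    x = np.asarray(emg, dtype=float)

    # Comb of narrow notches (~1 Hz wide) at 25, 50, ..., 150 Hz for line noise and its harmonics
    for f0 in np.arange(25, 151, 25):
        b, a = iirnotch(w0=f0, Q=f0 / 1.0, fs=fs)   # Q = f0 / bandwidth -> ~1 Hz stop band
        x = filtfilt(b, a, x)

    # Zero-phase Butterworth band-pass, 20-260 Hz, with mirror padding against edge artefacts
    b, a = butter(4, [20, 260], btype="bandpass", fs=fs)
    n = x.size
    padded = np.concatenate([x[::-1], x, x[::-1]])
    x = filtfilt(b, a, padded)[n:2 * n]

    # "Rectification" via the magnitude of the analytic signal (Hilbert envelope)
    return np.abs(hilbert(x))
```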
CMC analyses were performed only on those muscles targeted by the vibrations, i.e. the lower leg ones (SOL, GL and TA). Coherence estimates were computed between the preprocessed EEG and sEMG signals (both between the cropped epochs and the concatenated ones), for a total of 360 couplings [EEG channels (60) × muscles (6)] per condition. Power spectral densities (PSD) of EEG and sEMG signals were estimated with Welch's averaged periodogram method, segmenting the epochs in Hamming windows of one second and zero overlap30. The CMC between the signal \(x\)—EEG and \(y\)—sEMG was then calculated as:
$${C}_{xy}\left(f\right)=\frac{{\left|{P}_{xy}\left(f\right)\right|}^{2}}{{P}_{xx}\left(f\right){P}_{yy}\left(f\right)},$$
where \({P}_{xy}\) is the cross-spectral density between the input signals for a given frequency \(f\), and \({P}_{xx}\) and \({P}_{yy}\) are the auto-spectral densities of \(x\) and \(y\), respectively. In total, 720 CMC vectors [EEG channels (60) × muscles (6) × condition (2)] were retained for statistical analyses.
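This magnitude-squared coherence with Welch's settings (1-s Hamming windows, zero overlap) maps directly onto a standard library call; the sketch below is illustrative and not the authors' Matlab code, and the array shapes in the comment are hypothetical.

```python
import numpy as np
from scipy.signal import coherence

def cmc_spectrum(eeg_ch, emg_ch, fs=1000):
    """Magnitude-squared coherence between one EEG channel and one rectified sEMG
    channel, using Welch's method with 1-s Hamming windows and no overlap."""
    f, cxy = coherence(eeg_ch, emg_ch, fs=fs,
                       window="hamming", nperseg=fs, noverlap=0)
    return f, cxy

# Example: collect all couplings for one condition into a (60, 6, n_freqs) array
# eeg: (60, n_samples), emg: (6, n_samples) -- hypothetical arrays
# cmc = np.stack([[cmc_spectrum(e, m)[1] for m in emg] for e in eeg])
```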
To quantify beta band activity changes between before and after the WBVs, the beta power, defined as the average area under the PSD curve in the 15–30 Hz range, was used as a summary statistic17. PSDs were estimated with Welch's averaged periodogram, using non-overlapping Hamming windows of one second, and were obtained for cropped and concatenated EEG epochs recorded from the sensors overlying the sensorimotor cortex (C3, C4, CP3 and CP4)50. For each subject, the area under the curve in the beta frequency range was averaged across the four channels and the two resulting values (\({BetaPower}_{BB}\) and \({BetaPower}_{PSB}\)) were retained for statistical analyses.
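A possible Python expression of this summary statistic is shown below; the indices of the C3/C4/CP3/CP4 sensors are assumed to be known from the montage.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def beta_power(eeg_ch, fs=1000, band=(15, 30)):
    """Beta-band power of one EEG channel: area under the Welch PSD in 15-30 Hz."""
    f, pxx = welch(eeg_ch, fs=fs, window="hamming", nperseg=fs, noverlap=0)
    mask = (f >= band[0]) & (f <= band[1])
    return trapezoid(pxx[mask], f[mask])

# Per subject: average over the S1 sensors to obtain BetaPower_BB / BetaPower_PSB
# beta_bb = np.mean([beta_power(eeg_bb[ch]) for ch in s1_channel_indices])
```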
IMC and muscle networks
sEMG signals from all muscles were instead used for connectivity analyses. IMC was estimated between all muscle pairs using the mscohere Matlab function. Forty-five combinations (\({C}_{n,k}\)) resulted from the formula:
$${C}_{n,k}=\frac{n!}{k!\left(n-k\right)!},$$
where \(n=10\) is the total number of muscles (nodes) and \(k=2\) is the number of muscles in each arrangement (pair). Magnitude-squared coherence values were computed over the sEMG power spectral density (PSD) using Welch's method, with a window length of 1 s and an overlap of 75%11. To break down the connectivity values into different frequency components and to make the coupling strength comparable between conditions, a non-negative matrix factorization (NNMF) algorithm was run on a \(N\times [M\times P\times C]\) (frequency bins × [muscle pairs × participants × conditions]) weighted undirected connectivity matrix. The nnmf Matlab function was used to decompose the coherence spectra (0–40 Hz) into two matrixes \({W}_{AllSubj}\) \([N\times K]\) and \({H}_{AllSubj}\) \([K\times [M\times P\times C]]\), where \(K\) equals 6 and is the number of frequency components in which the matrix was factorised. 5000 iterations were used. The two matrixes were then reshaped into subject-specific \(W\) \([N\times K]\) and \(H\) \([K\times M]\) for both conditions (\({W}_{BB} [42\times 6]\) and \({H}_{BB}[6\times 45]\); \({W}_{PSB} [42\times 6]\) and \({H}_{PSB}[6\times 45]\)). For every subject, \(H\) was reshaped into six weighted undirected connectivity matrixes \(C\), which yielded the connection strengths between nodes (muscles) in every frequency band. CBB and CPSB resulted in \(10\times 10\) connectivity matrixes. To conduct a static node-wise analysis, two graph measures were calculated for every node of the subject-specific CBB and CPSB of every frequency component: clustering coefficient (CC, reflecting segregation) and participation coefficient (PC, reflecting integration)71. The measures obtained from the same muscle were used for statistical analysis. To run a static edge-wise analysis, the connection strengths of the 45 muscle pairs yielded by CBB and CPSB were retained for statistical analysis.
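The sketch below illustrates the same pipeline in Python on a single recording; it is a simplification, since the study factorised a group-level matrix concatenating all subjects and conditions before reshaping it per subject. The NMF initialisation and the Onnela-type weighted clustering coefficient are choices of this sketch, not necessarily those of the original analysis.

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence
from sklearn.decomposition import NMF

def muscle_network_components(emg, fs=1000, n_comp=6, fmax=40):
    """emg: (10, n_samples) rectified sEMG. Returns the frequency profiles and,
    per component, a 10x10 adjacency matrix of pairwise coherence weights."""
    pairs = list(combinations(range(emg.shape[0]), 2))          # 45 muscle pairs
    spectra = []
    for i, j in pairs:
        f, cxy = coherence(emg[i], emg[j], fs=fs, window="hamming",
                           nperseg=fs, noverlap=int(0.75 * fs))
        keep = f <= fmax                                         # 0-40 Hz
        spectra.append(cxy[keep])
    A = np.array(spectra).T                                      # (n_freq_bins, 45)

    nmf = NMF(n_components=n_comp, init="nndsvda", max_iter=5000)
    W = nmf.fit_transform(A)                                     # frequency profiles (basis vectors)
    H = nmf.components_                                          # component weight per muscle pair

    adjacency = np.zeros((n_comp, emg.shape[0], emg.shape[0]))
    for c in range(n_comp):
        for (i, j), w in zip(pairs, H[c]):
            adjacency[c, i, j] = adjacency[c, j, i] = w
    return f[keep], W, adjacency

def clustering_coefficients(adj):
    """Weighted clustering coefficient (Onnela-type) of each node of one adjacency matrix."""
    w = adj / adj.max() if adj.max() > 0 else adj
    cube = np.cbrt(w)
    num = np.diag(cube @ cube @ cube)          # weighted triangles around each node
    k = (w > 0).sum(axis=0)                    # node degree
    denom = k * (k - 1)
    return np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)
```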
sEMG
A root mean square (RMS) value was obtained from the preprocessed sEMG epochs of those muscles targeted by the WBVs (SOL, GL and TA) before and after the stimulation. In total, 12 [muscle (6) × condition (2)] RMS values were obtained for each participant and retained for statistical analysis. To quantify the level of muscle fatigue induced by the stimulation, a frequency parameter was extracted from the filtered unrectified sEMG epochs. Specifically, the median frequency (MFBB–MFPSB) of the sEMG power spectrum—computed with the same specifics used for IMC estimation—was identified for each participant and for each muscle as the frequency that divides the spectrum in two parts of equal power49.
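These two summary measures are straightforward to express; the sketch below reuses the Welch settings stated for the IMC estimation and is not the authors' code.

```python
import numpy as np
from scipy.signal import welch

def emg_rms(emg_rectified):
    """Root mean square of a (rectified) sEMG epoch."""
    x = np.asarray(emg_rectified, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def emg_median_frequency(emg_unrectified, fs=1000):
    """Median frequency: the frequency splitting the Welch PSD into two halves of equal power."""
    f, pxx = welch(emg_unrectified, fs=fs, window="hamming",
                   nperseg=fs, noverlap=int(0.75 * fs))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2)]
```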
The cropped COP coordinates and the concatenated ones were analysed in Matlab®R2019a (The Mathworks, Inc., Natick, MA) with custom-made scripts. The mean of each signal was subtracted from anterior–posterior (AP) and medial–lateral (ML) COP coordinates, which were then low-pass filtered at 12.5 Hz using a 4th order Butterworth filter51. For each condition and each participant, as described in Ref.72, COP mean distance (MD) and COP mean velocity (MV) values were computed for both directions: MDAP, MDML, MVAP and MVML. The four stabilometric parameters were normalised by participant's height, weight and age by applying a simultaneous detrending normalisation73.
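A minimal sketch of MD and MV for one COP direction is given below; the anthropometric detrending normalisation of ref. 73 is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def cop_metrics(cop, fs=1000, fc=12.5):
    """cop: 1-D COP coordinate (AP or ML). Returns mean distance (MD) and mean velocity (MV)."""
    x = np.asarray(cop, dtype=float)
    x = x - x.mean()                                  # remove the mean position
    b, a = butter(4, fc, btype="low", fs=fs)
    x = filtfilt(b, a, x)                             # zero-phase 4th-order Butterworth low-pass
    md = np.mean(np.abs(x))                           # mean distance from the mean COP position
    mv = np.mean(np.abs(np.diff(x))) * fs             # mean velocity (path length / duration)
    return md, mv
```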
To compute the multiscale sample entropy (MSE), the preprocessed AP and ML time-series were further normalised by their standard deviation values51. MSE was obtained with a Matlab File Exchange function and the recommended default values of the pattern length and similarity criterion were used74,75. The complexity index (CI) was computed as the sum of the individual sample entropies obtained for every time scale in both directions—ML and AP—obtaining CIML and CIAP respectively52,76. For the long-term analyses—due to the dimension of the data—the MSE was not computed on the concatenated epochs, but on the individual epochs of one minute. The MSE values obtained from the four epochs preceding (and following) the WBVs were averaged to obtain \({MSE}_{BB}\) (and \({MSE}_{PSB}\)) that were used to compute the respective complexity index values.
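The following is a compact, not speed-optimised illustration of multiscale sample entropy and the complexity index. The pattern length m = 2 and tolerance r = 0.15·SD are typical defaults and assumed here, and the O(N²) template comparison is only practical for short or down-sampled series.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """Sample entropy with pattern length m and tolerance r (in units of the SD of x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def pair_count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(templ)) / 2            # matching pairs, excluding self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def complexity_index(x, max_scale=20, m=2, r=0.15):
    """Coarse-grain the series at scales 1..max_scale and sum the sample entropies (CI)."""
    x = np.asarray(x, dtype=float)
    x = x / x.std()                                           # normalise by the SD, as in the text
    mse = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = x[:n].reshape(-1, tau).mean(axis=1)          # coarse-grained series at scale tau
        mse.append(sample_entropy(coarse, m, r))
    return np.nansum(mse), np.array(mse)
```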
For every muscle, the 60 CMC vectors obtained from the BB trials were compared to the respective ones obtained from the PSB trials by means of a cluster-based permutation test77, which was carried out in Fieldtrip using 2000 permutations70. Comparisons were run in the beta frequency range and this procedure was applied to the CMC vectors obtained from both the cropped and the concatenated EEG and sEMG epochs.
A one-tailed Wilcoxon test was run to evaluate whether any significant difference was present in BBA in the somatosensory cortex between before and after the vibratory stimulation. The directionality of results from the permutation test was used to infer on the effect of WBVs on BBA.
Since we had no prior information on the effect of WBVs on muscular connectivity during balance, two-tailed Wilcoxon signed rank tests were used to test it. Tests were run for every muscle in every frequency component between the connectivity metric obtained from CBB and CPSB. For the node-wise analysis, 4 × 10 × 6 ([metrics × muscles × frequency components]) tests were performed on the connectivity metrics obtained from the balance results. For the edge-wise analysis, 45 × 6 ([muscle pairs × frequency components]) tests were run on the connection strengths computed from the balance trials, respectively. To correct for the multiple comparisons, the False Discovery Rate (FDR)78 procedure was applied for every metric (CC, PC, BC, STR, edge) across muscles, within each frequency band.
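The Wilcoxon tests and the Benjamini-Hochberg correction map directly onto standard library calls, as sketched below; the cluster-based permutation test (run in FieldTrip) is not reproduced here.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def paired_wilcoxon_fdr(before, after, alpha=0.05, alternative="two-sided"):
    """before, after: (n_subjects, n_tests) arrays of paired metrics
    (e.g. one column per muscle or edge within a frequency component).
    Runs one Wilcoxon signed-rank test per column and applies Benjamini-Hochberg FDR."""
    pvals = np.array([wilcoxon(before[:, i], after[:, i],
                               alternative=alternative).pvalue
                      for i in range(before.shape[1])])
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pvals, p_adj, reject
```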
To quantify the differences in muscle activations between the balance trials (before and after WBV stimulation), six two-tailed Wilcoxon signed rank test were run between RMSBB and RMSPSB, one for each muscle. To test the hypothesis that a greater level of muscle fatigue was induced by the WBVs, six one-tailed Wilcoxon signed tests were run between MFBB and MFPSB, one for each muscle.
Since we had no a priori expectation on the effect of WBVs on postural stability, a two-tailed Wilcoxon signed rank test was run for every normalised parameter (MDAP, MDML, MVAP and MVML). Since two parameters were obtained from each time-series (ML and AP), a Bonferroni correction was applied to the significance level that was therefore set at 0.025. A Spearman correlation coefficient was used to quantify the level of collinearity between the COP parameters and the anthropometrics.
Another two-tailed Wilcoxon signed rank test was used to test for significant differences between the complexity indexes measured in both directions before and after the mechanical vibrations.
The study was carried out according to the Declaration of Helsinki (2013) and was approved by the University Research Ethics Committee at Aston University (reference number: 1561).
Consent to participate
All participants provided informed consent before participating.
Since sharing data in an open-access repository was not included in our participants' consent and would therefore compromise our ethical standards, data are only available on request from the corresponding author.
The code used for the analyses of the data will be shared upon request to the corresponding author.
Magnus, R. The Croonian Lecture: Animal posture. Proc. R. Soc. B 98, 339–353 (1925).
Morasso, P. G. & Schieppati, M. Can muscle stiffness alone stabilize upright standing?. J. Neurophysiol. 82, 1622–1626 (1999).
Morasso, P. G. & Sanguineti, V. Ankle muscle stiffness alone cannot stabilize balance during quiet standing. J. Neurophysiol. 88, 2157–2162 (2002).
Horak, F. B. & Macpherson, J. M. 7. Postural orientation and equilibrium. In Handbook of Physiology, Exercise: Regulation and Integration of Multiple Systems (ed. Rowell, L. B. S. J.) 255–292 (Oxford University Press, 1996).
Takakusaki, K. Functional neuroanatomy for posture and gait control. J. Mov. Disord. 10, 1–17 (2017).
Ivanenko, Y. & Gurfinkel, V. S. Human postural control. Front. Neurosci. 12, 1–9 (2018).
Boonstra, T. W. The potential of corticomuscular and intermuscular coherence for research on human motor control. Frontiers (Boulder). 7, 907–915 (2013).
Sears, T. A. & Stagg, D. Short-term synchronization of intercostal motoneurone activity. J. Physiol. 263, 357–381 (1976).
Bernstein, N. The Co-ordination and Regulation of Movements (Pergamon, 1967).
Hassan, M. & Wendling, F. Electroencephalography source connectivity. IEEE Signal Process. Mag. https://doi.org/10.1109/MSP.2017.2777518 (2018).
Boonstra, T. W. et al. Muscle networks: Connectivity analysis of EMG activity during postural control. Sci. Rep. 5, 1–14 (2015).
Halliday, D. M. et al. A framework for the analysis of mixed time series/point process data-theory and application to the study of physiological tremor, single motor unit discharges and electromyograms. Prog. Biophys. Mol. Biol. 64, 237–278 (1995).
Witham, C. L., Riddle, C. N., Baker, M. R. & Baker, S. N. Contributions of descending and ascending pathways to corticomuscular coherence in humans. J. Physiol. 15, 3789–3800 (2011).
\begin{document}
\title{On value sets of fractional ideals}
\author{E. M. N. de Guzm\'an\footnote{Supported by a fellowship from CAPES} \and A. Hefez\footnote{Partially supported by the CNPq Grant 307873/2016-1} }
\date{ } \maketitle
\noindent{\bf Abstract} The aim of this work is to study duality of fractional ideals with respect to a fixed ideal and to investigate the relationship between value sets of pairs of dual ideals in admissible rings, a class of rings that contains the local rings of algebraic curves at singular points. We characterize canonical ideals by means of a symmetry relation between lengths of certain quotients of associated ideals to a pair of dual ideals. In particular, we extend the symmetry among absolute and relative maximals in the sets of values of pairs of dual fractional ideals to other kinds of maximal points. Our results generalize and complement previous ones by other authors.
\noindent Keywords: Singular points of curves, admissible rings, duality of fractional ideals, value sets of fractional ideals.
\noindent Mathematics Subject Classification: 13H10, 14H20 \section{Introduction}
Value sets or semigroups of rings of irreducible plane curves germs, called plane branches, were studied by Zariski in \cite{Za1} and their importance is due to the fact that they constitute, over $\mathbb C$, a complete set of discrete invariants for their topological classification. From the work of Ap\'ery \cite{Ap}, it follows that this semigroup in the set $\mathbb N$ of natural numbers is in some sense symmetric. Many years later, Kunz, in \cite{Ku}, motivated by a question asked by Zariski, showed that a one dimensional noetherian domain, with some additional technical conditions, is Gorenstein if and only if its semigroup of values is symmetric.
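For instance, consider the plane branch defined by $y^2-x^3=0$ (an ordinary cusp), whose complete local ring is $R=\mathbb C[[t^2,t^3]]$. Its semigroup of values is
\[
\langle 2,3\rangle=\{0,2,3,4,5,\ldots\}\subset \mathbb N,
\]
whose only gap is $1$, and one checks directly that an integer $z$ belongs to this semigroup if and only if $1-z$ does not; this is the symmetry alluded to above, in accordance with the fact that plane branches are Gorenstein.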
For a germ of a singular plane curve with several branches over $\mathbb C$, Waldi in \cite{W}, based on the work \cite{Za1} of Zariski, showed that also in this case the topological type of the germ is characterized by the semigroup of values of the local ring of the curve, this time, a semigroup of $\mathbb N^r$, where $r$ is the number of branches of the curve. Although not finitely generated, this semigroup was shown by Garcia in \cite{Ga}, for $r=2$, to be determined in a combinatorial way by a finite set of points that he called \emph{maximal points}. Garcia also showed that these maximal points of the semigroup of a plane curve have a certain symmetry. These results were generalized later, for any value of $r$, by Delgado in \cite{D87}, where two kinds of maximal points were emphasized: the relative and absolute maximals, showing that the relative maximals determine the semigroup of values in an inductive and combinatorial way and that the relative and absolute maximals determine each other, extending Garcia's symmetry. A short time later, Delgado, in \cite{D88}, generalizing the work of Kunz, introduced a concept of symmetry for value semigroups in $\mathbb N^r$ and showed that this symmetry is equivalent to the Gorensteiness of the ring of the curve.
In \cite{Da}, D'Anna, generalizing the work of J\"ager, in \cite{ja}, and of Campillo, Delgado and Kiyek, in \cite{CDK}, extended the properties of value semigroups for some class of one dimensional noetherian rings to value sets of their regular fractional ideals and characterized the normalized canonical ideal of a given ring in terms of a precisely described value set obtained from the value semigroup of the ring.
Also, recently, Pol, in the work \cite{Pol17}, extended Delgado's result by showing that the Gorensteiness of the ring of a singularity is equivalent to some symmetry relation among sets of values of any dual pair of regular fractional ideals, where duality is taken with respect to the ring itself, and also showed that in this case one has a pairing between absolute and relative maximal points of the value set of an ideal and that of its dual. In the work \cite{KST} (see also \cite{Pol18}), the authors show that this symmetry relation among value sets of dual pairs of ideals holds without any extra assumption on the ring, if one takes duality with respect to a canonical ideal.
In this paper, we generalize the work \cite{CDK} that characterizes the Gorensteiness of an admissible ring in terms of lengths of certain quotients of complementary ideals by establishing similar conditions, valid without the Gorenstein assumption, for pairs of dual regular fractional ideals with respect to any fixed fractional ideal. This will allow us to unify, generalize and complement previous results in \cite{Pol17}, \cite{Pol18} and \cite{KST} and get new symmetry relations among other types of maximal points, other than absolute and relative maximal points, in value sets of pairs of dual fractional ideals.
\section{Admissible rings, fractional ideals and value sets}
Following \cite[Definition 3.5]{KST}, a noetherian, Cohen-Macaulay, one dimensional local ring $(R,\mathfrak M)$ is called \emph{admissible} if it is analytically reduced, residually rational and $\# R/\mathfrak M \geq r$, where $r$ is the number of valuation rings over $R$ of the total ring of fractions $Q$ of $R$. In this context, all valuation rings are discrete valuation rings (cf. \cite[Theorem 3.1]{KST}).
This class of rings, without any special given name, was previously considered in \cite{CDK} and in \cite{Da}, generalizing the important family of local coordinate rings of reduced curves at a singular point.
Throughout this work we will assume that $R$ is an admissible ring. We denote by $\mathbb Z$ the set of integers and by $I$ the set $\{1,\ldots,r\}$. If $v_1,\ldots,v_r$ are the valuations associated to the discrete valuation rings of $Q$ over $R$, then we have a value map $v\colon Q^{reg} \to \mathbb Z^r$, where $Q^{reg}$ is the set of regular elements of $Q$, defined by $h\mapsto v(h)=(v_1(h),\ldots,v_r(h))$ (cf. \cite[Definition 3.2]{KST}). We will consider on $\mathbb Z^r$ the natural partial order $\leq$ induced by the order of $\mathbb Z$.
An $R$-submodule $\mathcal I$ of $Q$ will be called a \emph{fractional ideal} if there is a regular element $d$ in $R$ such that $d\,\mathcal I \subset R$. The ideal $\mathcal{I}$ will be said a \emph{regular fractional ideal}, if it contains a regular element of $Q$.
Examples of regular fractional ideals of $R$ are $R$ itself, the integral closure $\widetilde{R}$ of $R$ in $Q$, any ideal of $R$ or of $\widetilde{R}$ that contains a regular element and the ideals of the form $$\mathcal{J} \colon \mathcal{I}=\{x\in Q; \ x\mathcal{I} \subset \mathcal{J}\},$$ where $\mathcal I$ and $\mathcal J$ are regular fractional ideals. In particular, the conductor $\mathcal C=R\colon \widetilde{R}$, which is the largest common ideal of $R$ and $\widetilde{R}$, is a regular fractional ideal. Notice that $\mathcal{J}\colon R=\mathcal{J}$ for every fractional ideal $\mathcal{J}$.
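For instance, if $R=\mathbb C[[t^2,t^3]]$ is the complete local ring of an ordinary cusp, then $\widetilde{R}=\mathbb C[[t]]$ and a direct computation gives
\[
\mathcal C=R\colon \widetilde{R}=t^2\,\mathbb C[[t]],
\]
since $t^2\widetilde{R}\subset R$, while $t\notin R$, so that no larger ideal of $\widetilde{R}$ is contained in $R$.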
In the class of admissible rings one has that there is a natural isomorphism $\mathcal J \colon \mathcal I \simeq Hom_R(\mathcal I,\mathcal J)$ (cf. \cite[Lemma 2.4]{KST}), for any fractional ideals $\mathcal{I}$ and $\mathcal{J}$. It is always true that $\mathcal{I} \subseteq \mathcal{J}\colon(\mathcal{J}\colon \mathcal{I})$. The fractional ideal $\mathcal{J}$ is called a \emph{canonical ideal} if the last inclusion is an equality for every fractional ideal $\mathcal{I}$. In our context, canonical ideals exist (cf. \cite{Da} or \cite{KST}); two canonical ideals differ by multiplication by a unit of $Q$, and any multiple of a canonical ideal by a unit of $Q$ is again a canonical ideal. The ring $R$ will be called \emph{Gorenstein} if $R$ itself is a canonical ideal.
We define the \emph{value set} of a regular fractional ideal $\mathcal{I}$ of $R$ as being $$E(\mathcal{I})=v(\mathcal I^{reg})\subset \mathbb Z^r.$$
The value set $E(R)$ of $R$ is a subsemigroup of $\mathbb N^r$, called the \emph{semigroup of values} of $R$. The value set $E(\mathcal I)$ of a fractional ideal $\mathcal I$ is not necessarily closed under addition, but it is such that $E(R)+E(\mathcal I)\subset E(\mathcal I)$. For this reason $E(\mathcal{I})$ is called a \emph{semigroup ideal} of the semigroup $E(R)$. More generally, one has \begin{equation}\label{$E+E^*$} E(\mathcal{I})+E(\mathcal{J}\colon \mathcal{I})\subset E(\mathcal{J}). \end{equation}
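To illustrate, let again $R=\mathbb C[[t^2,t^3]]$, so that $r=1$ and $E(R)=\{0,2,3,4,\ldots\}$. A direct computation gives, for example,
\[
E(\widetilde{R})=\mathbb N \qquad \text{and} \qquad E(\mathcal C)=E(t^2\,\mathbb C[[t]])=\{2,3,4,\ldots\},
\]
and both are semigroup ideals of $E(R)$, although $E(\widetilde{R})$ is not contained in $E(R)$.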
A value set $E$ of a regular fractional ideal has the following fundamental properties (cf. \cite[Proposition 3.9]{KST}):
\noindent $E_0$: \ There are $\alpha\in \mathbb Z^r$ and $\beta \in \mathbb N^r$ such that $\beta+\mathbb N^r\subset E \subset \alpha+\mathbb N^r$;
\noindent $E_1$: \ If $\alpha=(\alpha_1,\ldots,\alpha_r)$ and $\beta=(\beta_1,\ldots,\beta_r)$ belong to $E$, then
$$\min(\alpha,\beta)=(\min(\alpha_1,\beta_1),\ldots,\min(\alpha_r,\beta_r))\in E;$$ \noindent $E_2$: \ If $\alpha=(\alpha_1,\ldots,\alpha_r), \beta=(\beta_1,\ldots,\beta_r)$ belong to $E$, $\alpha\neq\beta$ and $\alpha_i=\beta_i$ for some $i\in\{1,\ldots,r\}$, then there exists $\gamma\in E$ such that $\gamma_i>\alpha_i=\beta_i$ and $\gamma_j\geq \min\{\alpha_j,\beta_j\}$ for each $j\neq i$, with equality holding if $\alpha_j\neq\beta_j$.
Given a semigroup $S$ of $\mathbb Z^r$ and a subset $E$ of $\mathbb Z^r$, with $S+E\subset E$ and such that $S$ and $E$ have the above properties $E_0$, $E_1$ and $E_2$, then we call $S$ a \emph{good semigroup} and $E$ a \emph{good semigroup ideal} of $S$.
So, if $S=E(R)$ and $E=E(\mathcal{I})$, where $\mathcal{I}$ is a regular fractional ideal, then $E$ a good semigroup ideal of $S$.
For a good semigroup ideal $E$, combining Properties $E_0$ and $E_1$, it follows that there exists a unique $m=m_E=\min(E)$.
On the other side, one has that if $\beta, \beta'\in E$ are such that $\beta +\mathbb N^r \subset E$ and $\beta' +\mathbb N^r \subset E$, then $\min(\beta,\beta') +\mathbb N^r \subset E$. This guarantees that there is a unique least element $\gamma\in E$ with the property that $\gamma+\mathbb N^r \subset E$. This element is called the \emph{conductor} of $E$ and denoted by $c(E)$. In particular, when $E=E(\mathcal I)$ for a fractional ideal $\mathcal{I}$, then we write $c(\mathcal{I})$ for $c(E)$.
For a good semigroup ideal $E$, we will use the following notation: \[\mathfrak f(E) =c(E)-e, \quad \text{where} \ e=(1,\ldots,1),\] which is called the \emph{Frobenius vector} of $E$. For $J\subset I$, we define $e_J$ to be the vector such that $pr_{\{i\}}(e_J)=1$ if $i\in J$ and $pr_{\{i\}}(e_J)=0$ if $i\not\in J$, and we define $e_i=e_{\{i\}}$. If $E=E(\mathcal{I})$, we write $\mathfrak f(\mathcal{I})$ instead of $\mathfrak f(E)$.
Since completion and value sets of fractional ideals are compatible (cf. \cite[\S 1]{Da} or \cite[Theorem 3.19]{KST}), we may assume that $R$ is complete with respect to the $\mathfrak M$-adic topology. In this case, the number $r$ of discrete valuation rings of $Q$ over $R$ coincides with the number of minimal primes of $R$.
A fundamental notion in our context, which we define below, is that of a \emph{fiber} of an element $\alpha=(\alpha_1, \ldots, \alpha_r)\in E \subset \mathbb Z^r$ with respect to a subset $J=\{j_1<\cdots <j_s\}\subset I$. We define $pr_J(\alpha)=(\alpha_{j_1},\ldots,\alpha_{j_s})$.
Given $E\subset \mathbb Z^r$, $\alpha\in\mathbb{Z}^r$ and $\emptyset \neq J\subset I$, we define: \[ \begin{array}{lcl} F_J(E,\alpha)&=&\{\beta\in E;\; pr_J(\beta)=pr_J(\alpha) \ \text{and} \ \beta_i > \alpha_i, \forall i\in I\setminus J\}, \\
\overline{F}_J(E,\alpha)&=&\{\beta\in E;\; pr_J(\beta)=pr_J(\alpha), \ \text{and} \ \beta_i \geq \alpha_i, \forall i\in I\setminus J\}, \\ F(E,\alpha)&=& \bigcup_{i=1}^rF_i(E,\alpha), \quad \text{where} \ F_i(E,\alpha)=F_{\{i\}}(E,\alpha). \end{array} \]
The last set, above, will be called the \emph{fiber} of $\alpha$. Notice that $F_I(E,\alpha)=\{\alpha\}$, if and only if $\alpha\in E$, otherwise $F_I(E,\alpha)=\emptyset$.
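When $r=2$ these sets have a simple geometric meaning: for $\alpha\in\mathbb{Z}^2$,
\[
F_1(E,\alpha)=\{\beta\in E;\; \beta_1=\alpha_1 \ \text{and} \ \beta_2>\alpha_2\}, \qquad
F_2(E,\alpha)=\{\beta\in E;\; \beta_2=\alpha_2 \ \text{and} \ \beta_1>\alpha_1\},
\]
so the fiber $F(E,\alpha)$ collects the points of $E$ lying strictly above $\alpha$ on the vertical line through $\alpha$, together with those lying strictly to the right of $\alpha$ on the horizontal line through $\alpha$.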
The importance of this notion may be seen, for example, by the following result (cf. \cite{Da}): Up to a multiplicative unit in $\widetilde{R}$, there is a unique canonical ideal $\omega^0$ such that $R\subset \omega^0 \subset\widetilde{R}$ and $E(\omega^0)=E^0$, where \begin{equation}\label{valuecanonical0} E^0=\{\alpha \in \mathbb Z^r; \ F(E(R),\mathfrak f(R)-\alpha)=\emptyset\}. \end{equation} Notice that $\mathfrak f(\omega^0)=\mathfrak f(R)$ (cf. \cite[Lemma 5.10]{KST}).
Since any canonical ideal $\omega$ of $R$ is a multiple of $\omega^0$ by a unit $u$ in $Q$, the set $E(\omega)$ is a translation of $E(\omega^0)$ by $v(u)=\mathfrak f(\omega)-\mathfrak f(R)$. This leads to the following: \[ E(\omega)=\{\alpha \in \mathbb Z^r; \ F(E(R),\mathfrak f(\omega)-\alpha)=\emptyset\}. \]
A property related to the fibers of the Frobenius vector of a good semigroup ideal $E$ is that (cf. \cite[Lemma 4.1.10]{KST}): \begin{equation}\label{frobfiber} F(E,\mathfrak f(E))=\emptyset. \end{equation}
We will use later the following remark that follows readily from the definitions of fibers.
\begin{rem} \label{fibrafechada} If $E\subset \mathbb Z^r$, $\alpha\in \mathbb Z^r$, $J\subset I$ and $J^c=I\setminus J$, then one has \[ F_J(E,\alpha)= \bigcap_{i\in J} \overline{F}_i(E,\alpha+e_{J^c}) \] In particular, for $J=\{i\}$ one has that \[ F_i(E,\alpha)=\overline{F}_i(E,\alpha+e-e_i). \]\end{rem}
Another fundamental notion is that of maximal points of good semigroup ideals.
Let $E\subset \mathbb Z^r$ and $\alpha\in E$. We will say that $\alpha$ is a \emph{maximal} point of $E$ if $F(E,\alpha)=\emptyset$.
This means that there is no element in $E$ with one coordinate equal to the corresponding coordinate of $\alpha$ and the other ones bigger.
When $E$ is a good semigroup ideal, since it has a minimum $m_E$ and a conductor $\gamma=c(E)$, one has immediately that all maximal elements of $E$ lie in the bounded region \[ \{(x_1,\ldots,x_r)\in \mathbb Z^r; \ m_{E}\leq x_i < \gamma_i, \ \ i=1,\ldots,r\}. \]
This implies that $E$ has finitely many maximal points.
Next, we will describe some special types of maximal points that may occur in a good semigroup ideal $E$. For $\alpha\in E$, let $$\begin{array}{l} p(E,\alpha)=\max\{n; F_J(E,\alpha)=\emptyset,\forall J\subset I,\#J\leq n\}, \ \text{and}\\ \\ q(E,\alpha)=\min\{n; F_J(E,\alpha)\neq\emptyset,\forall J\subset I,\#J\geq n\}. \end{array}$$
Notice that $p(E,\alpha)<q(E,\alpha)$, and that $\alpha\in E$ if and only if $q(E,\alpha) \leq r$. Also, $\alpha\in E$ is a maximal point of $E$ if, and only if, $p(E,\alpha) \geq 1$.
Let $\alpha$ be a maximal point of $E$. We will call $\alpha$ an \emph{absolute maximal}, if $F_J(E,\alpha)=\emptyset$ for every $J\subset I$, $J\neq I$; that is, if and only if $p(E,\alpha)=r-1$. We will call $\alpha$ a \emph{relative maximal}, if $F_J(E,\alpha)\neq\emptyset$, for every $J\subset I$ with $\#J\geq2$; that is, $p(E,\alpha)=1$ and $q(E,\alpha)=2$. If $p=p(E,\alpha)\geq 1$ and $q=q(E,\alpha)\leq r$, we call $\alpha$ a \emph{maximal of type} $(p,q)$.
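As a simple example with $r=2$, take $R=\mathbb C[[x,y]]/(xy)$, the complete local ring of a node, that is, of two smooth branches meeting transversally. One checks directly that
\[
E(R)=\{(0,0)\}\cup\{(a,b)\in\mathbb N^2;\ a\geq 1 \ \text{and} \ b\geq 1\},
\]
so that $c(E(R))=(1,1)$ and $\mathfrak f(E(R))=(0,0)$. The only maximal point of $E(R)$ is $(0,0)$: indeed, $F_1(E(R),(0,0))=F_2(E(R),(0,0))=\emptyset$, while every $\alpha\in E(R)$ with $\alpha_1,\alpha_2\geq 1$ satisfies $\alpha+e_2\in F_1(E(R),\alpha)$. Since $r=2$, this maximal point has $p=1$ and $q=2$, so it is of type $(1,2)$, that is, it is simultaneously an absolute and a relative maximal.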
Delgado in \cite[Theorem 1.5]{D87} showed that $E(R)$ is determined recursively, in a combinatorial sense, by its set of relative maximals. With essentially the same proof, one may show the same for any good semigroup ideal $E$.
\section{Symmetry}
The central results in this section will be several generalizations of results in \cite{D88}, \cite{CDK}, \cite{Pol17}, \cite{Pol18} and \cite{KST}, which establish some symmetry among $E(\mathcal{J}\colon \mathcal{I})$ and $E(\mathcal{I})$ mediated by $E(\mathcal{J})$ and among their maximal points.
\begin{lema}\label{fibra} Let $\mathcal{I}$ and $\mathcal{J}$ be any fractional ideals of $R$, then one has \[
E(\mathcal{J}:\mathcal{I}) \subseteq \{\beta\in\mathbb{Z}^r; F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}. \] \end{lema} \noindent{\bf Proof\ \ } Let $\beta\in E(\mathcal{J}:\mathcal{I})$ and suppose that $F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)\neq\emptyset$. Then there exist $i\in I$ and $\alpha\in E(\mathcal{I})$ such that $\alpha_i=\mathfrak f(\mathcal{J})_i-\beta_i$ and $\alpha_j>\mathfrak f(\mathcal{J})_j-\beta_j$, for $j\neq i$, $j\in I$. From this it follows that $\alpha_i+\beta_i=\mathfrak f(\mathcal{J})_i$ and $\alpha_j+\beta_j>\mathfrak f(\mathcal{J})_j$, for all $i\neq j$ and since from (\ref{$E+E^*$}) we know that $\alpha+\beta\in E(\mathcal{J})$, we get that $\alpha+\beta\in F_i(E(\mathcal{J}),\mathfrak f(\mathcal{J}))$, which is a contradiction, because from (\ref{frobfiber}) we know that $F(E(\mathcal{J}),\mathfrak f(\mathcal{J}))=\emptyset$. \cqd
\begin{teo}\label{J:I} Let $\mathcal{I}$ and $\mathcal{J}$ be fractional regular ideals of $R$. The following are equivalent: \begin{enumerate}[\rm i)]
\item $\mathcal{J}$ is a canonical ideal;
\item $E(\mathcal{J}\colon\mathcal{I})=\{\beta\in\mathbb Z^r; F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}$, for all $\mathcal{I}$. \end{enumerate} \end{teo} \noindent{\bf Proof\ \ } i) $\Rightarrow$ ii) Since $\mathcal{J}$ is a canonical ideal, from \cite[Proposition 5.18]{KST} we know that $E(\mathcal{J})=\alpha_0+E(\omega^0)$, where $\alpha_0= \mathfrak f(\mathcal{J})-\mathfrak f(\omega^0)\in\mathbb Z^r$. From \cite[Theorem 5.27]{KST}, for any fractional ideal $\mathcal{I}$, we have $$\begin{array}{lcl} E(\mathcal{J}\colon\mathcal{I})&=&E(\mathcal{J})-E(\mathcal{I})\\ &=&(\alpha_0+E(\omega^{0}))-E(\mathcal{I}) \\ &=& \alpha_0+(E(\omega^{0})-E(\mathcal{I})) \\ &=&\alpha_0+\{\beta\in\mathbb Z^r, F(E(\mathcal{I}),\mathfrak f(\omega^{0})-\beta)=\emptyset\}\\ &=&\{\beta\in\mathbb Z^r;F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}, \end{array}$$ where the fourth equality follows from \cite[Lemma 5.16]{KST}.
\noindent ii) $\Rightarrow$ i) Suppose that $E(\mathcal{J}\colon\mathcal{I})=\{\beta\in\mathbb Z^r; F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}$, for all fractional ideal $\mathcal{I}$. In particular, for $\mathcal{I}=R$ we have that \begin{equation} \label{E(J)} E(\mathcal{J})=E(\mathcal{J}\colon R)=\{\beta\in\mathbb Z^r; F(E(R),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}. \end{equation}
We will show that $E(\mathcal{J})=\alpha+E(\omega^{0})$, for $\alpha= \mathfrak f(\mathcal{J})-\mathfrak f(\omega^{0})$, which will imply, by \cite[Proposition 5.18 and Theorem 5.25.]{KST}, that $\mathcal{J}$ is a canonical ideal.
Let $\beta\in E(\omega^{0})$, from (\ref{valuecanonical0}), we get $$ F(E(R),\mathfrak f(\mathcal{J})-(\alpha+\beta))= F(E(R),\mathfrak f(\omega^{0})-\beta)=\emptyset.
$$ Hence, from (\ref{E(J)}), $\alpha+\beta \in E(\mathcal{J})$, which yields $\alpha + E(\omega^{0})\subset E(\mathcal{J})$.
Let now $\beta\in E(\mathcal{J})$ and write $\beta=\alpha+\beta'$, with $\beta'\in\mathbb Z^r$. Since $\beta\in E(\mathcal{J})$, we have $$ \emptyset= F(E(R),\mathfrak f(\mathcal{J})- \beta)= F(E(R),\mathfrak f(\omega^{0})-\beta'),
$$ hence, from (\ref{valuecanonical0}), $\beta'\in E(\omega^{0})$; and therefore $E(\mathcal{J})\subset \alpha+E(\omega^{0})$, concluding the proof of our result.\cqd
As mentioned in the proof of the above theorem, the implication (i) $\Rightarrow$ (ii) was proved in the particular case in which $\mathcal{J}=\omega^0$ in \cite[Lemma 5.16 and Theorem 5.27]{KST} (see also \cite[Theorem 2.15]{Pol18}), while the converse is new.
This leads to the following result:
\begin{cor} \label{gor} The following are equivalent: \begin{enumerate}[\rm i)] \item $E(R\colon \mathcal{I})= \{ \beta\in \mathbb Z^r; \, F(E(\mathcal{I}),\mathfrak f(R)-\beta)=\emptyset\}$, for every fractional ideal $\mathcal{I}$;
\item $R$ is Gorenstein. \end{enumerate} \end{cor} \noindent{\bf Proof\ \ } If equality holds for every fractional ideal $\mathcal{I}$, then for $\mathcal{I}=R$ one has that \[ E(R)=E(R\colon R)=\{ \beta\in \mathbb Z^r; \, F(E(R),\mathfrak f(R)-\beta)=\emptyset\}, \] which says that $E(R)$ is symmetric, hence $R$ is Gorenstein (cf. \cite[Proposition 5.29]{KST}). Conversely, if $R$ is Gorenstein, then $R=\omega^0$ is a canonical ideal, and the result follows from Theorem \ref{J:I}. \cqd
For $\alpha\in \mathbb Z^r$ and $\mathcal{I}$ a fractional ideal of $R$, we define \[ \mathcal{I}(\alpha)=\{h\in \mathcal{I}^{reg}; \ v(h)\geq \alpha\}. \]
We denote by $\ell(M)$ the length of an $R$-module $M$. We have the following result.
\begin{lema}[{\rm \cite[Proposition 2.2]{Da} or \cite[Lemma 3.18]{KST}}] \label{lema} If $\alpha\in\mathbb{Z}^r$, then we have $$\ell \left(\dfrac{\mathcal{I}(\alpha)}{\mathcal{I}(\alpha+e_i)}\right)=\left\{ \begin{array}{ll} 1, & \ if \ \overline{F}_i(E(\mathcal{I}),\alpha)\neq \emptyset, \\ \\ 0, & \ \text{otherwise}, \end{array} \right.$$ \end{lema}
The following theorem generalizes \cite[Theorem 3.6]{CDK}.
\begin{teo}\label{l_E}
Let $\mathcal{J}$ and $\mathcal{I}$ be fractional ideals of $R$ and let $\alpha,\beta\in\mathbb{Z}^r$, with $\alpha+\beta=c(\mathcal{J})$. Then
\begin{equation} \label{leq1}
\ell\left(\frac{\mathcal{I}(\alpha)}{\mathcal{I}(\alpha+e_i)}\right)+\ell\left(\frac{(\mathcal{J}\colon \mathcal{I})(\beta-e_i)}{(\mathcal{J}\colon \mathcal{I})(\beta)}\right)\leq1, \ \ \text{for every} \ i\in I ,\end{equation}
with equality holding for every $\alpha,\beta$ such that $\alpha+\beta=c(\mathcal{J})$ and for every fractional ideal $\mathcal{I}$ if and only if $\mathcal{J}$ is a canonical ideal.
\end{teo} \noindent{\bf Proof\ \ } Since by Lemma \ref{lema} each summand in (\ref{leq1}) is less than or equal to $1$, it is sufficient to show that they are not both equal to $1$.
Suppose by reductio ad absurdum that both summands in (\ref{leq1}) are equal to $1$. From Lemma \ref{lema}, it follows that $\overline{F}_i(E(\mathcal{I}),\alpha)\neq\emptyset$ and $\overline{F}_i(E(\mathcal{J}:\mathcal{I}),\beta-e_i)\neq \emptyset$.
Take $\theta$ in the first of the above two sets and $\theta'$ in the second one, then according to (\ref{$E+E^*$}) we have $\theta+\theta'\in E(\mathcal{J})$; even more, we have that $\theta+\theta'\in F_i(E(\mathcal{J}),\mathfrak f(\mathcal{J}))$, because $\theta_i+\theta'_i=\mathfrak f(\mathcal{J})_i$ and $\theta_j+\theta'_j>\mathfrak f(\mathcal{J})_j$ for all $j\neq i$, which is a contradiction, since $F(E(\mathcal{J}),\mathfrak f(\mathcal{J}))=\emptyset$.
Assuming that the equality holds in (\ref{leq1}), we will show that $\mathcal{J}$ is a canonical ideal.
Notice that, in view of Lemma \ref{lema}, equality in (\ref{leq1}) is equivalent to \begin{equation}\label{equiv}
\overline{F}_i(E(\mathcal{I}),\alpha) = \emptyset \ \Longleftrightarrow \ \overline{F}_i(E(\mathcal{J}:\mathcal{I}),\beta-e_i) \neq \emptyset, \ \forall i\in I. \end{equation}
From the inclusion in Lemma \ref{fibra} we know that \begin{equation}\label{a} E(\mathcal{J}\colon \mathcal{I}) \subset \{\beta\in \mathbb Z^r; \ F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset\}. \end{equation}
On the other hand, suppose that $\beta$ is such that $F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset$, hence for all $i\in I$, $F_i(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)=\emptyset$. Then, from Remark \ref{fibrafechada}, it follows that $\overline{F}_i(E(\mathcal{I}),\mathfrak f(\mathcal{J})+e-\beta-e_i)=\emptyset$, for all $i\in I$. Now, from (\ref{equiv}), we get $\overline{F}_i(E(\mathcal{J}\colon \mathcal{I}),\beta)\neq \emptyset$, for all $i\in I$, which implies that $\beta\in E(\mathcal{J}\colon \mathcal{I})$.
Hence, we have shown that the inclusion in (\ref{a}) is an equality, therefore, from Theorem \ref{J:I}, we have that $\mathcal{J}$ is a canonical ideal.
Let us assume conversely that $\mathcal{J}$ is a canonical ideal. From Theorem \ref{J:I} we know that \[ \beta\notin E(\mathcal{J}\colon \mathcal{I}) \ \Leftrightarrow \ F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)\neq \emptyset. \]
To conclude the proof of this part of the theorem, it is clearly enough to show that \[ \forall \ i\in I, \ \ \overline{F}_i(E(\mathcal{J}\colon \mathcal{I}),\beta-e_i)=\emptyset \ \Longrightarrow \ \overline{F}_i(E(\mathcal{I}),\alpha)\neq \emptyset. \]
Suppose that $\overline{F}_i(E(\mathcal{J}\colon \mathcal{I}),\beta-e_i)=\emptyset$, for some $i$. This implies that $\beta-e_i\notin E(\mathcal{J}\colon \mathcal{I})$, so, from Theorem \ref{J:I} we get $F(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta +e_i)\neq \emptyset$. Take now $\theta \in F_i(E(\mathcal{I}),\mathfrak f(\mathcal{J})-\beta +e_i)$, for some $i\in I$. So, $\theta_i=\mathfrak f(\mathcal{J})_i-\beta_i +1=c(\mathcal{J})_i-\beta_i=\alpha_i$ and $\theta_j\geq \mathfrak f(\mathcal{J})_j-\beta_j+1=c(\mathcal{J})_j-\beta_j=\alpha_j$, for all $j\neq i$. So, we have that $\theta\in \overline{F}_i(E(\mathcal{I}),\alpha)$, hence this set is nonempty. \cqd
We have the following consequence of the above theorem.
\begin{cor}\label{dimGo} For every fractional ideal $\mathcal{I}$ of $R$ and every $\alpha,\beta\in \mathbb Z^r$, with $\alpha+\beta=c(R)$, one has \[ \ell\left(\frac{\mathcal{I}(\alpha)}{\mathcal{I}(\alpha+e_i)}\right)+\ell\left(\frac{(R\colon \mathcal{I})(\beta-e_i)}{(R\colon \mathcal{I})(\beta)}\right) =1 \ \text{for every} \ i\in I \] if and only if $R$ is Gorenstein. \end{cor}
Let $\mathcal{I}$ and $\mathcal{J}$ be fractional ideals of $R$ and let $\alpha\in \mathbb Z^r$. Let us define \[ \rho_{\mathcal{J}}(\mathcal{I},\alpha)=p(E(\mathcal{I}),\alpha)+q(E(\mathcal{J}\colon \mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)-1. \]
The following theorem generalizes \cite[Theorem 5.3]{CDK}. Recall that we defined $J^c$ as being the complement in $I$ of any subset $J$ of $I$.
\begin{teo}\label{p+q}
For any fractional ideals $\mathcal{I}$ and $\mathcal{J}$ of $R$ and for any $\alpha\in\mathbb{Z}^r$, we have \begin{equation}\label{p,q} \rho_{\mathcal{J}}(\mathcal{I},\alpha) \geq r. \end{equation} Moreover, equality holds in (\ref{p,q}), for every fractional ideal $\mathcal{I}$ of $R$ and every $\alpha\in\mathbb{Z}^r$ if, and only if, $\mathcal{J}$ is a canonical ideal. \end{teo} \noindent{\bf Proof\ \ } Suppose that $q(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)=r-n+1$, from the definition of $q$, we know that \begin{equation}\label{K} F_{K}(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)\neq\emptyset, \ \forall K \subset I, \ \#K \geq r-n+1. \end{equation}
If we take any $J\subset I$ with $\#J\leq n$ and let $K=J^c\cup\{i\}$, where $i\in J$ is fixed, then $\#K\geq r-n+1$. Hence from (\ref{K}) we get easily that $$\overline{F}_{K}(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha+e_{K^c})\neq\emptyset. $$
Since $e_{K^c}=e-e_{K}=e-(e_{J^c}+e_i)$, we get that $$ \overline{F}_{K}(E(\mathcal{J}\colon\mathcal{I}),c(\mathcal{J})-\alpha-e_{J^c}-e_i)\neq\emptyset.$$
This implies that $$\overline{F}_{i}(E(\mathcal{J}\colon\mathcal{I}),c(\mathcal{J})-\alpha-e_{J^c}-e_i)\neq\emptyset, $$ that, from Theorem \ref{l_E} and Lemma \ref{lema}, implies that $$\overline{F}_{i}(E(\mathcal{I}),\alpha+e_{J^c})=\emptyset, \ \forall i\in J,$$ which in view of Remark \ref{fibrafechada}, implies that $F_J(E(\mathcal{I}),\alpha)=\emptyset$.
So, we have shown, for any $J\subset I$, with $\#J\leq n$, that $F_J(E(\mathcal{I}),\alpha)=\emptyset$. Hence it follows that $p(E(\mathcal{I}),\alpha)\geq n$, and consequently, $\rho_{\mathcal{J}}(\mathcal{I},\alpha)\geq r$.
Now, suppose that $\mathcal{J}$ is a canonical ideal. To prove equality in (\ref{p,q}) holds, we must show that $p(E(\mathcal{I}),\alpha)=n$. Suppose by reductio ad absurdum that $p(E(\mathcal{I}),\alpha)\geq n+1$. Then, from the definition of $p$, we have that $$F_J(E(\mathcal{I}),\alpha)=\emptyset, \ \forall J\subset I, \ \text{with} \ \#J=n+1, $$
which implies that
\begin{equation}\label{J^c}
\overline{F}_i(E(\mathcal{I}),\alpha+e_{J^c})=\emptyset, \ \forall i\in J, \ \text{with}\ \#J\leq n+1,
\end{equation}
because, otherwise, we would have for some $i\in J$ that $\overline{F}_i(E(\mathcal{I}),\alpha+e_{J^c})\neq \emptyset$. Take $\theta$ in this last nonempty set; then $\theta_i=\alpha_i$, $\theta_j \geq \alpha_j$, $\forall j\in J$ and $\theta_l>\alpha_l$, $\forall l\not\in J$. Let $J'$ be the subset of elements $j\in J$ such that $\theta_j =\alpha_j$, hence $\theta \in F_{J'}(E(\mathcal{I}),\alpha)$, which implies that $F_{J'}(E(\mathcal{I}),\alpha)\neq \emptyset$, with $\#J' \leq n+1$, contradicting the fact that $p(E(\mathcal{I}),\alpha)\geq n+1$.
For any $K\subset I$, with $\#K=r-n$, define the set $J=K^c\cup \{i\}$, where $i\in K$. Since $J$ has $n+1$ elements, it follows from (\ref{J^c}) that
$$\overline{F}_i(E(\mathcal{I}),\alpha+e_{J^c})=\emptyset, \ \forall i\in J.$$
Since, $\mathcal{J}$ is a canonical ideal, from Theorem \ref{l_E} and Lemma \ref{lema}, it follows that
$$\overline{F}_i(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha+e_{K^c})\neq\emptyset,$$
and since, $i$ was any element of $K$, we have that
$$\overline{F}_i(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha+e_{K^c})\neq\emptyset,\ \forall K\subset I, \ \#K=r-n, \forall i\in K.$$
For every $i\in K$, take $\theta^i\in \overline{F}_i(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha+e_{K^c})$, then $\theta^i_i=\mathfrak f(\mathcal{J})_i-\alpha_i$. $\theta^i_k\geq \mathfrak f(\mathcal{J})_k-\alpha_k$, for $k\in K$ and $\theta^i_j>\mathfrak f(\mathcal{J})_j-\alpha_j$, for $j\not\in K$. If we take $\theta=\min\{\theta^i; \ i\in K\}$, it follows that $\theta \in F_K(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)$, hence \[ F_K(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)\neq \emptyset, \ \forall K\subset I, \ \#K=r-n, \] which contradicts the fact that $q(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)=r-n+1$. Therefore, we must have $p(E(\mathcal{I}),\alpha)=n$.
Now, assume that we have equality in (\ref{p,q}). Let $\mathcal{I}$ be a fractional ideal of $R$ and let $\alpha\in\mathbb{Z}^r$. If $\overline{F}_i(E(\mathcal{I}),\alpha)=\emptyset$, for some $i\in I$, then, from \cite[Lemma 4.7]{CDK}, there exists $\beta$ with $\beta_i=\alpha_i$ and $\beta_j<\alpha_j$ for every $j\neq i$, such that $F(E(\mathcal{I}),\beta)=\emptyset$. From this last condition, we get that $p(E(\mathcal{I}),\beta)\geq1$, so, from the equality in (\ref{p,q}), we get that $q(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)\leq r$, which means that $\mathfrak f(\mathcal{J})-\beta\in E(\mathcal{J}\colon\mathcal{I})$. This implies that $\overline{F}_i(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\beta)\neq\emptyset$. Now, since $\mathfrak f(\mathcal{J})_i-\beta_i=\mathfrak f(\mathcal{J})_i-\alpha_i$ and $\mathfrak f(\mathcal{J})_j-\beta_j> \mathfrak f(\mathcal{J})_j-\alpha_j$, it follows that $\emptyset \neq \overline{F}_i(E(\mathcal{J}\colon\mathcal{I}), \mathfrak f(\mathcal{J})-\beta) \subset \overline{F}_i(E(\mathcal{J}\colon\mathcal{I}), \mathfrak f(\mathcal{J})-\alpha)$, hence this last set is nonempty. So, we proved that equality holds in (\ref{leq1}), which, by Theorem \ref{l_E}, implies that $\mathcal{J}$ is a canonical ideal. \cqd
This leads immediately to the following result:
\begin{cor} The following two conditions are equivalent:
\noindent i) $\rho_R(\mathcal{I},\alpha) = r$, for all fractional ideal $\mathcal{I}$ and all $\alpha\in \mathbb Z^r$;
\noindent ii) $R$ is Gorenstein. \end{cor}
The following result will generalize \cite[Theorem 2.10]{D87}.
\begin{teo}[\textbf{Symmetry of maximals}]\label{ed3} Let $\mathcal{I}$ and $\mathcal{J}$ be fractional ideals of $R$. Suppose that $\alpha\in E(\mathcal{I})$ and $\mathfrak f(\mathcal{J})-\alpha\in E(\mathcal{J}\colon \mathcal{I})$. Then $\alpha$ is maximal of $E(\mathcal{I})$ if and only if $\mathfrak f(\mathcal{J})-\alpha$ is maximal of $E(\mathcal{J}\colon \mathcal{I})$. Moreover, if $\alpha$ is a maximal of type $(p,q)=(p(E(\mathcal{I}),\alpha),q(E(\mathcal{I}),\alpha))$ then $\mathfrak f(\mathcal{J})-\alpha$ is maximal of type $(p',q')$, where
$p'=\rho_\mathcal{J}(\mathcal{J}\colon(\mathcal{J}\colon \mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)+1-q(E(\mathcal{J}\colon(\mathcal{J}\colon \mathcal{I})),\alpha), \ \text{and}$
$q'=\rho_\mathcal{J}(\mathcal{I},\alpha)+1-p(E(\mathcal{I}),\alpha).$ \end{teo} \noindent{\bf Proof\ \ } Suppose that $\alpha$ is a maximal of $E(\mathcal{I})$, then $p(E(\mathcal{I}),\alpha)\geq1$. From Theorem \ref{p+q} we have \[\rho_{\mathcal{J}}(\mathcal{J}\colon\mathcal{I},\mathfrak f(\mathcal{J})-\alpha)=p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)+q(E(\mathcal{J}\colon(\mathcal{J}\colon\mathcal{I})),\alpha)-1\geq r.\]
Since $\mathcal{I}\subset \mathcal{J}\colon(\mathcal{J}\colon\mathcal{I})$, by the definition of the number $q$ we have \[q(E(\mathcal{J}\colon(\mathcal{J}\colon\mathcal{I})),\alpha)\leq q(E(\mathcal{I}),\alpha), \ \text{for any} \ \alpha\in\mathbb Z^r.\]
Hence, $$ \begin{array}{lcr} r&\leq &p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)+q(E(\mathcal{J}\colon(\mathcal{J}\colon\mathcal{I})),\alpha)-1 \\
&\leq & p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)+q(E(\mathcal{I}),\alpha)-1,
\end{array}$$ so $p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)\geq1$ and, since $\mathfrak f(\mathcal{J})-\alpha\in E(\mathcal{J}\colon \mathcal{I})$, it follows that $\mathfrak f(\mathcal{J})-\alpha$ is a maximal of $E(\mathcal{J}\colon \mathcal{I})$.
The proof of the converse of this statement is completely analogous.
Furthermore, since $p'=p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)$, we have \[ p'=p(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)=\rho_\mathcal{J}(\mathcal{J}\colon(\mathcal{J}\colon \mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)+1-q(E(\mathcal{J}\colon(\mathcal{J}\colon \mathcal{I})),\alpha); \] and since, for $\alpha\in E(\mathcal{I})$, we have \[ \rho_{\mathcal{J}}(\mathcal{I},\alpha)=p(E(\mathcal{I}),\alpha)+q(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha)-1, \] then \[q'=q(E(\mathcal{J}\colon\mathcal{I}),\mathfrak f(\mathcal{J})-\alpha) =\rho_{\mathcal{J}}(\mathcal{I},\alpha)+1-p(E(\mathcal{I}),\alpha). \] \cqd
If $\mathcal{J}$ is a canonical ideal, we do not have to assume that both $\alpha\in E(\mathcal{I})$ and $\mathfrak f(\mathcal{J})-\alpha\in E(\mathcal{J}\colon \mathcal{I})$, since in this case, $$\alpha \ \text{is maximal of } \ E(\mathcal{I}) \ \Longleftrightarrow \ \mathfrak f(\mathcal{J})-\alpha \ \text{is maximal of} \ E(\mathcal{J}\colon \mathcal{I}).$$ Also, $\mathcal{J}\colon(\mathcal{J}\colon \mathcal{I})=\mathcal{I}$ and $\rho_{\mathcal{J}}(\mathcal{I},\alpha)=r$. Taking this into account, we get the following result:
\begin{cor} Suppose that $\mathcal{J}$ is a canonical ideal, then one has that $\alpha$ is maximal of $E(\mathcal{I})$ if, and only if, $\mathfrak f(\mathcal{J})-\alpha$ is maximal of $E(\mathcal{J}\colon \mathcal{I})$. Moreover, $\alpha$ is a maximal of type $(p,q)$ if and only if $\mathfrak f(\mathcal{J})-\alpha$ is a maximal of type $(r+1-q,r+1-p)$. \end{cor}
\end{document} | arXiv |
\begin{document}
\title{On the Number of Cholesky Roots of the Zero Matrix over $\mathbb{F}_2$} \author{Hays Whitlatch} \maketitle \begin{abstract}
A square, upper-triangular matrix $U$ is a Cholesky root of a matrix $M$ provided $U^*U=M$, where $^*$ represents the conjugate transpose. Over finite fields, as well as over the reals, it suffices that $U^TU=M$. In this paper, we investigate the number of such factorizations over the finite field with two elements, $\mathbb{F}_2$, and prove the existence of a rank-preserving bijection between the set of Cholesky roots of the zero matrix and the set of upper-triangular square roots of the zero matrix. \end{abstract}
\section{Introduction}
In this paper we will discuss enumerating distinct Cholesky factorizations of a (symmetric) matrix with entries in $\mathbb{F}_2$. We will give a count, by size and rank, of the Cholesky factorizations of the $\mathbb{F}_2$-zero matrix, as well as prove that there is a rank-preserving bijection between the set of these factorizations and the set of upper-triangular square roots of the zero matrix. Before doing so, we will briefly discuss Cholesky factorizations over the real and complex fields. Let $M$ be a matrix with complex (or real) entries. We say $M$ has a Cholesky factorization if it can be expressed as the product of a lower triangular matrix $L$ and its conjugate transpose $L^*$. Observe that in this case $M^*=(LL^*)^*=(L^*)^*L^*=LL^*=M$, so $M$ must be square and equal to its own conjugate transpose ($M$ is Hermitian). Over the reals, this just implies that $M$ must be symmetric. Let $n\geq 1$ and let $0^\mathbb{C}_n$ and $I^\mathbb{C}_n$ denote the complex valued $n\times n$ zero and identity matrices (that is, the additive and multiplicative identities), respectively. Suppose $LL^*$ gives a Cholesky factorization of $0^\mathbb{C}_n$. Then for all $1\leq i,j\leq n$ the dot product of the $i^\textrm{th}$ row of $L$ and the $j^{\textrm{th}}$ column of $L^*$ must equal zero. However, the $j^{\textrm{th}}$ column of $L^*$ is simply the complex conjugate of the $j^{\textrm{th}}$ row of $L$, so $$\sum_{k=1}^{n}L[i,k]L[j,k]^*=0 \quad \textrm{ for all }1\leq i,j\leq n.$$ Taking $i=j$ shows that each row of $L$ vanishes, so $L=0^\mathbb{C}_n$, and it follows that $0^\mathbb{C}_n$ has a unique Cholesky factorization. Similarly, the identity matrix and every other Hermitian positive-definite matrix have unique Cholesky factorizations if we insist the diagonal entries be non-negative. This uniqueness is lost for Hermitian positive-semidefinite matrices as well as over finite fields. In this paper we investigate the non-uniqueness of Cholesky factorizations over $\mathbb{F}_2$.
\section{Cholesky Roots over $\mathbb{F}_2$} \begin{definition} Let $M$ be a $n\times n$ symmetric matrix with entries in $\mathbb{F}_2$. We say $M$ has a \emph{Cholesky decomposition} if there exists a lower-triangular matrix $L$ such that $LL^T=M$ or equivalently if there exists an upper-triangular matrix $U$ such that $U^TU=M$. In such case, we call $U$ a \emph{Cholesky root of $M$ }. \end{definition}
For every positive integer $n$, we let $I_n$ and $0_n$ denote the $n\times n$ identity and zero matrices over $\mathbb{F}_2$, respectively. For $r \leq n$, we let $\mathcal{U}_n(r)$ be the set of $n\times n$, rank $r$, upper-triangular matrices with entries from $\mathbb{F}_2$. For $n\geq 1$ and $r\leq n$ we define
$$\mathcal{A}_n(r) = \{U\in \mathcal{U}_n(r)\mid U^2=I_n \}\quad \textrm{and}\quad \mathcal{A}_n=\bigcup_{0\leq r\leq n} \mathcal{A}_n(r);$$
$$\mathcal{B}_n(r) = \{U\in \mathcal{U}_n(r)\mid U^2=0_n\} \quad \textrm{and}\quad \mathcal{B}_n=\bigcup_{0\leq r\leq n} \mathcal{B}_n(r);$$
$$\mathcal{C}_n(r) = \{U\in \mathcal{U}_n(r)\mid U^TU=0_n\}\quad \textrm{and}\quad \mathcal{C}_n=\bigcup_{0\leq r\leq n} \mathcal{C}_n(r)$$
\begin{observation} For all $n\geq 1$: $$\vert\mathcal{A}_n\vert=\vert\mathcal{B}_n\vert $$ \end{observation} \begin{proof} Observe that, over $\mathbb{F}_2$, every $n\times n$ upper-triangular matrix $X$ satisfies $(X+I_n)^2=X^2+2X+I_n=X^2+I_n$. Hence $X^2=0_n$ if and only if $(X+I_n)^2=I_n$, so the involution $X\mapsto X+I_n$ of the set of upper-triangular matrices restricts to a bijection between $\mathcal{B}_n$ and $\mathcal{A}_n$. \end{proof}
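For instance, for $n=2$ a direct computation gives
$$\mathcal{A}_2=\left\{\begin{bmatrix} 1&0\\0&1 \end{bmatrix},\begin{bmatrix} 1&1\\0&1 \end{bmatrix}\right\},\qquad \mathcal{B}_2=\left\{\begin{bmatrix} 0&0\\0&0 \end{bmatrix},\begin{bmatrix} 0&1\\0&0 \end{bmatrix}\right\},\qquad \mathcal{C}_2=\left\{\begin{bmatrix} 0&0\\0&0 \end{bmatrix},\begin{bmatrix} 0&1\\0&1 \end{bmatrix}\right\}.$$
Here $X\mapsto X+I_2$ carries $\mathcal{B}_2$ onto $\mathcal{A}_2$, and $\vert\mathcal{B}_2(r)\vert=\vert\mathcal{C}_2(r)\vert$ for $r\in\{0,1\}$, even though the nonzero element of $\mathcal{C}_2$ is not itself a square root of $0_2$; the next theorem shows that such a rank-preserving agreement holds for every $n$.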
\begin{theorem} For all $n\geq 1$ and $r\leq n$: $$\vert\mathcal{B}_n(r)\vert=\vert\mathcal{C}_n(r)\vert $$ \end{theorem} \begin{proof} Observe that $$\mathcal{B}_1=\mathcal{B}_1(0)=\left\{\begin{bmatrix} 0 \end{bmatrix}\right\}=\mathcal{C}_1(0)=\mathcal{C}_1.$$ We proceed by induction. Let $n>1$ and assume that $\vert\mathcal{B}_{n-1}(r)\vert=\vert\mathcal{C}_{n-1}(r)\vert$ for all $r\leq n-1$. Choose and fix a rank $r$, $n\times n$ upper-triangular matrix $B$. Observe that by Sylvester's rank inequality $\mathcal{B}_n(n)=\mathcal{C}_n(n)=\emptyset$, so we may proceed with the assumption that $r<n$. Let $B'$ be the $n-1\times n-1$ principal submatrix of $B$.
$$B^2= \left[ \begin{array}{c|c} B' & \mathbf{v} \\ \hline \mathbf{0}^T & b \\
\end{array} \right]\left[ \begin{array}{c|c} B' & \mathbf{v} \\ \hline \mathbf{0}^T & b \\
\end{array} \right]=\left[ \begin{array}{c|c} B'^2 & B'\mathbf{v} + b\mathbf{v} \\ \hline \mathbf{0}^T & b^2 \\ \end{array} \right].$$ Then $B \in \mathcal{B}_{n}$ if and only if $b=0$ and $B'\mathbf{v}=\mathbf{0}$ and
$B'\in \mathcal{B}_{n-1}$. However $B'\mathbf{v}=\mathbf{0}$ if and only if $v\in Null(B')$, the null space of $B'$. If $B'\in \mathcal{B}_{n-1}$ then the column space of $B'$, $Col(B')$, must be a subset of $Null(B')$. It follows that if $B\in \mathcal{B}_n$ then $v\in Col(B')$ or $v\in Null(B')\setminus Col(B')$. Hence, for each $r$: $$|\mathcal{B}_{n}(r)|=|\mathcal{B}_{n-1}(r)|\cdot 2^{r}+|\mathcal{B}_{n-1}(r-1)|\cdot \left(2^{\dim(Null(B'))}-2^{r-1}\right)$$
$$|\mathcal{B}_{n}(r)|=|\mathcal{B}_{n-1}(r)|\cdot 2^{r}+|\mathcal{B}_{n-1}(r-1)|\cdot\left(2^{n-r}-2^{r-1}\right)$$\newline
Choose and fix a rank $r$, $n\times n$ upper-triangular matrix $C$. Let $C'$ be the $n-1\times n-1$ principal submatrix of $C$.
$$C^TC= \left[ \begin{array}{c|c} C'^T & \mathbf{0} \\ \hline \mathbf{w}^T & c \\
\end{array} \right]\left[ \begin{array}{c|c} C' & \mathbf{w} \\ \hline \mathbf{0}^T & c \\
\end{array} \right]=\left[ \begin{array}{c|c} C'^TC' & C'^T\mathbf{w} \\ \hline \mathbf{w}^TC' & \mathbf{w}^T\mathbf{w}+c^2 \\ \end{array} \right].$$ $C \in \mathcal{C}_{n}$ if and only if $\mathbf{w}^T\mathbf{w}+c^2=0$ and $\mathbf{w}^TC'=\mathbf{0}$ and $C'\in \mathcal{C}_{n-1}$. Equivalently $C \in \mathcal{C}_{n}$ if and only if $C'\in \mathcal{C}_{n-1}$ and
$$\overline{C}\overline{\mathbf{w}}=\left[ \begin{array}{c|c} C'^T & \mathbf{0} \\ \hline \mathbf{1} & 1 \\ \end{array} \right]\left[ \begin{array}{c} \mathbf{w}\\ \hline c \\ \end{array} \right]=\mathbf{0}.$$ This occurs if and only if $\overline{w}\in Null(\overline{C})$. Observe that if $c=0$ then $\overline{w}\in Null(\overline{C})$ exactly when $w\in Row(C')\cap Null(\overline{C})$ or $w\in Null(\overline{C})\setminus Row(C')$. On the other hand if $c=1$ then $\overline{w}\in Null(\overline{C})$ exactly when $w\in Null(\overline{C})\setminus Row(C')$ .
It follows that for each $r$: $$|\mathcal{C}_{n}(r)|=|\mathcal{C}_{n-1}(r)|\cdot 2^{r}+|\mathcal{C}_{n-1}(r-1)|\cdot \left(2^{\dim(Null(\overline{C}))}-2^{r-1}\right)$$
$$|\mathcal{C}_{n}(r)|=|\mathcal{C}_{n-1}(r)|\cdot 2^{r}+|\mathcal{C}_{n-1}(r-1)|\cdot \left(2^{n-r}-2^{r-1}\right)$$
\end{proof} We also record the following observation. \begin{corollary} For all $n\geq 1$, $\mathcal{B}_n(r)=\mathcal{C}_n(r)=\emptyset$ whenever $r> n/2$. \end{corollary} \begin{proof} If $B\in \mathcal{B}_n(r)$ then $B^2=0_n$, so $Col(B)\subseteq Null(B)$ and hence $r\leq \dim(Null(B))=n-r$. That is, $r\leq n/2$. The claim for $\mathcal{C}_n(r)$ then follows from the theorem. \end{proof} In \cite{ekhad1996number} the authors give a count for the number of upper-triangular matrices over $\mathbb{F}_q$ whose square is the zero matrix. By restricting to $q=2$ we have the following result. \begin{theorem}(Theorem 1 of \cite{ekhad1996number}) \begin{eqnarray} \vert \mathcal{B}_{2n}\vert &=& \sum\limits_{j}\left[ \binom{2n}{n-3j}-\binom{2n}{n-3j-1}\right]2^{n^2-3j^2-j}\nonumber \\ \vert \mathcal{B}_{2n+1}\vert&=&\sum\limits_{j}\left[ \binom{2n+1}{n-3j}-\binom{2n+1}{n-3j-1}\right]2^{n^2+n-3j^2-2j}\nonumber \end{eqnarray} \end{theorem}
\begin{corollary} For all $n>0$
$$|\mathcal{A}_n|=|\mathcal{B}_n|=|\mathcal{C}_n|= \sum\limits_{j}\left[ \binom{n}{\lfloor \frac{n}{2}\rfloor-3j}-\binom{n}{\lfloor \frac{n}{2}\rfloor-3j-1}\right]2^{\lfloor \frac{n}{2}\rfloor\lceil \frac{n}{2}\rceil-3j^2-(\lceil \frac{n}{2}\rceil-\lfloor \frac{n}{2}\rfloor+1)j}$$ \end{corollary}
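As a small sanity check, take $n=3$ and use the recursion obtained in the proof of the theorem above together with $\vert\mathcal{B}_2(0)\vert=\vert\mathcal{B}_2(1)\vert=1$. This gives
$$\vert\mathcal{B}_3(0)\vert=1,\qquad \vert\mathcal{B}_3(1)\vert=1\cdot 2^{1}+1\cdot\left(2^{2}-2^{0}\right)=5,\qquad \vert\mathcal{B}_3(2)\vert=0,$$
so $\vert\mathcal{B}_3\vert=6$. The closed formula above returns the same value, since for $n=3$ the only nonzero terms are $j=0$ and $j=-1$, contributing $\left[\binom{3}{1}-\binom{3}{0}\right]2^{2}=8$ and $\left[\binom{3}{4}-\binom{3}{3}\right]2^{1}=-2$, respectively.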
Given an $n\times n$ matrix $M$ with entries in $\mathbb{F}_2$, we let $M_k$ denote the $k^{\textrm{th}}$ leading principal submatrix of $M$, $1\leq k\leq n$. We say $M$ is in leading principal non-singular (LPN) form if $$\det(M_k)=\begin{cases}1,& \textrm{if $k\leq$ rank$(M)$}\\ 0,& \textrm{if rank$(M)<k\leq n$}\\ \end{cases}$$ It was shown in \cite{cooper2016successful} that if $M$ is a full-rank, symmetric matrix with entries in $\mathbb{F}_2$ then $M=U^TU$ for some upper-triangular matrix $U$ if and only if $M$ is in LPN form. Furthermore, this Cholesky decomposition is unique. In \cite{cooper2017uniquely} it was demonstrated that uniqueness fails when $M$ is not full-rank; however, if $M$ is in LPN form then there exists a natural choice determined by the pressing instructions of a graph that has $M$ as its adjacency matrix.
\begin{corollary} Let $A\in {\mathbb{F}_2}^{n\times n}$ be a matrix of rank $r$ in LPN form. The number of distinct Cholesky factorizations of $A$ is $\vert\mathcal{C}_{n-r}\vert$. \end{corollary} \begin{proof} Let $A_{1,1}$ be the leading principal $r\times r$ submatrix of $A$ and suppose
$B^TB=A$ is a Cholesky factorization of $A$. Then
$$B^TB=\left[ \begin{array}{c|c} B_{1,1}^T & 0 \\ \hline B_{1,2}^T & B_{2,2}^T \\
\end{array} \right]\left[ \begin{array}{c|c} B_{1,1} & B_{1,2} \\ \hline 0 & B_{2,2} \\
\end{array} \right]=\left[ \begin{array}{c|c} B_{1,1}^TB_{1,1} & B_{1,1}^TB_{1,2} \\ \hline B_{1,2}^TB_{1,1} & B_{1,2}^TB_{1,2}+B_{2,2}^TB_{2,2} \\ \end{array} \right]$$ where $B_{1,1}$ is an $r\times r$ matrix. However, \cite{cooper2017uniquely} demonstrated that $A$ has an (instructional) Cholesky decomposition of the form
$$V^TV=\left[ \begin{array}{c|c} V_{1,1}^T & 0 \\ \hline V_{1,2}^T & 0 \\
\end{array} \right]\left[ \begin{array}{c|c} V_{1,1} & V_{1,2} \\ \hline 0 & 0 \\
\end{array} \right]=\left[ \begin{array}{c|c} V_{1,1}^TV_{1,1} & V_{1,1}^TV_{1,2} \\ \hline V_{1,2}^TV_{1,1} & V_{1,2}^TV_{1,2} \\ \end{array} \right]$$ Since $A_{1,1}$ is a full-rank matrix it has a unique Cholesky decomposition over $\mathbb{F}_2$ (see proof in \cite{cooper2016successful}). That is, $B_{1,1}=V_{1,1}$. Then by invertibility we have $B_{1,2}=\left(V_{1,1}^T\right)^{-1}B_{1,1}^TB_{1,2}=\left(V_{1,1}^T\right)^{-1}V_{1,1}^TV_{1,2}=V_{1,2}$ and hence $$V_{1,2}^TV_{1,2}=B_{1,2}^TB_{1,2}+B_{2,2}^TB_{2,2}\Rightarrow B_{2,2}^TB_{2,2}=0.$$ Conversely, with $B_{1,1}=V_{1,1}$ and $B_{1,2}=V_{1,2}$ fixed as above, every choice of $B_{2,2}\in\mathcal{C}_{n-r}$ yields $B^TB=A$. Hence the Cholesky roots of $A$ correspond exactly to the elements of $\mathcal{C}_{n-r}$, and their number is $\vert\mathcal{C}_{n-r}\vert$. \end{proof}
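For a concrete illustration, consider the rank one matrix
$$A=\begin{bmatrix} 1&1\\1&1 \end{bmatrix},$$
which is in LPN form since $\det(A_1)=1$ and $\det(A_2)=0$. Here $r=1$, and indeed $A$ has exactly $\vert\mathcal{C}_{1}\vert=1$ Cholesky root, namely $U=\begin{bmatrix} 1&1\\0&0 \end{bmatrix}$. At the other extreme, $A=0_n$ has rank $0$ and the corollary returns $\vert\mathcal{C}_{n}\vert$, consistent with the definition of $\mathcal{C}_n$.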
\section{Future Work} We have seen that, in the special case that a square matrix over $\mathbb{F}_2$ is in LPN form, the number of Cholesky decompositions it admits is dictated by the count discussed in this paper (with the size parameter given by the corank of the matrix). This fails to be the case when a matrix is not in LPN form. For example $$\left[\begin{array}{cc}0&0\\0&1\\\end{array}\right]^T\left[\begin{array}{cc}0&0\\0&1\\\end{array}\right]=\left[\begin{array}{cc}0&0\\0&1\\\end{array}\right]=\left[\begin{array}{cc}0&1\\0&0\\\end{array}\right]^T\left[\begin{array}{cc}0&1\\0&0\\\end{array}\right]$$ but the number of Cholesky roots of the $1\times 1$ zero matrix (over $\mathbb{F}_2$) is $1$.
Another topic of interest would be to study the asymptotic behavior of $$\vert\mathcal{C}_n\vert=\sum\limits_{j}\left[ \binom{n}{\lfloor \frac{n}{2}\rfloor-3j}-\binom{n}{\lfloor \frac{n}{2}\rfloor-3j-1}\right]2^{\lfloor \frac{n}{2}\rfloor\lceil \frac{n}{2}\rceil-3j^2-(\lceil \frac{n}{2}\rceil-\lfloor \frac{n}{2}\rfloor+1)j}$$ Observe that the summand is zero unless $-\frac{n+3}{6}\leq j\leq \frac{n}{6}$. When $j=\lceil-\frac{n+3}{6}\rceil$ or $j=\lfloor\frac{n}{6}\rfloor$ the summand yields $2^{O(n^2)}$. At $j=0$ the summand yields $\left(1+o(1)\right)\frac{2}{n\sqrt{\pi n}}(2e)^{n/2+o(1)}2^{n^2/4}$. This however cannot be used as a lower bound since many of the terms in the summand can be negative.
Finally, it is worth mentioning that the bijection between the three sets (the Cholesky roots of zero, the upper-triangular roots of zero, and the upper-triangular roots of the identity) does not extend to other finite fields (in part because $(X+I)^2=X^2+I$ is unique to $\mathbb{F}_2$). It follows that to count the number of Cholesky roots of a zero matrix over other finite fields one would need different techniques than the ones used in this paper, nevertheless it would be an interesting continuation of this work.
\end{document} | arXiv |
\begin{document}
\title[Invertibility of the Gabor frame operator on certain modulation spaces]
{A note on the invertibility of the Gabor frame operator on certain modulation spaces}
\author[D.G. Lee]{Dae Gwan Lee} \address{{\bf D.G.~Lee:} KU Eichst\"att--Ingolstadt, Mathematisch--Geographische Fakult\"at, Os\-ten\-stra\-\ss{}e~26, Kollegiengeb\"aude I Bau B, 85072 Eichst\"att, Germany} \email{[email protected]}
\author[F. Philipp]{Friedrich Philipp} \address{{\bf F.~Philipp:} Technische Universit\"at Ilmenau, Institute for Mathematics, Weimarer Stra\ss e 25, D-98693 Ilmenau, Germany} \email{[email protected]}
\author[F. Voigtlaender]{Felix Voigtlaender} \address{{\bf F.~Voigtlaender:} Department of Mathematics, Technical University of Munich, 85748 Garching bei München, Germany} \email{[email protected]}
\begin{abstract}
We consider Gabor frames generated by a general lattice and a window function
that belongs to one of the following spaces:
the Sobolev space $V_1 = H^1(\ensuremath{\mathbb R}^d)$,
the weighted $L^2$-space $V_2 = L_{1 + |x|}^2(\ensuremath{\mathbb R}^d)$,
and the space $V_3 = \mathbb{H}^1(\ensuremath{\mathbb R}^d) = V_1 \cap V_2$
consisting of all functions with finite uncertainty product;
all these spaces can be described as modulation spaces with respect to suitable weighted
$L^2$ spaces.
In all cases, we prove that the space of Bessel vectors in $V_j$ is mapped bijectively onto itself
by the Gabor frame operator.
As a consequence, if the window function belongs to one of the three spaces,
then the canonical dual window also belongs to the same space.
In fact, the result not only applies to frames, but also to frame sequences.
\end{abstract}
\subjclass[2010]{Primary: 42C15. Secondary: 42C40, 46E35, 46B15.} \keywords{Gabor frames, Sobolev space, Invariance, Dual frame, Regularity of dual window.}
\maketitle \thispagestyle{empty}
\section{Introduction}
Analyzing the time-frequency localization of functions is an important topic in harmonic analysis.
Quantitative results on this localization are usually formulated in terms of function spaces such as Sobolev spaces, modulation spaces, or Wiener amalgam spaces.
An especially important space is the \emph{Feichtinger algebra} $S_0 = M^1$ \cite{FeichtingerNewSegalAlgebra,JakobsenNoLongerNewSegalAlgebra} which has numerous remarkable properties; see, e.g., \cite[Section A.6]{ChristensenBook} for a compact overview. Yet, in some cases it is preferable to work with more classical spaces like the Sobolev space $H^1 (\ensuremath{\mathbb R}^d) = W^{1,2}(\ensuremath{\mathbb R}^d)$,
the weighted $L^2$-space $L_{1 + |x|}^2(\ensuremath{\mathbb R}^d) = \{ f : \ensuremath{\mathbb R}^d \to \ensuremath{\mathbb C} : (1 + |x|) f(x) \in L^2 \}$,
or the space $\mathbb{H}^1(\ensuremath{\mathbb R}^d) = H^1 (\ensuremath{\mathbb R}^d) \cap L_{1 + |x|}^2(\ensuremath{\mathbb R}^d)$ which consists of all functions $g \in L^2(\ensuremath{\mathbb R}^d)$ with finite uncertainty product \begin{equation}\label{eqn:FUP}
\left(
\int_{\ensuremath{\mathbb R}^d}
|x|^2
\cdot |g(x)|^2
\, dx
\right)
\left(
\int_{\ensuremath{\mathbb R}^d}
|\omega|^2
\cdot |\widehat g(\omega)|^2
\,d \omega
\right)
< \infty . \end{equation} Certainly, one advantage of these classical spaces is that membership of a function in the space can be decided easily. We remark that all of these spaces fall into the scale of modulation spaces (see Section~\ref{s:Invariance}).
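For instance, in dimension $d = 1$ the $L^2$-normalized Gaussian $g(x) = 2^{1/4} e^{-\pi x^2}$ satisfies $\widehat g = g$ and $\int_{\ensuremath{\mathbb R}} x^2 |g(x)|^2 \, dx = \int_{\ensuremath{\mathbb R}} \omega^2 |\widehat g(\omega)|^2 \, d\omega = \frac{1}{4\pi}$, so that its uncertainty product \eqref{eqn:FUP} equals $\frac{1}{16\pi^2}$; in particular, $g \in \mathbb{H}^1(\ensuremath{\mathbb R})$.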
In Gabor analysis, it is known (see e.g., \cite[Proposition~5.2.1]{GroechenigTFFoundations} and \cite[Theorem~12.3.2]{ChristensenBook}) that for a Gabor frame generated by a lattice, the canonical dual frame is again a Gabor system (over the same lattice), generated by the so-called \emph{dual window}. An important question is what kind of time-frequency localization conditions are inherited by the dual window. Precisely, if $g \in L^2(\ensuremath{\mathbb R}^d)$ belongs to a certain ``localization Banach space'' $V$ and if $\Lambda \subset \ensuremath{\mathbb R}^{2d}$ is such that $(g,\Lambda)$ forms a Gabor frame for $L^2(\ensuremath{\mathbb R}^d)$, then does the canonical dual window belong to $V$ as well?
A celebrated result in time-frequency analysis states that this is true for the Feichtinger algebra $V = S_0(\ensuremath{\mathbb R}^d)$; see \cite{GroechenigLeinert} for separable lattices $\Lambda$ and \cite[Theorem~7]{BalanDensityOvercompletenessLocalization} for irregular sets $\Lambda$.
In the case of separable lattices, the question has been answered affirmatively also for the Schwartz space $V = \calS(\ensuremath{\mathbb R})$ \cite[Proposition~5.5]{j} and for the Wiener amalgam space $V = W(L^{\infty},\ell_v^1)$ with a so-called \emph{admissible weight} $v$; see \cite{ko}. Similarly, the setting of the spaces $V = W(C_\alpha,\ell_v^q)$ (with the H\"{o}lder spaces $C_\alpha$) is studied in \cite{w}---but except in the case $q = 1$, some additional assumptions on the window function $g$ are imposed.
To the best of our knowledge, the question has not been answered for modulation spaces other than
$V = M^1_v$, and in particular, not for any of the spaces $V = H^1(\ensuremath{\mathbb R}^d)$, $V = L_{1 + |x|}^2(\ensuremath{\mathbb R}^d)$, and $V = \mathbb{H}^1(\ensuremath{\mathbb R}^d)$.
In this note, we show that the answer is affirmative for all of these spaces:
\begin{thm}\label{t:main}
Let $V \in \{ H^1(\ensuremath{\mathbb R}^d), L_{1 + |x|}^2(\ensuremath{\mathbb R}^d), \mathbb{H}^1(\ensuremath{\mathbb R}^d) \}$.
Let $g \in V$ and let $\Lambda \subset \ensuremath{\mathbb R}^{2d}$ be a lattice such that
the Gabor system $(g, \Lambda)$ is a frame for $L^2(\ensuremath{\mathbb R}^d)$ with frame operator $S$.
Then the canonical dual window $S^{-1} g$ belongs to $V$.
Furthermore, $(S^{-1/2} g, \Lambda)$ is a Parseval frame for $L^2(\ensuremath{\mathbb R}^d)$ with $S^{-1/2} g \in V$. \end{thm}
As mentioned above, the corresponding statement of Theorem~\ref{t:main} for $V = S_0(\ensuremath{\mathbb R}^d)$ with separable lattices $\Lambda$ was proved in \cite{GroechenigLeinert}. In addition to several deeper insights, the proof given in \cite{GroechenigLeinert} relies on a simple but essential argument showing that the frame operator $S = S_{\Lambda,g}$ maps $V$ boundedly into itself, which is shown in \cite{GroechenigLeinert} based on Janssen's representation of $S_{\Lambda,g}$. In our setting, this argument is not applicable, because---unlike in the case of $V = S_0(\ensuremath{\mathbb R}^d)$--- there exist functions $g \in \mathbb{H}^1$ for which $(g,\Lambda)$ is not an $L^2$-Bessel system.
In addition, the series in Janssen's representation is not even guaranteed to converge unconditionally in the strong sense for $\mathbb{H}^1$-functions, even if $(g,\Lambda)$ is an $L^2$-Bessel system; see Proposition~\ref{prop:MainResult}. To bypass these obstacles, we introduce for each space $V \in \{ H^1, L_{1 + |x|}^2, \mathbb{H}^1 \}$ the associated subspace $V_\Lambda$ consisting of all those functions $g \in V$ that generate a Bessel system over the given lattice $\Lambda$.
We remark that most of the existing works concerning the regularity of the (canonical) dual window rely on deep results related to Wiener's $1/f$-lemma on absolutely convergent Fourier series. In contrast, our methods are based on elementary spectral theory (see \Cref{s:spectra}) and on certain observations regarding the interaction of the Gabor frame operator with partial derivatives; see Proposition~\ref{p:bounded}.
The paper is organized as follows: Section~\ref{s:BesselVectors} discusses the concept of Gabor Bessel vectors and introduces some related notions. Then, in Section~\ref{s:Invariance}, we endow the space $V_\Lambda$
(for each choice $V \in \{ H^1, L_{1 + |x|}^2, \mathbb{H}^1 \}$) with a Banach space norm and show that the frame operator $S$ maps $V_\Lambda$ boundedly into itself, provided that the Gabor system $(g,\Lambda)$ is an $L^2$-Bessel system and that the window function $g$ belongs to $V$.
Finally, we prove in Section~\ref{s:spectra} that for any $V \in \{ H^1, L_{1 + |x|}^2, \mathbb{H}^1 \}$ the spectrum of $S$ as an operator on $V$ coincides with the spectrum of $S$ as an operator on $L^2$. This easily implies our main result, Theorem~\ref{t:main}.
\section{Bessel vectors} \label{s:BesselVectors}
For $a,b\in\ensuremath{\mathbb R}^d$ and $f\in L^2(\ensuremath{\mathbb R}^d)$ we define the operators of translation by $a$ and modulation by $b$ as \[
T_a f(x) := f(x-a)
\quad \text{and} \quad
M_b f(x) := e^{2\pi ib\cdot x} \cdot f(x), \] respectively. Both $T_a$ and $M_b$ are unitary operators on $L^2(\ensuremath{\mathbb R}^d)$ and hence so is the {\em time-frequency shift} \[
\pi(a,b)
:= T_a M_b
= e^{-2\pi ia\cdot b} \, M_b T_a . \] The Fourier transform $\calF$ is defined on $L^1(\ensuremath{\mathbb R}^d) \cap L^2(\ensuremath{\mathbb R}^d)$ by $\mathcal{F} f (\xi) = \widehat{f}(\xi) = \int_{\ensuremath{\mathbb R}^d} f(x) e^{-2 \pi i x \cdot \xi} \, d x$ and extended to a unitary operator on $L^2(\ensuremath{\mathbb R}^d)$.
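As a purely numerical illustration (not needed for any of the arguments below), the identity $T_a M_b = e^{-2\pi i a\cdot b} \, M_b T_a$ can be checked on a sampled one-dimensional Gaussian. The following Python sketch realizes $T_a$ by a circular index shift on an equispaced grid, so it implicitly assumes that $a$ is an integer multiple of the grid spacing; all names in the snippet are ad hoc.
\begin{verbatim}
import numpy as np

x = np.linspace(-10, 10, 2001)       # equispaced grid with spacing 0.01
f = np.exp(-np.pi * x ** 2)          # Gaussian window, essentially 0 at the boundary
a, b = 0.37, 1.25                    # a is an integer multiple of the grid spacing

def translate(g, a):                 # (T_a g)(x) = g(x - a), realized by an index shift
    return np.roll(g, int(round(a / (x[1] - x[0]))))

def modulate(g, b):                  # (M_b g)(x) = exp(2 pi i b x) g(x)
    return np.exp(2j * np.pi * b * x) * g

lhs = translate(modulate(f, b), a)                                # T_a M_b f
rhs = np.exp(-2j * np.pi * a * b) * modulate(translate(f, a), b)  # e^{-2 pi i ab} M_b T_a f
print(np.max(np.abs(lhs - rhs)))     # of the order of machine precision
\end{verbatim}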
For $z = (z_1, z_2) \in \ensuremath{\mathbb R}^d \times \ensuremath{\mathbb R}^d \cong \ensuremath{\mathbb R}^{2d}$ and $f \in L^2(\ensuremath{\mathbb R}^d)$, a direct calculation shows that \begin{equation}\label{e:FTpi}
\calF [\pi(z)f] = e^{-2\pi iz_1\cdot z_2}\cdot\pi(Jz)\widehat f, \end{equation} where \[
J = \mat 0{I}{-I}0. \]
A (full rank) {\em lattice} in $\ensuremath{\mathbb R}^{2d}$ is a set of the form $\Lambda = A\ensuremath{\mathbb Z}^{2d}$, where $A\in\ensuremath{\mathbb R}^{2d\times 2d}$ is invertible. The volume of $\Lambda$ is defined by $\operatorname{Vol}(\Lambda) := |\!\det A|$ and its density by $d(\Lambda) := \operatorname{Vol}(\Lambda)^{-1}$. The {\em adjoint lattice} of $\Lambda$ is denoted and defined by $\Lambda^\circ := JA^{-{\!\top}}\ensuremath{\mathbb Z}^{2d}$.
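For illustration, consider $d = 1$ and the separable lattice $\Lambda = \alpha\ensuremath{\mathbb Z}\times\beta\ensuremath{\mathbb Z}$ with $\alpha, \beta > 0$, generated by $A = \operatorname{diag}(\alpha,\beta)$. Then $\operatorname{Vol}(\Lambda) = \alpha\beta$, $d(\Lambda) = (\alpha\beta)^{-1}$, and $\Lambda^\circ = JA^{-{\!\top}}\ensuremath{\mathbb Z}^{2} = \tfrac{1}{\beta}\ensuremath{\mathbb Z}\times\tfrac{1}{\alpha}\ensuremath{\mathbb Z}$.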
The Gabor system generated by a window function $g\in L^2(\ensuremath{\mathbb R}^d)$ and a lattice $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ is given by \[
(g,\Lambda) := \bigl\{ \pi(\lambda)g : \lambda\in\Lambda \bigr\}. \] We say that $g\in L^2(\ensuremath{\mathbb R}^d)$ is a {\em Bessel vector} with respect to $\Lambda$ if the system $(g,\Lambda)$ is a Bessel system in $L^2(\ensuremath{\mathbb R}^d)$, meaning that the associated {\em analysis operator} $C_{\Lambda,g}$ defined by
\begin{equation}\label{eq:CoefficientOperator}
C_{\Lambda,g} f := \big(\<f,\pi(\lambda)g \rangle\big)_{\lambda\in\Lambda},
\qquad f \in L^2(\ensuremath{\mathbb R}^d) , \end{equation} is a bounded operator from $L^2(\ensuremath{\mathbb R}^d)$ to $\ell^2(\Lambda)$. We define \[
\calB_\Lambda
:= \big\{g\in L^2(\ensuremath{\mathbb R}^d) : (g,\Lambda)\text{ is a Bessel system}\big\} , \]
which is a dense linear subspace of $L^2(\ensuremath{\mathbb R}^d)$ because each Schwartz function is a Bessel vector with respect to any lattice; see \cite[Theorem~3.3.1]{fz}. It is well-known that $\calB_\Lambda = \calB_{\Lambda^\circ}$ (see, e.g., \cite[Proposition~3.5.10]{fz}). In fact, we have for $g \in \calB_\Lambda$ that \begin{equation}\label{e:Cnorms}
\big\| C_{\Lambda^\circ,g} \big\|
= \operatorname{Vol}(\Lambda)^{1/2} \cdot \big\| C_{\Lambda,g} \big\| ;
\end{equation} see \mbox{\cite[proof of Theorem~2.3.1]{k}}.
The {\em cross frame operator} $S_{\Lambda,g,h}$ with respect to $\Lambda$ and two functions $g,h\in\calB_\Lambda$ is defined by \[
S_{\Lambda,g,h} := C_{\Lambda,h}^*C_{\Lambda,g}. \] In particular, we write $S_{\Lambda,g} := S_{\Lambda,g,g}$ which is called the {\em frame operator} of $(g,\Lambda)$.
The system $(g,\Lambda)$ is called a \emph{frame} if $S_{\Lambda,g}$ is bounded and boundedly invertible on $L^2(\ensuremath{\mathbb R}^d)$, that is, if $A \operatorname{Id}_{L^2(\ensuremath{\mathbb R}^d)} \leq S_{\Lambda,g} \leq B \operatorname{Id}_{L^2(\ensuremath{\mathbb R}^d)}$ for some constants $0 < A \leq B < \infty$ (called the frame bounds). In particular, a frame with frame bounds $A=B=1$ is called a \emph{Parseval frame}.
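For example, if $d = 1$, $g = \chi_{[0,1]}$, and $\Lambda = \ensuremath{\mathbb Z}\times\ensuremath{\mathbb Z}$, then $(g,\Lambda)$ is an orthonormal basis of $L^2(\ensuremath{\mathbb R})$ and hence a Parseval frame, with $S_{\Lambda,g} = \operatorname{Id}_{L^2(\ensuremath{\mathbb R})}$.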
In our proofs, the so-called {\em fundamental identity of Gabor analysis} will play an essential role. This identity states that \begin{equation}\label{e:fi}
\sum_{\lambda\in\Lambda}\<f,\pi(\lambda)g\rangle\langle\pi(\lambda)\gamma,h\rangle
= d(\Lambda) \cdot \sum_{\mu\in\Lambda^\circ}
\langle\gamma,\pi(\mu)g\rangle
\langle\pi(\mu)f,h\rangle. \end{equation} It holds, for example, if $f, h \in M^1(\ensuremath{\mathbb R}^d) = S_0(\ensuremath{\mathbb R}^d)$ (the Feichtinger algebra) and $g,\gamma\in L^2(\ensuremath{\mathbb R}^d)$; see \mbox{\cite[Theorem~3.5.11]{fz}}. We will use the following version of the fundamental identity:
\begin{lem}
The fundamental identity \eqref{e:fi} holds if $g,h\in\calB_\Lambda$ or $f,\gamma\in\calB_\Lambda$. \end{lem}
\begin{proof} In \cite[Subsection~1.4.1]{j2}, the claim is shown for separable lattices in $\ensuremath{\mathbb R}^2$. Here, we provide a short proof for the general case. If $g,h \in \calB_{\Lambda}$, then Equation~\eqref{e:Cnorms} shows that both sides of Equation~\eqref{e:fi} depend continuously on $f,\gamma \in L^2$. Similarly, if $f,\gamma \in \calB_{\Lambda}$ then both sides of Equation~\eqref{e:fi} depend continuously on $g,h \in L^2$. Therefore, and because $\calB_\Lambda$ is dense in $L^2(\ensuremath{\mathbb R}^d)$, it is no restriction to assume that $f, g, h, \gamma \in \calB_\Lambda$. Let $\Lambda = A\ensuremath{\mathbb Z}^{2d}$ and define the function \[
G(x)
= \sum_{n\in\ensuremath{\mathbb Z}^{2d}}
\big\<f,\pi(A(n-x))g\big\rangle
\big\langle\pi(A(n-x))\gamma,h\big\rangle ,
\quad x\in\ensuremath{\mathbb R}^{2d}. \] Writing $A x = ( (A x)_1, (A x)_2 ) \in \ensuremath{\mathbb R}^d \times \ensuremath{\mathbb R}^d$, a direct computation shows that \[
\langle f, \pi(A(n-x)) g \rangle
= e^{2 \pi i (A x)_2 \cdot ( (A x)_1 - (A n)_1)}
\cdot \langle \pi (A x) f, \pi(A n) g \rangle . \] Therefore, and because of $g, \gamma \in \calB_\Lambda$ and since $z \mapsto \pi(z) u$ is continuous on $\ensuremath{\mathbb R}^{2d}$ for each $u \in L^2(\ensuremath{\mathbb R}^d)$, the function $G$ is continuous. Furthermore, we have \begin{align*}
\sum_{n \in \ensuremath{\mathbb Z}^{2d}}
|\langle f, \pi(A(n-x)) g \rangle
\cdot \langle \pi( A(n-x)) \gamma, h \rangle|
& = \sum_{n \in \ensuremath{\mathbb Z}^{2 d}}
|\langle \pi(A x) f, \pi(A n) g \rangle|
\cdot |\langle \pi(A n) \gamma, \pi(A x) h \rangle| \\
& \leq \| C_{\Lambda,g} [\pi(A x) f] \|_{\ell^2}
\cdot \| C_{\Lambda,\gamma} [\pi(A x) h] \|_{\ell^2} \\
& \leq \| C_{\Lambda,g} \|
\cdot \| C_{\Lambda,\gamma} \|
\cdot \| \pi(A x) f \|_{L^2}
\cdot \| \pi(A x) h \|_{L^2} \\
& = \| C_{\Lambda,g} \|
\cdot \| C_{\Lambda,\gamma} \|
\cdot \| f \|_{L^2}
\cdot \| h \|_{L^2} , \end{align*} which will justify the application of the dominated convergence theorem in the following calculation.
Indeed, $G$ is $\ensuremath{\mathbb Z}^{2d}$-periodic and the $k$-th Fourier coefficient of $G$ (for $k \in \ensuremath{\mathbb Z}^{2d}$) is given by \begin{align*}
c_k
&= \int_{Q}
G(x) e^{-2\pi i k x}
\, dx
= \sum_{n \in \ensuremath{\mathbb Z}^{2d}}
\int_{Q}
\big\<f, \pi(A(n-x))g\big\rangle
\big\langle\pi(A(n-x))\gamma,h\big\rangle
e^{2\pi ik(n-x)}
\,d x \\
&= \int_{\ensuremath{\mathbb R}^{2d}}
\big\<f,\pi(Ax)g\big\rangle
\big\langle\pi(Ax)\gamma,h\big\rangle
e^{2\pi ikx}
\,dx
= \frac{1}{|\det A|}
\int_{\ensuremath{\mathbb R}^{2d}}
V_g f(y)
\overline{V_\gamma h(y)}
\cdot e^{2\pi i A^{-{\!\top}}k\cdot y}
\, d y \\
&= d(\Lambda)
\int_{\ensuremath{\mathbb R}^{2d}}
V_{\pi(z_k)g}[\pi(z_k)f](y)
\overline{V_\gamma h(y)}
\, d y
= d(\Lambda) \cdot \langle\pi(z_k)f,h\rangle \langle\gamma,\pi(z_k)g\rangle, \end{align*} where $Q := [0,1]^{2d}$, $z_k := -JA^{-{\!\top}}k\in\Lambda^\circ$,
and for $f_1, g_1 \in L^2(\ensuremath{\mathbb R}^d)$, $V_{g_1} f_1 (z) = \langle f_1, \pi(z) g_1 \rangle$ for $z \in \ensuremath{\mathbb R}^{2d}$ is the \emph{short-time Fourier transform} of $f_1$ with respect to $g_1$. Here, we used the orthogonality relation for the short-time Fourier transform (see \mbox{\cite[Theorem~3.2.1]{GroechenigTFFoundations}}) and the identity $V_{\pi(z)g}[\pi(z)f] = e^{2\pi i\<Jz,\cdot\rangle}\cdot V_gf$ (\cite[Lemma~1.4.4(b)]{k}). Now, as also $f,g\in\calB_\Lambda = \calB_{\Lambda^\circ}$, we see that $(c_k)_{k \in \ensuremath{\mathbb Z}^{2d}} \in \ell^1(\ensuremath{\mathbb Z}^{2d})$. Since $G$ is continuous and $\ensuremath{\mathbb Z}^{2d}$-periodic, this implies that the Fourier series of $G$ converges uniformly and coincides pointwise with $G$. Hence, \[
G(x) = \sum_{k\in\ensuremath{\mathbb Z}^{2d}}
c_k \, e^{2\pi ikx}
\quad \text{for all} \;\; x\in\ensuremath{\mathbb R}^{2d} , \]
and setting $x = 0$ yields the claim. \end{proof}
\section{Certain subspaces of modulation spaces invariant under the frame operator} \label{s:Invariance}
The $L^2$-Sobolev-space $H^1 (\ensuremath{\mathbb R}^d) = W^{1,2}(\ensuremath{\mathbb R}^d)$ is the space of all functions $f \in L^2(\ensuremath{\mathbb R}^d)$ whose distributional derivatives $\partial_jf := \frac{\partial f}{\partial x_j}$, $j \in \{ 1,\ldots,d \}$, all belong to $L^2(\ensuremath{\mathbb R}^d)$. We will frequently use the well-known characterization
$H^1 (\ensuremath{\mathbb R}^d) = \bigl\{ f \in L^2(\ensuremath{\mathbb R}^d) : (1+| \cdot |) \widehat{f} (\cdot) \in L^2 \bigr\}$
of $H^1(\ensuremath{\mathbb R}^d)$ in terms of the Fourier transform. With the weight function $w : \ensuremath{\mathbb R}^d \to \ensuremath{\mathbb R}$, $x \mapsto 1 + |x|$, we define the weighted $L^2$-space $L_w^2 (\ensuremath{\mathbb R}^d) := \bigl\{ f : \ensuremath{\mathbb R}^d \to \ensuremath{\mathbb C} \colon w (\cdot) f (\cdot) \in L^2 \bigr\}$
which is equipped with the norm $\| f \|_{L_w^2} := \| w \, f \|_{L^2}$. It is then clear that $L_w^2 (\ensuremath{\mathbb R}^d) = \mathcal{F} [ H^1 (\ensuremath{\mathbb R}^d) ] = \mathcal{F}^{-1} [ H^1 (\ensuremath{\mathbb R}^d) ]$. Finally, we define $\mathbb{H}^1(\ensuremath{\mathbb R}^d) = H^1 (\ensuremath{\mathbb R}^d) \cap L_w^2 (\ensuremath{\mathbb R}^d)$ which is the space of all functions $f \in H^1(\ensuremath{\mathbb R}^d)$ whose Fourier transform $\widehat f$ also belongs to $H^1(\ensuremath{\mathbb R}^d)$. Equivalently, $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$ is the space of all functions $g \in L^2(\ensuremath{\mathbb R}^d)$ with finite uncertainty product \eqref{eqn:FUP}.
It is worth noting that each of the spaces $H^1 (\ensuremath{\mathbb R}^d)$, $L_w^2 (\ensuremath{\mathbb R}^d)$, and $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$ can be expressed as a modulation space \(
M_m^2(\ensuremath{\mathbb R}^d)
= \{
f \in L^2(\ensuremath{\mathbb R}^d)
:
\int_{\ensuremath{\mathbb R}^{2d}}
| \<f,\pi(z)\varphi \rangle |^2 \, |m(z)|^2
\, dz
< \infty
\} \) for some weight function $m: \ensuremath{\mathbb R}^{2d} \rightarrow \ensuremath{\mathbb C}$, where $\varphi \in \mathcal{S} (\ensuremath{\mathbb R}^d) \backslash \{ 0 \}$ is any fixed function \footnote{The definition of $M_m^2$ is known to be independent of the choice of $\varphi$; see e.g., \cite[Proposition~11.3.2]{GroechenigTFFoundations}.}, for instance a Gaussian. Indeed, we have \[
H^1(\ensuremath{\mathbb R}^d) = M_{m_1}^2(\ensuremath{\mathbb R}^d),\quad
L_w^2(\ensuremath{\mathbb R}^d) = M_{m_2}^2(\ensuremath{\mathbb R}^d),\quad
\text{and}\quad
\mathbb{H}^1(\ensuremath{\mathbb R}^d) = H^1 (\ensuremath{\mathbb R}^d) \cap L_w^2(\ensuremath{\mathbb R}^d) = M_{m_3}^2(\ensuremath{\mathbb R}^d), \]
with $m_1(x,\omega) = 1+|\omega|$, $m_2(x,\omega) = 1+|x|$, and $m_3(x,\omega) = \sqrt{1 + |x|^2 + |\omega|^2}$, respectively; see \cite[Proposition 11.3.1]{GroechenigTFFoundations} and \cite[Corollary~2.3]{HeilTinaztepe}.
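Note that $m_3 \asymp \max\{m_1, m_2\}$, which directly reflects the identity $M_{m_3}^2(\ensuremath{\mathbb R}^d) = M_{m_1}^2(\ensuremath{\mathbb R}^d) \cap M_{m_2}^2(\ensuremath{\mathbb R}^d)$ contained in the display above.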
Our main goal in this paper is to prove for each of these spaces that if the window function $g$ of a Gabor frame $(g,\Lambda)$ belongs to the space, then so does the canonical dual window. In this section, we will mostly concentrate on the space $H^1(\ensuremath{\mathbb R}^d)$, since this will imply the desired result for the other spaces as well.
The corresponding result for the Feichtinger algebra $S_0(\ensuremath{\mathbb R}^d)$ was proved in \cite{GroechenigLeinert} by showing the much stronger statement that the frame operator maps $S_0(\ensuremath{\mathbb R}^d)$ boundedly into itself and is in fact boundedly invertible on $S_0(\ensuremath{\mathbb R}^d)$. However, the methods used in \cite{GroechenigLeinert} cannot be directly transferred to the case of a window function in $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$ (or $H^1(\ensuremath{\mathbb R}^d)$), since the proof in \cite{GroechenigLeinert} leverages
two particular properties of the Feichtinger algebra which are not shared by $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$: \begin{enumerate}
\item[(a)] Every function from $S_0(\ensuremath{\mathbb R}^d)$ is a Bessel vector with respect to any given lattice;
\item[(b)] The series in Janssen's representation of the frame operator converges strongly
(even absolutely in operator norm) to the frame operator
when the window function belongs to $S_0(\ensuremath{\mathbb R}^d)$. \end{enumerate} Indeed, it is well-known that $g\in L^2(\ensuremath{\mathbb R})$ is a Bessel vector with respect to $\ensuremath{\mathbb Z}\times\ensuremath{\mathbb Z}$ if and only if the Zak transform of $g$ is essentially bounded (cf.~\cite[Theorem~3.1]{BenedettoDifferentiationAndBLT}), but \mbox{\cite[Example~3.4]{BenedettoDifferentiationAndBLT}} provides an example of a function $g\in\mathbb{H}^1(\ensuremath{\mathbb R})$ whose Zak transform is not essentially bounded; this indicates that (a) fails when $S_0(\ensuremath{\mathbb R}^d)$ is replaced by $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$. Concerning the statement (b) for $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$, it is easy to see that if Janssen's representation converges strongly (with respect to some enumeration of $\ensuremath{\mathbb Z}^2$) to the frame operator of $(g,\Lambda)$, then the frame operator must be bounded on $L^2(\ensuremath{\mathbb R})$ and thus the associated window function $g$ is necessarily a Bessel vector. Therefore, the example above again serves as a counterexample: namely, the statement (b) fails for such a non-Bessel window function $g\in\mathbb{H}^1(\ensuremath{\mathbb R})$.
Even more, we show in the Appendix that there exist Bessel vectors $g\in\mathbb{H}^1(\ensuremath{\mathbb R})$ for which Janssen's representation neither converges unconditionally in the strong sense nor conditionally in the operator norm.
We mention that in the case of the Wiener amalgam space $W(L^{\infty},\ell_v^1)$ with an admissible weight $v$, the convergence issue was circumvented by employing Walnut's representation instead of Janssen's to prove the result for $W(L^{\infty},\ell_v^1)$ in \cite{ko}.
Fortunately, it turns out that establishing the corresponding result for $V = H^1(\ensuremath{\mathbb R}^d)$, $L_w^2 (\ensuremath{\mathbb R}^d)$, and $\mathbb{H}^1(\ensuremath{\mathbb R}^d)$ only requires the invertibility of the frame operator on a particular subspace of $V$. Precisely, given a lattice $\Lambda \subset \ensuremath{\mathbb R}^{2d}$, we define \[
H^1_\Lambda(\ensuremath{\mathbb R}^d) := H^1(\ensuremath{\mathbb R}^d)\cap\calB_\Lambda,
\quad
\mathbb{H}_\Lambda^1(\ensuremath{\mathbb R}^d) := \mathbb{H}^1(\ensuremath{\mathbb R}^d)\cap\calB_\Lambda ,
\quad \text{and} \quad
L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d) := L_w^2(\ensuremath{\mathbb R}^d) \cap \calB_{\Lambda} . \] We equip the first two of these spaces with the norms \[
\|f\|_{H^1_\Lambda} := \|\nabla f\|_{L^2} + \|C_{\Lambda,f}\|_{L^2 \to \ell^2}
\qquad\text{and}\qquad
\|f\|_{\mathbb{H}^1_\Lambda} := \|\nabla f\|_{L^2} + \|\nabla\widehat f\|_{L^2} + \|C_{\Lambda,f}\|_{L^2 \to \ell^2}, \] respectively, where \[
\|\nabla f\|_{L^2}
:= \sum_{j=1}^d
\|\partial_jf\|_{L^2} \] and $C_{\Lambda, f}$ is the analysis operator defined in \eqref{eq:CoefficientOperator}. Finally, we equip the space $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$ with the norm \[
\NormWeightedBessel{f} := \| f \|_{L_w^2} + \| C_{\Lambda,f} \|_{L^2 \to \ell^2} ,
\qquad \text{where} \qquad
\| f \|_{L_w^2} := \| w \cdot f \|_{L^2} . \] We start by showing that these spaces are Banach spaces.
\begin{lem}\label{lem:BesselH1IsBanach}
For a lattice $\Lambda\subset\ensuremath{\mathbb R}^{2d}$, the spaces $H^1_\Lambda(\ensuremath{\mathbb R}^d)$, $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$,
and $\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$ are Banach spaces which are continuously embedded in $L^2(\ensuremath{\mathbb R}^d)$.
\end{lem}
\begin{proof} We naturally equip the space $\calB_\Lambda\subset L^2(\ensuremath{\mathbb R}^d)$ with the norm
$\|f\|_{\calB_\Lambda} := \|C_{\Lambda,f}\|_{L^2\to\ell^2}$. Then $(\calB_\Lambda,\|\cdot\|_{\calB_\Lambda})$ is a Banach space by \cite[Proposition~3.1]{HanLarson}. Moreover, for $f\in\calB_\Lambda$, \begin{equation}
\|f \|_{L^2}
= \big\| C_{\Lambda, f}^* \, \delta_{0,0} \big\|_{L^2}
\leq \|C_{\Lambda,f}^*\|_{\ell^2\to L^2}
= \|f\|_{\calB_\Lambda},
\label{eq:BesselEmbedsIntoL2} \end{equation} which implies that $\calB_\Lambda\hookrightarrow L^2(\ensuremath{\mathbb R}^d)$. Hence, if $(f_n)_{n\in\ensuremath{\mathbb N}}$ is a Cauchy sequence in $H^1_\Lambda(\ensuremath{\mathbb R}^d)$, then it is a Cauchy sequence in both $H^1(\ensuremath{\mathbb R}^d)$
(equipped with the norm $\| f \|_{H^1} := \| f \|_{L^2} + \| \nabla f \|_{L^2}$) and in $\calB_\Lambda$. Therefore, there exist $f\in H^1(\ensuremath{\mathbb R}^d)$ and $g\in\calB_\Lambda$
such that $\|f_n-f\|_{H^1}\to 0$ and $\|f_n-g\|_{\calB_\Lambda}\to 0$ as $n\to\infty$. But as $H^1(\ensuremath{\mathbb R}^d)\hookrightarrow L^2(\ensuremath{\mathbb R}^d)$ and $\calB_\Lambda\hookrightarrow L^2(\ensuremath{\mathbb R}^d)$, we have $f_n\to f$ and $f_n\to g$ also in $L^2(\ensuremath{\mathbb R}^d)$, which implies $f=g$. Hence, $\|f_n - f\|_{H^1_\Lambda}\to 0$ as $n\to\infty$, which proves that $H^1_\Lambda(\ensuremath{\mathbb R}^d)$ is complete. The proof for $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$ and $\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$ is similar. \end{proof}
\begin{prop}\label{p:bounded}
Let $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ be a lattice.
If $g,h\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$, then $S_{\Lambda,g,h}$ maps $H^1_\Lambda(\ensuremath{\mathbb R}^d)$ boundedly into itself
with operator norm not exceeding $\|g\|_{H^1_\Lambda}\|h\|_{H^1_\Lambda}$.
For $f\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$ and $j \in \{1,\ldots,d\}$ we have
\begin{align}\label{e:DSf}
\partial_j(S_{\Lambda,g,h}f)
&= S_{\Lambda,g,h}(\partial_jf) + d(\Lambda)\cdot C_{\Lambda^\circ,f}^* \, d_{j,\Lambda^\circ,g,h},
\end{align}
where $d_{j,\Lambda^{\circ},g,h} \in \ell^2(\Lambda^{\circ})$ is defined by
\begin{align}\label{e:de}
(d_{j,\Lambda^\circ,g,h})_{\mu}
:= \big\langle\partial_jh,\pi(\mu)g\big\rangle
+ \big\<h,\pi(\mu)(\partial_jg)\big\rangle,\quad \mu\in\Lambda^\circ.
\end{align} \end{prop}
\begin{proof} Let $f\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$ and set $u := S_{\Lambda,g,h}f$. First of all, we have $u \in \calB_\Lambda$. Indeed, a direct computation shows that $S_{\Lambda,g,h}$ commutes with $\pi (\lambda)$ for all $\lambda\in\Lambda$, and that $S_{\Lambda, g, h}^\ast = S_{\Lambda, h, g}$, which shows for $v \in L^2(\ensuremath{\mathbb R}^d)$ that \[
(C_{\Lambda,u} v)_{\lambda}
= \langle v, \pi(\lambda) u\rangle
= \langle v,\pi(\lambda) S_{\Lambda,g,h} f \rangle
= \langle S_{\Lambda,h,g} v, \pi(\lambda) f \rangle
= (C_{\Lambda,f} \circ S_{\Lambda,h,g} \, v)_{\lambda}, \] and therefore \begin{equation}\label{e:alledrei}
\| C_{\Lambda,u} \|
\leq \|S_{\Lambda,h,g}\| \cdot \|C_{\Lambda,f}\|
\leq \| C_{\Lambda,g} \| \cdot \| C_{\Lambda,h} \| \cdot \| C_{\Lambda,f} \| < \infty , \end{equation} since $S_{\Lambda,h,g} = C_{\Lambda,g}^*C_{\Lambda,h}$.
We now show that $u \in H^1(\ensuremath{\mathbb R}^d)$. To this end, note for $v \in H^1(\ensuremath{\mathbb R}^d)$, $a, b \in \ensuremath{\mathbb R}^d$, and $j \in \{1,\ldots,d\}$ that \[
\partial_j (M_b v)
= 2\pi i \cdot b_j \cdot M_{b}v + M_{b}(\partial_jv)
\qquad\text{and}\qquad
\partial_j(T_av) = T_{a}(\partial_jv) \] and therefore \[
\partial_j(\pi(z)v)
= 2\pi i \cdot z_{d+j}\cdot\pi(z)v + \pi(z)(\partial_jv). \] Hence, setting $c_{\lambda,j} := 2\pi i \cdot \lambda_{d+j}\cdot\<f,\pi(\lambda)g \rangle$ for $\lambda = (a,b)\in \Lambda$, we see that \begin{align} \begin{split}\label{e:cmn}
c_{\lambda,j}
&= \langle\partial_jf,\pi(\lambda)g\rangle + \<f,\pi(\lambda)(\partial_jg)\rangle. \end{split} \end{align} In particular, $(c_{\lambda,j})_{\lambda\in\Lambda}\in\ell^2(\Lambda)$ for each $j\in\{1,\ldots,d\}$, because $f,g\in\calB_\Lambda$ and $\partial_j f , \partial_j g \in L^2$.
In order to show that $\partial_ju$ exists and is in $L^2(\ensuremath{\mathbb R}^d)$, let $\phi\in C_c^\infty(\ensuremath{\mathbb R}^d)$ be a test function. Note that $C_c^\infty(\ensuremath{\mathbb R}^d)\subset \calB_\Lambda$. Therefore, we obtain \begin{align*}
-\big\<u,\partial_j\phi\big\rangle
&= - \!\!
\sum_{\lambda\in\Lambda}
\<f,\pi(\lambda)g\big\rangle
\big\langle\pi(\lambda)h,\partial_j\phi\big\rangle
= \sum_{\lambda\in\Lambda}
\<f,\pi(\lambda)g\big\rangle
\big\langle
2\pi i\lambda_{d+j}\cdot\pi(\lambda)h
+ \pi(\lambda)(\partial_jh)
,
\phi
\big\rangle\\
&= \sum_{\lambda\in\Lambda}
c_{\lambda,j}
\cdot \big\langle\pi(\lambda)h,\phi\big\rangle
+ \sum_{\lambda\in\Lambda}
\big\<f,\pi(\lambda)g\big\rangle
\big\langle\pi(\lambda)(\partial_jh),\phi\big\rangle \\
& \!\!\overset{\eqref{e:cmn}}{=}
\<S_{\Lambda,g,h}(\partial_jf),\phi\rangle
+ \sum_{\lambda\in\Lambda}
\<f,\pi(\lambda)(\partial_jg)\rangle
\langle\pi(\lambda)h,\phi\rangle
+ \sum_{\lambda\in\Lambda}
\big\<f,\pi(\lambda)g\big\rangle
\big\langle\pi(\lambda)(\partial_jh), \phi\big\rangle \\
& \!\!\overset{\eqref{e:fi}}{=}
\<S_{\Lambda,g,h}(\partial_jf),\phi\rangle
+ d(\Lambda) \sum_{\mu\in\Lambda^\circ}
\Big[
\big\<h,\pi(\mu)(\partial_jg)\big\rangle
+ \big\langle\partial_jh,\pi(\mu)g\big\rangle
\Big]
\big\langle\pi(\mu)f,\phi\big\rangle \\[-0.1cm]
&= \left\langle
S_{\Lambda,g,h} (\partial_jf)
+ d(\Lambda) \sum_{\mu\in\Lambda^\circ}
\Big[
\big\<h,\pi(\mu)(\partial_jg)\big\rangle
+ \big\langle\partial_jh,\pi(\mu)g\big\rangle
\Big]
\pi(\mu)f
\,,\,\phi
\right\rangle \\
&= \left\langle
S_{\Lambda,g,h}(\partial_jf)
+ d(\Lambda)\cdot C_{\Lambda^\circ,f}^* \, d_j
\,,\phi
\right\rangle, \end{align*} with $d_j = d_{j,\Lambda^\circ,g,h}$ as in \eqref{e:de}. Note that $d_j\in \ell^2(\Lambda^\circ)$ because $g,h\in\calB_{\Lambda} = \calB_{\Lambda^\circ}$ and ${\partial_j h , \partial_j g \in L^2}$. Since $j\in\{1,\ldots,d\}$ is chosen arbitrarily, this proves that $u\in H^1(\ensuremath{\mathbb R}^d)$ with \[
\partial_ju
= S_{\Lambda,g,h}(\partial_jf)
+ d(\Lambda)\cdot C_{\Lambda^\circ,f}^* \, d_j
\,\in\, L^2(\ensuremath{\mathbb R}^d) \] for $j \in \{1,\ldots,d\}$, which is \eqref{e:DSf}. Next, recalling Equation \eqref{e:Cnorms} we get \[
\|d_j\|_{\ell^2}
\leq \|C_{\Lambda^\circ,h}\| \cdot \|\partial_jg\|_{L^2}
+ \|C_{\Lambda^\circ,g}\| \cdot \|\partial_jh\|_{L^2}
= \operatorname{Vol}(\Lambda)^{1/2}
\big(
\|C_{\Lambda,h}\| \cdot \|\partial_jg\|_{L^2}
+ \|C_{\Lambda,g}\| \cdot \|\partial_jh\|_{L^2}
\big), \]
and $\|C_{\Lambda^\circ,f}^*\| = \operatorname{Vol}(\Lambda)^{1/2}\|C_{\Lambda,f}\|$. Therefore, \begin{align*}
\|\partial_ju\|_{L^2}
\leq \|S_{\Lambda,g,h}\| \cdot \|\partial_jf\|_{L^2}
+ \big(
\|C_{\Lambda,h}\| \cdot \|\partial_jg\|_{L^2}
+ \|C_{\Lambda,g}\| \cdot \|\partial_jh\|_{L^2}
\big)
\, \|C_{\Lambda,f}\|. \end{align*} Hence, with \eqref{e:alledrei}, we see \vspace*{-0.3cm} \begin{align*}
\|S_{\Lambda,g,h}f\|_{H^1_\Lambda}
&= \|\nabla u\|_{L^2}
+ \|C_{\Lambda,u}\|
\leq \sum_{j=1}^d \|\partial_ju\|_{L^2}
+ \|C_{\Lambda,g}\| \cdot \|C_{\Lambda,h}\| \cdot \|C_{\Lambda,f}\| \\
&\leq \|S_{\Lambda,g,h}\| \!\cdot\! \|\nabla f\|_{L^2}
+ \big(
\|C_{\Lambda,h}\| \!\cdot\! \|\nabla g\|_{L^2}
\!+\! \|C_{\Lambda,g}\| \!\cdot\! \|\nabla h\|_{L^2}
\!+\! \|C_{\Lambda,g}\| \!\cdot\! \|C_{\Lambda,h}\|
\big)
\|C_{\Lambda,f}\| \\
&\leq \|C_{\Lambda,g}\| \cdot \|C_{\Lambda,h}\| \cdot \|\nabla f\|_{L^2}
+ \big(
\|\nabla g\|_{L^2} + \|C_{\Lambda,g}\|
\big)
\big(
\|\nabla h\|_{L^2}
+ \|C_{\Lambda,h}\|
\big)
\, \|C_{\Lambda,f}\| \\
&\leq \|g\|_{H^1_\Lambda}
\|h\|_{H^1_\Lambda}
\cdot \|f\|_{H^1_\Lambda}, \end{align*} and the proposition is proved. \end{proof}
\section{Spectrum and dual windows} \label{s:spectra}
Let $X$ be a Banach space. As usual, we denote the set of bounded linear operators from $X$ into itself by $\calB(X)$. The {\em resolvent set} $\rho(T)$ of an operator $T\in\calB(X)$ is the set of all $z\in\ensuremath{\mathbb C}$ for which $T-z := T-z I : X \to X$ is bijective. Note that $\rho(T)$ is always open in $\ensuremath{\mathbb C}$. The {\em spectrum} of $T$ is the complement $\sigma(T) := \ensuremath{\mathbb C}\backslash\rho(T)$. The {\em approximate point spectrum} $\sigma_{{ap}}(T)$ is a subset of $\sigma(T)$ and is defined as the set of points $z\in\ensuremath{\mathbb C}$ for which there exists a sequence
$(f_n)_{n\in\ensuremath{\mathbb N}}\subset X$ such that $\|f_n\|=1$ for all $n\in\ensuremath{\mathbb N}$
and $\|(T-z)f_n\|\to 0$ as $n\to\infty$. By \mbox{\cite[Proposition~VII.6.7]{ConwayFA}} we have \begin{equation}\label{e:partial_sap}
\partial\sigma(T)\,\subset\,\sigma_{{ap}}(T). \end{equation}
\begin{lem}\label{l:spectra_eq}
Let $(\calH, \| \cdot \|)$ be a Hilbert space, let $S\in\calB(\calH)$ be self-adjoint,
and let $X\subset\calH$ be a dense linear subspace satisfying $S(X) \subset X$.
If $\|\cdot\|_X$ is a norm on $X$ such that $(X,\| \cdot \|_X)$ is complete
and satisfies $X\hookrightarrow\calH$, then $A := S|_X\in\calB(X)$.
If, in addition, $\sigma_{{ap}}(A)\subset\sigma(S)$, then $\sigma(A) = \sigma(S)$. \end{lem}
\begin{proof} The fact that $A \in \calB(X)$ easily follows from the closed graph theorem. Next, since $X \hookrightarrow \calH$, there exists $C > 0$
with $\| f \| \leq C \, \| f \|_X$ for all $f \in X$. Assume now that additionally $\sigma_{{ap}}(A) \subset \sigma(S)$ holds. Note that $\sigma(S) \subset \ensuremath{\mathbb R}$, since $S$ is self-adjoint. Since $\sigma(A)\subset\ensuremath{\mathbb C}$ is compact, the value ${r := \max_{w\in\sigma(A)}|\!\operatorname{Im} w|}$ exists. Choose $z\in\sigma(A)$ such that $|\!\operatorname{Im} z| = r$. Clearly, $z$ cannot belong to the interior of $\sigma(A)$, and hence $z \in \partial \sigma(A)$. In view of Equation~\eqref{e:partial_sap}, this implies $z \in \sigma_{{ap}}(A) \subset \sigma(S) \subset \ensuremath{\mathbb R}$, hence $r = 0$ and thus $\sigma(A) \subset \ensuremath{\mathbb R}$. Therefore, $\sigma(A)$ has empty interior in $\ensuremath{\mathbb C}$, meaning $\sigma (A) = \partial \sigma (A)$. Thanks to Equation~\eqref{e:partial_sap}, this means $\sigma(A) \subset \sigma_{{ap}}(A)$, and hence $\sigma(A) \subset \sigma(S)$, since by assumption $\sigma_{{ap}}(A) \subset \sigma(S)$.
For the converse inclusion it suffices to show that $\rho(A)\cap\ensuremath{\mathbb R}\subset\rho(S)$. To see that this holds, let $z\in\rho(A)\cap\ensuremath{\mathbb R}$ and denote by $E$ the spectral measure of the self-adjoint operator $S$. Since $\ensuremath{\mathbb R} \cap \rho(A) \subset \ensuremath{\mathbb R}$ is open, there are $a, b \in \ensuremath{\mathbb R}$ and $\delta_0 > 0$ such that $z \in (a,b)$ and $[a-\delta_0, b+\delta_0] \subset \rho(A)$. By Stone's formula (see, e.g., \cite[Thm.\ VII.13]{rs}),
the spectral projection of $S$ with respect to $(a,b]$ can be expressed as \[
E((a,b])f
= \lim_{\delta\downarrow 0}\,
\lim_{\varepsilon\downarrow 0}
\frac{1}{2\pi i}
\int_{a+\delta}^{b+\delta}
\big[
(S-t-i\varepsilon)^{-1}f
-(S-t+i\varepsilon)^{-1}f
\big]
\,dt,
\qquad f\in\calH, \] where all limits are taken with respect to the norm of $\calH$.
Note for $w \in \ensuremath{\mathbb C} \setminus \ensuremath{\mathbb R}$ that $w \in \rho(S) \subset \rho(A)$. Furthermore, $A - w = (S - w)|_X$, which easily implies $(S - w)^{-1}|_X = (A - w)^{-1}$. Hence, for $f \in X$, \begin{align*}
\|E((a,b])f\|
&\leq \lim_{\delta\downarrow 0}\,
\lim_{\varepsilon\downarrow 0}
\frac{1}{2\pi}
\int_{a+\delta}^{b+\delta}
\big\|(S- t-i\varepsilon)^{-1}f-(S- t+i\varepsilon)^{-1}f\big\|
\,d t\\
&\leq C \cdot \lim_{\delta\downarrow 0}\,
\lim_{\varepsilon\downarrow 0}
\frac{1}{2\pi}
\int_{a+\delta}^{b+\delta}
\big\|(A- t-i\varepsilon)^{-1}f-(A- t+i\varepsilon)^{-1}f\big\|_{X}
\,d t \\
&= C \cdot \lim_{\delta\downarrow 0}
\frac{1}{2\pi}
\int_{a+\delta}^{b+\delta}
\lim_{\varepsilon\downarrow 0}
\big\|(A- t-i\varepsilon)^{-1}f-(A- t+i\varepsilon)^{-1}f\big\|_{X}
\,d t\\
&=0, \end{align*} since the map $\rho(A)\to X$, $z\mapsto (A-z)^{-1}f$ is analytic and thus uniformly continuous on compact sets. This implies $E((a,b])f = 0$ for all $f\in X$ and therefore $E((a,b]) = 0$ as $X$ is dense in $\calH$. But this means that $(a,b) \subset \rho(S)$ (see \cite[Prop.\ on p.\ 236]{rs}) and thus $z \in \rho(S)$. \end{proof}
For proving the invertibility of $S_{\Lambda,g}$ on $H_{\Lambda}^1, L_{w,\Lambda}^2$, and $\mathbb{H}^1_{\Lambda}$, we first focus on the space $H^1_\Lambda(\ensuremath{\mathbb R}^d)$. Note that if $g \in H^1_\Lambda(\ensuremath{\mathbb R}^d)$, then $S_{\Lambda,g}$ maps $H^1_\Lambda(\ensuremath{\mathbb R}^d)$ boundedly into itself by Proposition~\ref{p:bounded}. For $g \in H^1_\Lambda(\ensuremath{\mathbb R}^d)$, we will denote the restriction of $S_{\Lambda,g}$ to $H^1_\Lambda(\ensuremath{\mathbb R}^d)$
by $A_{\Lambda,g}$; that is, $A_{\Lambda,g} := S_{\Lambda,g} |_{H^1_\Lambda(\ensuremath{\mathbb R}^d)} \in\calB(H^1_\Lambda(\ensuremath{\mathbb R}^d))$.
\begin{thm}\label{t:spectra}
Let $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ be a lattice and let $g\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$.
Then
\[
\sigma(A_{\Lambda,g}) = \sigma(S_{\Lambda,g}).
\] \end{thm}
\begin{proof} For brevity, we set $A := A_{\Lambda,g}$ and $S := S_{\Lambda,g}$. Due to Lemma~\ref{l:spectra_eq}, we only have to prove that $\sigma_{{ap}}(A)\subset\sigma(S)$. For this, let $z\in\sigma_{{ap}}(A)$. Then there exists a sequence $(f_n)_{n\in\ensuremath{\mathbb N}} \subset H^1_\Lambda(\ensuremath{\mathbb R}^d)$ such that
$\|f_n\|_{H^1_\Lambda} = 1$ for all $n \in \ensuremath{\mathbb N}$ and $\|(A - z)f_n\|_{H^1_\Lambda}\to 0$ as $n \to \infty$. The latter means that, for each $j \in \{1, \ldots, d\}$, \begin{equation}\label{e:two}
\big\| \partial_j(S f_n) - z \cdot (\partial_j f_n) \big\|_{L^2}\to 0
\qquad\text{and}\qquad
\big\| C_{\Lambda,(S - z)f_n} \big\|\to 0. \end{equation} Suppose towards a contradiction that $z \notin \sigma(S)$. Since $S$ is self-adjoint, this implies $\overline{z} \notin \sigma(S)$. Furthermore, because $S$ is self-adjoint and commutes with $\pi(\lambda)$ for all $\lambda \in \Lambda$, we see for $f \in \calB_\Lambda$ that $C_{\Lambda,(S - z)f} = C_{\Lambda,f} \circ (S - \overline z)$
and hence $C_{\Lambda,f_n} = C_{\Lambda,(S-z)f_n} \circ (S - \overline z)^{-1}$, which implies that $\|C_{\Lambda,f_n}\|\to 0$. Hence, also $\|C_{\Lambda^\circ,f_n}\|\to 0$ as $n\to\infty$ (see Equation~\eqref{e:Cnorms}). Now, by Equation~\eqref{e:DSf}, we have \[
\partial_j(Sf_n) - z \cdot (\partial_j f_n)
= (S- z)(\partial_jf_n) + C_{\Lambda^\circ,f_n}^*d_j \]
with some $d_j\in\ell^2(\Lambda^\circ)$ which is \emph{independent of} $n$. Hence, the first limit in \eqref{e:two} combined with $\|C_{\Lambda^\circ,f_n}\|\to 0$
implies that $\|(S - z)(\partial_jf_n)\|_{L^2}\to 0$ and thus $\|\partial_jf_n\|_{L^2}\to 0$
as $n\to\infty$ for all $j \in \{1,\ldots,d\}$, since $z \notin \sigma(S)$. Hence, $\|f_n\|_{H^1_\Lambda} = \sum_{j=1}^d \|\partial_j f_n\|_{L^2} + \|C_{\Lambda,f_n}\|\to 0$
as $n\to\infty$, in contradiction to $\|f_n\|_{H^1_\Lambda} = 1$ for all $n \in \ensuremath{\mathbb N}$. This proves that, indeed, $\sigma_{{ap}}(A)\subset\sigma(S)$. \end{proof}
We now show analogous properties to Proposition~\ref{p:bounded} and Theorem~\ref{t:spectra} for $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$.
\begin{cor}\label{cor:WeightedSpaceSpectrum}
Let $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ be a lattice.
If $g, h \in L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$, then $S_{\Lambda,g,h}$ maps $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$
boundedly into itself.
If $g = h$ and if $A^w_{\Lambda,g} := S_{\Lambda,g} |_{L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)} \in \calB(L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d))$
denotes the restriction of $S_{\Lambda,g}$ to $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$, then
\[
\sigma(A^w_{\Lambda,g}) = \sigma(S_{\Lambda,g}).
\] \end{cor}
\begin{proof}
We equip the space $\calB_{\Lambda} \subset L^2(\ensuremath{\mathbb R}^d)$ with the norm
${\| f \|_{\calB_\Lambda} := \| C_{\Lambda,f} \|_{L^2 \to \ell^2}}$,
where we recall from Equation~\eqref{eq:BesselEmbedsIntoL2} that
$\| f \|_{L^2} \leq \| f \|_{\calB_\Lambda}$.
Equation~\eqref{e:FTpi} shows that the Fourier transform
is an isometric isomorphism from $\calB_{\Lambda}$ to $\calB_{\widehat{\Lambda}}$,
where $\widehat{\Lambda} := J \Lambda$.
Furthermore, it is well-known (see for instance \mbox{\cite[Section~9.3]{FollandRealAnalysis}})
that the Fourier transform ${\mathcal{F} : L^2 \to L^2}$
restricts to an isomorphism of Banach spaces ${\mathcal{F} : L_{w}^2(\ensuremath{\mathbb R}^d) \to H^1(\ensuremath{\mathbb R}^d)}$,
where $H^1$ is equipped with the norm $\| f \|_{H^1} := \| f \|_{L^2} + \| \nabla f \|_{L^2}$.
Taken together, we thus see that the Fourier transform restricts to an isomorphism
$\mathcal{F} : L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d) \to H_{\widehat{\Lambda}}^1(\ensuremath{\mathbb R}^d)$; here, we implicitly used that
${\| f \|_{H_{\widehat{\Lambda}}^1} \asymp \| f \|_{H^1} + \| f \|_{\calB_{\widehat{\Lambda}}}}$,
which follows from $\| \cdot \|_{L^2} \leq \| \cdot \|_{\calB_{\widehat{\Lambda}}}$.
Plancherel's theorem, in combination with Equation~\eqref{e:FTpi}
shows for $f \in L^2(\ensuremath{\mathbb R}^d)$ that
\begin{align*}
\mathcal{F} \bigl[S_{\Lambda,g,h} f\bigr]
\!=\! \sum_{\lambda\in\Lambda}
\Big\langle\widehat f,\widehat{\pi(\lambda)g}\Big\rangle
\widehat{\pi(\lambda)h}
=\! \sum_{\lambda\in\Lambda}
\langle\widehat f,\pi(J\lambda)\widehat g \,\rangle
\pi(J\lambda)\widehat h
=\! \sum_{\lambda\in\widehat\Lambda}
\langle\widehat f,\pi(\lambda)\widehat g \,\rangle
\pi(\lambda)\widehat h
= S_{\widehat\Lambda,\widehat g,\widehat h} \widehat f.
\end{align*}
Since
\(
A_{\widehat{\Lambda},\widehat{g},\widehat{h}}
= S_{\widehat{\Lambda},\widehat{g},\widehat{h}}|_{H_{\widehat{\Lambda}}^1} :
H_{\widehat{\Lambda}}^1(\ensuremath{\mathbb R}^d) \to H_{\widehat{\Lambda}}^1(\ensuremath{\mathbb R}^d)
\)
is well-defined and bounded by Proposition~\ref{p:bounded}, the preceding calculation
combined with the considerations from the previous paragraph shows that
$A_{\Lambda,g,h}^w = S_{\Lambda,g,h}|_{L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)} : L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d) \to L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$
is well-defined and bounded, with
\[
A_{\Lambda,g,h}^w
= \mathcal{F}^{-1} \circ A_{\widehat{\Lambda},\widehat{g},\widehat{h}} \circ \mathcal{F} .
\]
Finally, if $g = h$, we see
\(
\sigma(A_{\Lambda,g,g}^w)
= \sigma(A_{\widehat{\Lambda},\widehat{g},\widehat{g}})
= \sigma(S_{\widehat{\Lambda},\widehat{g},\widehat{g}})
= \sigma(S_{\Lambda,g,g}) ,
\)
where the second step is due to Theorem~\ref{t:spectra}, and the final step used the identity
$S_{\Lambda,g,h} = \mathcal{F}^{-1} \circ S_{\widehat{\Lambda},\widehat{g},\widehat{h}} \circ \mathcal{F}$
from above. \end{proof}
Finally, we establish the corresponding properties for $\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d) = H^1_\Lambda(\ensuremath{\mathbb R}^d) \cap L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$.
\begin{cor}\label{c:bounded_HH}
Let $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ be a lattice.
If $g,h\in\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$, then $S_{\Lambda,g,h}$ maps $\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$ boundedly into itself.
If $g=h$ and $\mathbb A_{\Lambda,g} := S_{\Lambda,g} |_{\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)} \in\calB(\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d))$
denotes the restriction of $S_{\Lambda,g}$ to $\mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$, then
\begin{equation}
\sigma(\mathbb A_{\Lambda,g}) = \sigma(S_{\Lambda,g}).
\label{eq:FatHSpectrum}
\end{equation} \end{cor} \begin{proof}
From the definition of $\mathbb{H}_\Lambda^1$ and the proof of Corollary~\ref{cor:WeightedSpaceSpectrum}
it is easy to see that $\mathbb{H}_\Lambda^1 = H_\Lambda^1 \cap L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$,
and $\| \cdot \|_{\mathbb{H}_\Lambda^1} \asymp \| \cdot \|_{H_\Lambda^1} + \NormWeightedBessel{\cdot}$.
Therefore, Proposition~\ref{p:bounded} and Corollary~\ref{cor:WeightedSpaceSpectrum} imply
that $S_{\Lambda,g,h}$ maps $\mathbb{H}_\Lambda^1(\ensuremath{\mathbb R}^d)$ boundedly into itself.
Lemma~\ref{l:spectra_eq} shows that to prove \eqref{eq:FatHSpectrum},
it suffices to show $\sigma_{{ap}}(\mathbb{A}_{\Lambda,g}) \subset \sigma(S_{\Lambda,g})$.
Thus, let ${z \in \sigma_{{ap}}(\mathbb{A}_{\Lambda,g})}$.
Then there exists $(f_n)_{n \in \ensuremath{\mathbb N}} \subset \mathbb{H}^1_\Lambda(\ensuremath{\mathbb R}^d)$ with $\|f_n\|_{\mathbb{H}_{\Lambda}^1}=1$
for all $n \in \ensuremath{\mathbb N}$ and $\|(\mathbb A_{\Lambda,g}-z)f_n\|_{\mathbb{H}^1_\Lambda}\to 0$ as $n\to\infty$.
Thus, $\|(A_{\Lambda,g}-z)f_n\|_{H^1_\Lambda}\to 0$
and ${\NormWeightedBessel{(A_{\Lambda,g}^w - z)f_n} \to 0}$ as $n\to\infty$.
Furthermore, there is a subsequence $(n_k)_{k \in \ensuremath{\mathbb N}}$ such that
$\lim_{k\to\infty} \| f_{n_k} \|_{H_\Lambda^1} > 0$
or $\lim_{k\to\infty} \NormWeightedBessel{f_{n_k}} > 0$.
Hence, $z \in \sigma(A_{\Lambda,g})$ or $z \in \sigma(A_{\Lambda,g}^w)$.
But Theorem~\ref{t:spectra} and Corollary~\ref{cor:WeightedSpaceSpectrum} show
$\sigma(A_{\Lambda,g}) = \sigma(A_{\Lambda,g}^w) = \sigma(S_{\Lambda,g})$.
We have thus shown $\sigma_{{ap}}(\mathbb{A}_{\Lambda,g}) \subset \sigma(S_{\Lambda,g})$,
so that Lemma~\ref{l:spectra_eq} shows $\sigma(\mathbb{A}_{\Lambda,g}) = \sigma(S_{\Lambda,g})$. \end{proof}
The next proposition shows that any operator obtained from $S_{\Lambda,g}$ through the holomorphic spectral calculus (see \cite[Sections~10.21--10.29]{RudinFunctionalAnalysis} for a definition) maps each of the spaces $H_{\Lambda}^1(\ensuremath{\mathbb R}^d)$, $L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d)$, and $\mathbb{H}_\Lambda^1(\ensuremath{\mathbb R}^d)$ into itself.
\begin{prop}\label{t:inverse_closed}
Let $\Lambda\subset\ensuremath{\mathbb R}^{2d}$ be a lattice,
let $V \in \{ H_\Lambda^1(\ensuremath{\mathbb R}^d), L_{w,\Lambda}^2(\ensuremath{\mathbb R}^d), \mathbb{H}_\Lambda^1(\ensuremath{\mathbb R}^d) \}$, and $g \in V$.
Then for any open set $\Omega \subset \ensuremath{\mathbb C}$ with $\sigma(S_{\Lambda,g}) \subset \Omega$,
any analytic function $F : \Omega \to \ensuremath{\mathbb C}$, and any $f \in V$, we have $F(S_{\Lambda,g})f \in V$. \end{prop}
\begin{proof}
We only prove the claim for $V = H_{\Lambda}^1 (\ensuremath{\mathbb R}^d)$;
the proofs for the other cases are similar, using Corollaries~\ref{cor:WeightedSpaceSpectrum}
or \ref{c:bounded_HH} instead of Theorem~\ref{t:spectra}.
Thus, let $g\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$ and set $S := S_{\Lambda,g}$ and $A := A_{\Lambda,g}$.
Let $f\in H^1_\Lambda(\ensuremath{\mathbb R}^d)$ and define
\[
h
= - \frac{1}{2\pi i}
\int_\Gamma
F(z) \cdot (A-z)^{-1}f
\,dz
\,\in\, H^1_\Lambda(\ensuremath{\mathbb R}^d),
\]
where $\Gamma \subset \Omega \setminus \sigma(S)$ is a finite set of closed rectifiable curves
surrounding $\sigma(S) = \sigma(A)$
(existence of such curves is shown in \cite[Theorem~13.5]{RudinRealAndComplexAnalysis}).
Note that the integral converges in $H^1_\Lambda(\ensuremath{\mathbb R}^d)$.
Since $H^1_\Lambda(\ensuremath{\mathbb R}^d)\hookrightarrow L^2(\ensuremath{\mathbb R}^d)$, it also converges
(to the same limit) in $L^2(\ensuremath{\mathbb R}^d)$ and hence, by definition of the holomorphic spectral calculus,
\[
F(S)f
= -\frac{1}{2\pi i}
\int_\Gamma
F(z) \cdot (S-z)^{-1}f
\,dz
= h\,\in\,H^1_\Lambda(\ensuremath{\mathbb R}^d).
\qedhere
\]
\end{proof}
Our main result (Theorem~\ref{t:main}) is now an easy consequence of Proposition~\ref{t:inverse_closed}.
\begin{proof}[Proof of Theorem~\ref{t:main}]
Using the fact that $S_{\Lambda,g}$ commutes with $\pi(\lambda)$ for all $\lambda \in \Lambda$,
it is easily seen that $(S_{\Lambda,g}^{-1} \, g,\Lambda)$ is the canonical dual frame of $(g,\Lambda)$
and that $(S_{\Lambda,g}^{-1/2} g, \Lambda)$ is a Parseval frame for $L^2(\ensuremath{\mathbb R}^d)$;
see for instance, \cite[Theorem 12.3.2]{ChristensenBook}.
Note that since $(g,\Lambda)$ is a frame for $L^2(\ensuremath{\mathbb R}^d)$,
we have $\sigma(S_{\Lambda,g}) \subset [A,B]$
where $0 < A \leq B < \infty$ are the optimal frame bounds for $(g,\Lambda)$.
Thus, we obtain $S_{\Lambda,g}^{-1} \, g \in V_\Lambda \subset V$
and $S_{\Lambda,g}^{-1/2} g \,\in\, V_\Lambda \subset V$
from Proposition~\ref{t:inverse_closed} with $F(z) = z^{-1}$ and $F(z) = z^{-1/2}$
(with any suitable branch cut; for instance, the half-axis $(-\infty,0]$),
respectively, on $\Omega = \bigl\{ x + i y : x \in ( \frac{A}{2}, \infty) , y \in \ensuremath{\mathbb R}\bigr\}$. \end{proof}
Finally, we state and prove a version of \Cref{t:main} for Gabor frame \emph{sequences}. For completeness, we briefly recall the necessary concepts. Generally, a (countable) family $(h_i)_{i \in I}$ in a Hilbert space $\calH$ is called a \emph{frame sequence}, if $(h_i)_{i \in I}$ is a frame for the subspace $\calH' := \overline{\operatorname{span}} \{ h_i \colon i \in I \} \subset \calH$. In this case, the frame operator
$S : \calH \to \calH, f \mapsto \sum_{i \in I} \langle f, h_i \rangle h_i$, is a bounded, self-adjoint operator on $\calH$, and $S|_{\calH'} : \calH' \to \calH'$ is boundedly invertible; in particular, $\operatorname{ran} S = \calH' \subset \calH$ is closed, so that $S$ has a well-defined \emph{pseudo-inverse} $S^{\dagger}$, given by \[
S^{\dagger}
= (S|_{\calH'})^{-1} \circ P_{\calH'} : \quad
\calH \to \calH' , \] where $P_{\calH'}$ denotes the orthogonal projection onto $\calH'$. The \emph{canonical dual system} of $(h_i)_{i \in I}$ is then given by $(h_i')_{i \in I} = (S^{\dagger} h_i)_{i \in I} \subset \calH'$, and it satisfies $\sum_{i \in I} \langle f, h_i \rangle h_i ' = \sum_{i \in I} \langle f, h_i' \rangle h_i = P_{\calH'} f$ for all $f \in \calH$.
Finally, in the case where $(h_i)_{i \in I} = (g,\Lambda)$ is a Gabor family with a lattice $\Lambda$, it is easy to see that $S \circ \pi(\lambda) = \pi(\lambda) \circ S$ and $\pi(\lambda) \calH' \subset \calH'$ for $\lambda \in \Lambda$, which implies $P_{\calH'} \circ \pi(\lambda) = \pi(\lambda) \circ P_{\calH'}$, and therefore $S^\dagger \circ \pi(\lambda) = \pi(\lambda) \circ S^\dagger$ for all $\lambda \in \Lambda$. Consequently, setting $\gamma := S^\dagger g$, we have $S^\dagger (\pi(\lambda) g) = \pi(\lambda)\gamma$, so that the canonical dual system of a Gabor frame sequence $(g,\Lambda)$ is the Gabor system $(\gamma,\Lambda)$, where $\gamma = S^{\dagger} g$ is called the \emph{canonical dual window} of $(g,\Lambda)$. Our next result shows that $\gamma$ inherits the regularity of $g$, if one measures this regularity using one of the three spaces $H^1, L_w^2$, or $\mathbb{H}^1$.
\begin{prop}\label{p:MainResultForGaborFrameSequences}
Let $V \in \{ H^1(\ensuremath{\mathbb R}^d), L_w^2(\ensuremath{\mathbb R}^d), \mathbb{H}^1(\ensuremath{\mathbb R}^d) \}$.
Let $\Lambda \subset \ensuremath{\mathbb R}^{2d}$ be a lattice and let $g \in V$.
If $(g,\Lambda)$ is a frame sequence, then the associated canonical dual window $\gamma$
satisfies $\gamma \in V$. \end{prop}
\begin{proof}
The frame operator $S : L^2(\ensuremath{\mathbb R}^d) \to L^2(\ensuremath{\mathbb R}^d)$ associated to $(g,\Lambda)$
is non-negative and has closed range.
Consequently, there exist $\varepsilon > 0$ and $R > 0$ such that $\sigma(S) \subset \{ 0 \} \cup [\varepsilon,R]$;
see for instance \cite[Lemma~A.2]{QuantitativeSubspaceBL}.
Now, with the open ball $B_\delta(0) := \{ z \in \ensuremath{\mathbb C} \colon |z| < \delta \}$, define
\[
\Omega
:= B_{\varepsilon/4} (0)
\cup \big\{
x + i y
\colon
x \in (\tfrac{\varepsilon}{2},2 R), y \in (-\tfrac{\varepsilon}{4}, \tfrac{\varepsilon}{4})
\big\}
\subset \ensuremath{\mathbb C} ,
\]
noting that $\Omega \subset \ensuremath{\mathbb C}$ is open, with $\sigma(S) \subset \Omega$.
Furthermore, it is straightforward to see that
\[
\varphi : \quad
\Omega \to \ensuremath{\mathbb C}, \quad
z \mapsto \begin{cases}
0, & \text{if } z \in B_{\varepsilon/4} (0), \\
z^{-1}, & \text{otherwise}
\end{cases}
\]
is holomorphic.
Since the functional calculus for self-adjoint operators
is an extension of the holomorphic functional calculus,
\cite[Lemma~A.6]{QuantitativeSubspaceBL} shows that $S^{\dagger} = \varphi(S)$.
Finally, since $g \in V_\Lambda$, \Cref{t:inverse_closed} now shows that
$\gamma = S^{\dagger} g = \varphi(S) g \in V_\Lambda \subset V$ as well. \end{proof}
\appendix
\section{(Non)-convergence of Janssen's representation for \texorpdfstring{$\mathbb{H}^1$}{ℍ¹} windows}
In this appendix we provide a counterexample showing that Janssen's representation of the frame operator associated to a Bessel vector $g \in \mathbb{H}^1$ in general does not converge \emph{unconditionally} with respect to the strong operator topology. We furthermore show that for convergence in \emph{operator norm}, even conditional convergence fails in general.
For simplicity, we only consider the setting $d = 1$ and the lattice $\Lambda = \ensuremath{\mathbb Z} \times \ensuremath{\mathbb Z}$. Thus, given a function $g \in \mathbb{H}^1 = \mathbb{H}^1(\ensuremath{\mathbb R})$, we say that $g$ is a \emph{Bessel vector} if the Gabor system $(T_k M_\ell \, g)_{k,\ell \in \ensuremath{\mathbb Z}} \subset L^2(\ensuremath{\mathbb R})$ is a Bessel system.
In this case, \emph{Janssen's representation} of the frame operator $S := S_g := S_{\ensuremath{\mathbb Z} \times \ensuremath{\mathbb Z}, g, g}$ is (formally) given by \begin{equation}
S
= \sum_{\ell, k \in \ensuremath{\mathbb Z}}
\langle g, T_k M_\ell \, g \rangle \, T_k M_\ell.
\label{eq:JannssenRepresentation} \end{equation} We are interested in the question whether the series defining Janssen's representation is unconditionally convergent in the \emph{strong operator topology \braces{SOT\,}}, as an operator on $L^2(\ensuremath{\mathbb R})$. We will construct a function $g \in \mathbb{H}^1$ for which this fails.
\subsection{Properties of the Zak transform}\label{sec:ZakProperties}
The construction of the counterexample is based on several properties of the \emph{Zak transform} that we briefly recall. Given $f \in L^2(\ensuremath{\mathbb R})$, its Zak transform $Z f \in L_{{\rm loc}}^2(\ensuremath{\mathbb R}^2)$ is defined as \[
Z f (x,\omega)
:= \sum_{k \in \ensuremath{\mathbb Z}}
f(x - k) e^{2 \pi i k \omega} , \] where the series converges in $L_{{\rm loc}}^2 (\ensuremath{\mathbb R}^2)$; this is a consequence of the fact that \begin{equation}
Z : \quad
L^2(\ensuremath{\mathbb R}) \to L^2([0,1]^2)
\text{ is unitary} ,
\label{eq:ZakIsUnitary} \end{equation}
as shown in \cite[Theorem~8.2.3]{GroechenigTFFoundations} and of the fact that the Zak transform $Z f$ of a function $f \in L^2(\ensuremath{\mathbb R})$ is always \emph{quasi-periodic}, meaning that \begin{equation}
Z f(x+n, \omega) = e^{2 \pi i n \omega} Z f (x,\omega)
\qquad \text{and} \qquad
Z f (x, \omega + n) = Z f (x,\omega)
\label{eq:QuasiPeriodicity} \end{equation} for (almost) all $x,\omega \in \ensuremath{\mathbb R}$ and all $n \in \ensuremath{\mathbb Z}$; see \cite[Equations~(8.4) and (8.5)]{GroechenigTFFoundations}.
Another crucial property is the interplay between the Zak transform and the time-frequency shifts $T_k M_n$, as expressed by the following formula (found in \cite[Equation~(8.7)]{GroechenigTFFoundations}): \begin{equation}
Z[T_k M_n f] (x,\omega)
= e^{2 \pi i n x} e^{- 2 \pi i k \omega} Z f (x,\omega)
= e_{n,-k} (x,\omega) \, Z f (x,\omega) ,
\label{eq:ZakDiagonalizesTF} \end{equation} where we used the functions \[
e_{n,k}(x,\omega)
:= e^{2 \pi i(nx+ k\omega)}
\quad \text{for } n,k\in\ensuremath{\mathbb Z} \text{ and } x,\omega \in \ensuremath{\mathbb R}. \] Note that $(e_{n,k})_{n,k\in\ensuremath{\mathbb Z}}$ is an orthonormal basis of $L^2 ([0,1]^2)$.
Finally, we note the following equivalence, taken from \cite[Theorem~3.1]{BenedettoDifferentiationAndBLT}: \begin{equation}
\forall \, g \in L^2(\ensuremath{\mathbb R}): \quad
g \text{ is a Bessel vector } \Longleftrightarrow \, Z g \in L^\infty([0,1]^2).
\label{eq:BesselSequenceZakCharacterization} \end{equation}
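As a purely numerical aside, the criterion \eqref{eq:BesselSequenceZakCharacterization} can be visualized for the Gaussian window. The following Python sketch (all names are ad hoc) samples the truncated series defining $Z g$ on a grid in $[0,1)^2$; the resulting modulus is bounded (so the Gaussian is indeed a Bessel vector), and its minimum on the grid is essentially zero, reflecting the well-known zero of the Zak transform of the Gaussian at $(\frac{1}{2},\frac{1}{2})$.
\begin{verbatim}
import numpy as np

def zak_gaussian(M=64, K=20):
    """Sample Z g on a grid in [0,1)^2 for g(t) = 2**0.25 * exp(-pi t^2)."""
    x = np.arange(M) / M                     # x-grid in [0,1)
    w = np.arange(M) / M                     # omega-grid in [0,1)
    k = np.arange(-K, K + 1)                 # truncation of the sum over k
    g = lambda t: 2 ** 0.25 * np.exp(-np.pi * t ** 2)
    vals = g(x[:, None, None] - k[None, None, :])                     # g(x - k)
    phases = np.exp(2j * np.pi * k[None, None, :] * w[None, :, None])
    return (vals * phases).sum(axis=-1)      # Zg(x, w), shape (M, M)

Z = zak_gaussian()
print(np.abs(Z).max(), np.abs(Z).min())      # bounded above; minimum close to 0
\end{verbatim}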
\subsection{Properties of \texorpdfstring{$\mathbb{H}^1$}{ℍ¹}} \label{sec:FatH1Properties}
A further important property that we will use is the following characterization of the space $\mathbb{H}^1$ via the Zak transform, a proof of which is given in \mbox{\cite[Lemma~2.4]{QuantitativeSubspaceBL}}. \begin{equation}
\forall \, f \in L^2(\ensuremath{\mathbb R}): \quad
f \in \mathbb{H}^1 \, \Longleftrightarrow \, Z f \in W^{1,2}_{{\rm loc}}(\ensuremath{\mathbb R}^2) .
\label{eq:FatH1ZakCharacterization} \end{equation} It is crucial to observe that the Sobolev space $W^{1,2}(\ensuremath{\mathbb R}^2)$ belongs to the ``borderline'' case of the Sobolev embedding theorem, meaning $\strut W^{1,2}_{{\rm loc}}(\ensuremath{\mathbb R}^2) \not\hookrightarrow L^\infty_{{\rm loc}}(\ensuremath{\mathbb R}^2)$. In fact, it is easy to verify (see e.g.\ \cite[Page~280]{EvansPDE}) for $x_0 := (\frac{1}{2}, \frac{1}{2})^T \in \ensuremath{\mathbb R}^2$ that the function \[
u_0 : \quad
(0,1)^2 \to \ensuremath{\mathbb R}, \quad
x \mapsto \ln \left(\ln \left(1 + \frac{1}{|x - x_0|}\right)\right) \] belongs to $W^{1,2}( (0,1)^2 )$, but is not essentially bounded. Now, using the chain rule and the product rule for Sobolev functions (see e.g.\ \cite[Exercise~11.51(i)]{LeoniSobolev} and \cite[Theorem~1 in Section~5.2.3]{EvansPDE}), we see that if $\varphi \in C_c^\infty ( (0,1)^2 )$ is chosen such that $0 \leq \varphi \leq 1$ and such that $\varphi \equiv 1$ on a neighborhood of $x_0$, then the function \begin{equation}
u : \quad
\ensuremath{\mathbb R}^2 \to [0,\infty), \quad
x \mapsto \varphi(x) \cdot \bigl(1 + \sin(u_0(x))\bigr)
\label{eq:NonContinuousSobolevFunction} \end{equation} satisfies $u \in W^{1,2}(\ensuremath{\mathbb R}^2)$, is continuous and bounded on $\ensuremath{\mathbb R}^2 \setminus \{ x_0 \}$, but $\lim_{x \to x_0} u(x)$ does not exist; this uses that $\lim_{x \to \infty} \sin(x)$ does not exist and that on each small ball $B_\varepsilon (x_0)$, the function $u_0$ attains all values from $(M,\infty)$, for a suitable $M = M(\varepsilon) > 0$.
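For completeness, we briefly sketch the computation behind the membership $u_0 \in W^{1,2}( (0,1)^2 )$ used above. Writing $r := |x - x_0|$, a direct computation shows that
\[
  |\nabla u_0 (x)|
  = \frac{1}{(r + r^2) \, \ln \bigl(1 + \tfrac{1}{r}\bigr)}
  \qquad \text{for } x \in (0,1)^2 \setminus \{ x_0 \} ,
\]
which behaves like $\bigl(r \ln(1/r)\bigr)^{-1}$ as $r \to 0$. Since $\int_0^{1/2} \bigl(r \, \ln^2(1/r)\bigr)^{-1} \, d r < \infty$, integration in polar coordinates around $x_0$ shows that $\nabla u_0$ is square-integrable near $x_0$; since $u_0$ and $\nabla u_0$ are bounded on $(0,1)^2$ away from $x_0$, and $u_0 \in L^2( (0,1)^2 )$, this yields $u_0 \in W^{1,2}( (0,1)^2 )$.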
\subsection{A connection to Fourier series} \label{sec:FourierSeriesConnection}
In this subsection, we show that for any fixed window $g \in L^2(\ensuremath{\mathbb R})$ the unconditional convergence of Janssen's representation in the strong operator topology implies that the partial sums of a certain Fourier series are uniformly bounded in $L^\infty$. This connection will be used in the next subsection to disprove the unconditional convergence of Janssen's representation in the strong operator topology.
Precisely, define $Q := [0,1]^2$. For $H \in L^\infty(Q)$, define the associated multiplication operator as \[
M_H : \quad
L^2(Q) \to L^2(Q), \quad
F \mapsto F \cdot H . \]
It is well-known that $\| M_H \|_{L^2 \to L^2} = \| H \|_{L^\infty}$.
Let us fix any window $g \in L^2(\ensuremath{\mathbb R})$. Given a finite set $I \subset \ensuremath{\mathbb Z}^2$, we define \[
S_I : \quad
L^2(\ensuremath{\mathbb R}) \to L^2(\ensuremath{\mathbb R}), \quad
f \mapsto \sum_{(k,\ell) \in I}
\langle g, T_k M_\ell g \rangle T_k M_\ell f . \] Using \Cref{eq:ZakDiagonalizesTF} and the isometry of the Zak transform, we then see \begin{equation*}
\begin{split}
Z(S_I f)
& = \sum_{(k,\ell) \in I}
\langle Z g, Z[T_k M_\ell g] \rangle_{L^2(Q)} Z[T_k M_\ell f]
= Z f \cdot \sum_{(k,\ell) \in I}
\langle Z g, Z g \cdot e_{\ell,-k} \rangle_{L^2(Q)}
\cdot e_{\ell,-k} \\
& = Z f \cdot \sum_{(k,\ell) \in I}
\langle Z g \cdot \overline{Z g}, e_{\ell,-k} \rangle_{L^2(Q)}
\cdot e_{\ell,-k}
= Z f \cdot \sum_{(k,\ell) \in I}
\widehat{|Z g|^2}(\ell, -k) \cdot e_{\ell,-k} \\
&= M_{\mathcal{F}_{I'} [|Z g|^2]} [Z f] ,
\end{split} \end{equation*} where $I' := \{ (\ell,-k) \colon (k,\ell) \in I \}$ and \[
\mathcal{F}_J H
:= \sum_{\alpha \in J}
\widehat{H}(\alpha) \, e_\alpha
\quad \text{with} \quad
\widehat{H}(\alpha) = \langle H, e_\alpha \rangle_{L^2(Q)}
\quad \text{for} \quad
J \subset \ensuremath{\mathbb Z}^2 . \] In other words, we have \begin{equation}\label{eq:MultiplicationOperatorConnection}
S_I = Z^{-1} \circ M_{\mathcal{F}_{I'}[|Z g|^2]} \circ Z. \end{equation} Given $J \subset \ensuremath{\mathbb Z}^2$, define $J_\ast := \{ (-\ell,k) \colon (k,\ell) \in J \}$ and note $(J_\ast)' = J$. Now, suppose that $(S_I)_I$ converges strongly to some (bounded) operator, as $I \to \ensuremath{\mathbb Z}^2$; this is always the case if Janssen's representation converges unconditionally (to $S$ or some other operator) in the SOT. Then, given any sequence $(J_n)_{n \in \ensuremath{\mathbb N}}$ of finite subsets $J_n \subset \ensuremath{\mathbb Z}^2$ with $J_n \subset J_{n+1}$ and $\bigcup_{n=1}^\infty J_n = \ensuremath{\mathbb Z}^2$, we see $(J_n)_\ast \to \ensuremath{\mathbb Z}^2$
so that the sequence $(S_{(J_n)_\ast})_{n \in \ensuremath{\mathbb N}}$ converges strongly to some bounded operator. By the uniform boundedness principle, this shows $\| S_{(J_n)_\ast} \|_{L^2 \to L^2} \leq C$ for all $n \in \ensuremath{\mathbb N}$ and some $C > 0$.
By \Cref{eq:ZakIsUnitary,eq:MultiplicationOperatorConnection}, and because of $((J_n)_\ast)' = J_n$, this implies \[
\big\| \mathcal{F}_{J_n} [|Z g|^2] \big\|_{L^\infty (Q)}
= \big\| M_{\mathcal{F}_{J_n}[|Z g|^2]} \big\|_{L^2 \to L^2}
\leq C
\qquad \forall \, n \in \ensuremath{\mathbb N} , \]
meaning that the partial Fourier sums $\mathcal{F}_{J_n} [|Z g|^2]$
of the function $|Zg|^2$ are uniformly bounded in $L^\infty(Q)$.
\subsection{The counterexample}\label{sec:MainResult}
In this subsection, we prove the following:
\begin{prop}\label{prop:MainResult}
There exists a Bessel vector $g \in \mathbb{H}^1(\ensuremath{\mathbb R})$ such that the series
defining Janssen's representation of the frame operator $S = S_g = S_{\ensuremath{\mathbb Z} \times \ensuremath{\mathbb Z}, g, g}$
associated to $g$ is \emph{not} unconditionally convergent in the strong operator topology. \end{prop}
To prove the proposition, we consider the function $F := u : (0,1)^2 \to [0,\infty)$ introduced in \Cref{eq:NonContinuousSobolevFunction}. The properties of $F$ that we need are the following: \begin{enumerate}
\item $F$ has compact support in $(0,1)^2$,
say $\operatorname{supp} F \subset (\delta, 1-\delta)^2$
for some $\delta \in (0,\frac{1}{2})$.
\item $F$ is bounded, but discontinuous at $x_0 \in (0,1)^2$
(even after adjusting $F$ on a set of measure zero).
\item $F \in W^{1,2}\bigl( (0,1)^2 \bigr)$. \end{enumerate}
We now extend $F$ by zero to $[0,1)^2$ and then extend $1$-periodically in both coordinates to $\ensuremath{\mathbb R}^2$. Thanks to the compact support of $F$, it is easy to see $F \in W^{1,2}_{{\rm loc}}(\ensuremath{\mathbb R}^2)$.
Furthermore, we consider the function \[
G_0 : \quad
\ensuremath{\mathbb R}^2 \to \ensuremath{\mathbb C}, \quad
(x,\omega) \mapsto e^{2 \pi i \lfloor x \rfloor \omega} ,
\] where for each $x\in\ensuremath{\mathbb R}$, $\lfloor x \rfloor \in \ensuremath{\mathbb Z}$ denotes the unique integer such that $x \in \lfloor x \rfloor + [0,1)$. It is then straightforward to verify that $G_0$ is quasi-periodic (see \Cref{eq:QuasiPeriodicity}), i.e., $G_0(x+m,\omega) = e^{2 \pi i m \omega} G_0(x,\omega)$ and $G_0(x,\omega+m) = G_0(x,\omega)$ for $x,\omega \in \ensuremath{\mathbb R}$ and $m \in \ensuremath{\mathbb Z}$. Since $F$ is $1$-periodic in both coordinates, it is easy to see that $F \cdot G_0$ is quasi-periodic as well.
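Indeed, the two stated identities for $G_0$ follow directly from the relations $\lfloor x + m \rfloor = \lfloor x \rfloor + m$ and $e^{2 \pi i \lfloor x \rfloor m} = 1$ for $x \in \ensuremath{\mathbb R}$ and $m \in \ensuremath{\mathbb Z}$.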
Finally, we choose a smooth function $\psi : \ensuremath{\mathbb R} \to \ensuremath{\mathbb R}$ satisfying $\psi(x) = n$ for all $x \in n + [\delta,1-\delta]$ with $n \in \ensuremath{\mathbb Z}$, and define \[
G : \quad
\ensuremath{\mathbb R}^2 \to \ensuremath{\mathbb C}, \quad
(x,\omega) \mapsto e^{2 \pi i \psi(x) \omega} . \]
Using that $F(x,\omega) = 0$ for $n \in \ensuremath{\mathbb Z}$ and $x \in [n,n+1] \setminus (n +\delta,n +1-\delta)$, it is easy to check $F \cdot G_0 = F \cdot G$, so that $H := F \cdot G \in W^{1,2}_{{\rm loc}}(\ensuremath{\mathbb R}^2) \subset L^2_{{\rm loc}}(\ensuremath{\mathbb R}^2)$ is quasi-periodic.
Since the Zak transform $Z : L^2(\ensuremath{\mathbb R}) \to L^2( (0,1)^2 )$ is unitary, there exists a unique function $g \in L^2(\ensuremath{\mathbb R})$ such that $(Z g)|_{(0,1)^2} = H|_{(0,1)^2}$.
Since both $Z g$ and $H$ are quasi-periodic, this implies $Z g = H$ almost everywhere. Since $H \in W^{1,2}_{{\rm loc}}(\ensuremath{\mathbb R}^2)$ is bounded, \Cref{eq:BesselSequenceZakCharacterization,eq:FatH1ZakCharacterization} show that $g \in \mathbb{H}^1$ is a Bessel vector. Let us assume towards a contradiction that Janssen's representation of the frame operator associated to $g$ converges unconditionally in the strong operator topology.
Note that $|Z g|^2 = |H|^2 = F^2$ is discontinuous at $x_0 \in (0,1)^2$
(since $F$ is discontinuous there and also non-negative), even after possibly changing $|Z g|^2$ on a null-set.
In particular, this implies that the Fourier coefficients $c_\alpha := \widehat{|Z g|^2}(\alpha)$
(for $\alpha \in \ensuremath{\mathbb Z}^2$) satisfy $c = (c_\alpha)_{\alpha \in \ensuremath{\mathbb Z}^2} \notin \ell^1(\ensuremath{\mathbb Z}^2)$, since otherwise the Fourier series of $|Z g|^2$ would be uniformly convergent. This implies $\sum_{\alpha \in \ensuremath{\mathbb Z}^2} |\operatorname{Re} c_\alpha| = \infty$
or $\sum_{\alpha \in \ensuremath{\mathbb Z}^2} |\operatorname{Im} c_\alpha| = \infty$.
For simplicity, we assume the first case; the second case can be treated by similar arguments. This implies that there exists an enumeration $(\alpha_n)_{n \in \ensuremath{\mathbb N}}$ of $\ensuremath{\mathbb Z}^2$
such that $|\sum_{n=1}^N \operatorname{Re} c_{\alpha_n}| \to \infty$ as $N \to \infty$. Indeed, if $\sum_{\alpha \in \ensuremath{\mathbb Z}^2} (\operatorname{Re} c_\alpha)_+ < \infty$ or $\sum_{\alpha \in \ensuremath{\mathbb Z}^2} (\operatorname{Re} c_\alpha)_- < \infty$, this is trivial (for every enumeration); otherwise, existence of the desired enumeration follows from the Riemann rearrangement theorem (see e.g., \cite[Theorem~3.54]{RudinPrinciplesOfAnalysis}).
Now, define $J_n := \{ \alpha_1,\dots,\alpha_n \}$ for $n \in \ensuremath{\mathbb N}$.
We have seen in \Cref{sec:FourierSeriesConnection} that the partial Fourier sums
$\mathcal{F}_{J_n} [|Z g|^2]$ are uniformly bounded in $L^\infty$, say $\| \mathcal{F}_{J_n} [|Z g|^2] \|_{L^\infty} \leq C$ for all $n \in \ensuremath{\mathbb N}$. Since each $\mathcal{F}_{J_n} [|Z g|^2]$ is continuous (in fact, a trigonometric polynomial), this implies \begin{align*}
C
\geq \big|\mathcal{F}_{J_N} |Z g|^2 (0) \big|
& = \Big|
\sum_{\alpha \in J_N}
\widehat{|Z g|^2}(\alpha)
\cdot e^{2 \pi i \langle \alpha,0 \rangle}
\Big| \\
& = \Big|
\sum_{\alpha \in J_N}
c_\alpha
\Big|
\geq \Big| \operatorname{Re} \sum_{n=1}^N c_{\alpha_n} \Big|
\to \infty \text{ as } N \to \infty , \end{align*} which is the desired contradiction.
\subsection{Conditional divergence of Janssen's representation in the operator norm} \label{sec:OperatorNormNonConvergence}
We showed above that \emph{unconditional} convergence of Janssen's representation \eqref{eq:JannssenRepresentation} in the strong operator topology fails for some Bessel vector $g \in \mathbb{H}^1(\ensuremath{\mathbb R})$. A similar argument shows that convergence in the operator norm (with respect to \emph{any} given enumeration) also fails in general: Using \Cref{eq:MultiplicationOperatorConnection} (or more generally the arguments in \Cref{sec:FourierSeriesConnection}), it is relatively easy to see that if for some enumeration $\ensuremath{\mathbb Z}^2 = \{ \alpha_n \colon n \in \ensuremath{\mathbb N} \}$ and $I_n := \{ \alpha_1,\dots,\alpha_n \}$, the sequence of partial sums $(S_{I_n})_{n \in \ensuremath{\mathbb N}}$ of Janssen's representation \eqref{eq:JannssenRepresentation} converges in operator norm (not even necessarily to $S$),
then the associated sequence $(\mathcal{F}_{I_n'} [|Z g|^2])_{n \in \ensuremath{\mathbb N}}$
of partial Fourier sums of $|Z g|^2$ is Cauchy in $L^\infty(Q)$ and thus converges uniformly on $Q$ to a (necessarily continuous) function $\widetilde{H} : Q \to \ensuremath{\mathbb C}$. However, since $|Z g|^2 = |H|^2 = F^2 \in L^\infty(Q) \subset L^2(Q)$, we know that $\mathcal{F}_{I_n '} [|Z g|^2] \to F^2$ with convergence in $L^2(Q)$. Hence, $F^2 = \widetilde{H}$ almost everywhere on $Q$, where $\widetilde{H}$ is continuous. But we saw above that $F^2$ is discontinuous on $Q$, even after (possibly) changing it on a null-set. Thus, we have obtained the desired contradiction:
\begin{prop} There are Bessel vectors $g \in \mathbb{H}^1$ for which Janssen's representation fails to converge conditionally in the operator norm. \end{prop}
However, we leave it as an open question whether Janssen's representation converges conditionally in the strong operator topology for Bessel vectors $g \in \mathbb{H}^1$.
\section*{Acknowledgments} D.G.\ Lee acknowledges support by the DFG Grants PF 450/6-1 and PF 450/9-1. F.\ Philipp was funded by the Carl Zeiss Foundation within the project \textit{DeepTurb---Deep Learning in und von Turbulenz}. F.\ Voigtlaender acknowledges support by the German Science Foundation (DFG) in the context of the Emmy Noether junior research group VO 2594/1-1.
\section*{Author Affiliations}
\end{document} | arXiv |
Rotation system
In combinatorial mathematics, rotation systems (also called combinatorial embeddings or combinatorial maps) encode embeddings of graphs onto orientable surfaces by describing the circular ordering of a graph's edges around each vertex. A more formal definition of a rotation system involves pairs of permutations; such a pair is sufficient to determine a multigraph, a surface, and a 2-cell embedding of the multigraph onto the surface.
Every rotation scheme defines a unique 2-cell embedding of a connected multigraph on a closed oriented surface (up to orientation-preserving topological equivalence). Conversely, any embedding of a connected multigraph G on an oriented closed surface defines a unique rotation system having G as its underlying multigraph. This fundamental equivalence between rotation systems and 2-cell-embeddings was first settled in a dual form by Lothar Heffter in the 1890s[1] and extensively used by Ringel during the 1950s.[2] Independently, Edmonds gave the primal form of the theorem[3] and the details of his study have been popularized by Youngs.[4] The generalization to multigraphs was presented by Gross and Alpert.[5]
Rotation systems are related to, but not the same as, the rotation maps used by Reingold et al. (2002) to define the zig-zag product of graphs. A rotation system specifies a circular ordering of the edges around each vertex, while a rotation map specifies a (non-circular) permutation of the edges at each vertex. In addition, rotation systems can be defined for any graph, while as Reingold et al. define them rotation maps are restricted to regular graphs.
Formal definition
Formally, a rotation system is defined as a pair (σ, θ) where σ and θ are permutations acting on the same ground set B, θ is a fixed-point-free involution, and the group <σ, θ> generated by σ and θ acts transitively on B.
To derive a rotation system from a 2-cell embedding of a connected multigraph G on an oriented surface, let B consist of the darts (or flags, or half-edges) of G; that is, for each edge of G we form two elements of B, one for each endpoint of the edge. Even when an edge has the same vertex as both of its endpoints, we create two darts for that edge. We let θ(b) be the other dart formed from the same edge as b; this is clearly an involution with no fixed points. We let σ(b) be the dart in the clockwise position from b in the cyclic order of edges incident to the same vertex, where "clockwise" is defined by the orientation of the surface.
If a multigraph is embedded on an orientable but not oriented surface, it generally corresponds to two rotation systems, one for each of the two orientations of the surface. These two rotation systems have the same involution θ, but the permutation σ for one rotation system is the inverse of the corresponding permutation for the other rotation system.
Recovering the embedding from the rotation system
To recover a multigraph from a rotation system, we form a vertex for each orbit of σ, and an edge for each orbit of θ. A vertex is incident with an edge if these two orbits have a nonempty intersection. Thus, the number of incidences per vertex is the size of the orbit, and the number of incidences per edge is exactly two. If a rotation system is derived from a 2-cell embedding of a connected multigraph G, the graph derived from the rotation system is isomorphic to G.
To embed the graph derived from a rotation system onto a surface, form a disk for each orbit of σθ, and glue two disks together along an edge e whenever the two darts corresponding to e belong to the two orbits corresponding to these disks. The result is a 2-cell embedding of the derived multigraph, the two-cells of which are the disks corresponding to the orbits of σθ. The surface of this embedding can be oriented in such a way that the clockwise ordering of the edges around each vertex is the same as the clockwise ordering given by σ.
Characterizing the surface of the embedding
From the Euler formula, one can deduce the genus g of the closed orientable surface defined by the rotation system $(\sigma ,\theta )$ (that is, the surface on which the underlying multigraph is 2-cell embedded).[6] Here the numbers of vertices, edges and faces of the embedding are given by $V=|Z(\sigma )|$, $E=|Z(\theta )|$ and $F=|Z(\sigma \theta )|$. We find that
$g=1-{\frac {1}{2}}(V-E+F)=1-{\frac {1}{2}}(|Z(\sigma )|-|Z(\theta )|+|Z(\sigma \theta )|)$
where $Z(\phi )$ denotes the set of the orbits of permutation $\phi $.
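As an illustration, consider the multigraph consisting of two loops attached to a single vertex, encoded by the ground set $B=\{1,2,3,4\}$ and the involution $\theta =(1\,2)(3\,4)$, so that the darts 1, 2 form one loop and the darts 3, 4 form the other. If the rotation at the vertex interleaves the two loops, say $\sigma =(1\,3\,2\,4)$, then $\sigma \theta $ consists of a single orbit, so $V=1$, $E=2$, $F=1$ and the formula gives $g=1-{\tfrac {1}{2}}(1-2+1)=1$; this rotation system describes an embedding of the bouquet of two circles on the torus with a single face. If instead $\sigma =(1\,2\,3\,4)$, then $\sigma \theta $ has three orbits, so $F=3$ and $g=1-{\tfrac {1}{2}}(1-2+3)=0$, corresponding to an embedding in the sphere.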
See also
• Combinatorial map
Notes
1. Heffter (1891), Heffter (1898)
2. Ringel (1965)
3. Edmonds (1960a), Edmonds (1960b)
4. Youngs (1963)
5. Gross & Alpert (1974)
6. Lando & Zvonkin (2004), formula 1.3, p. 38.
References
• Cori, R.; Machì, A. (1992). "Maps, hypermaps and their automorphisms: a survey". Expositiones Mathematicae. 10: 403–467. MR 1190182.
• Edmonds, J. (1960a). "A combinatorial representation for polyhedral surfaces". Notices of the American Mathematical Society. 7: 646.
• Edmonds, John Robert (1960b). A combinatorial representation for oriented polyhedral surfaces (PDF) (Masters). University of Maryland. hdl:1903/24820.
• Gross, J. L.; Alpert, S. R. (1974). "The topological theory of current graphs". Journal of Combinatorial Theory, Series B. 17 (3): 218–233. doi:10.1016/0095-8956(74)90028-8. MR 0363971.
• Heffter, L. (1891). "Über das Problem der Nachbargebiete". Mathematische Annalen. 38 (4): 477–508. doi:10.1007/BF01203357. S2CID 121206491.
• Heffter, L. (1898). "Über metacyklische Gruppen und Nachbarcontigurationen". Mathematische Annalen. 50 (2–3): 261–268. doi:10.1007/BF01448067. S2CID 120691296.
• Lando, Sergei K.; Zvonkin, Alexander K. (2004). Graphs on Surfaces and Their Applications. Encyclopaedia of Mathematical Sciences: Lower-Dimensional Topology II. Vol. 141. Springer-Verlag. ISBN 978-3-540-00203-1..
• Mohar, Bojan; Thomassen, Carsten (2001). Graphs on Surfaces. Johns Hopkins University Press. ISBN 0-8018-6689-8.
• Reingold, O.; Vadhan, S.; Wigderson, A. (2002). "Entropy waves, the zig-zag graph product, and new constant-degree expanders". Annals of Mathematics. 155 (1): 157–187. arXiv:math/0406038. doi:10.2307/3062153. JSTOR 3062153. MR 1888797. S2CID 120739405.
• Ringel, G. (1965). "Das Geschlecht des vollständigen paaren Graphen". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 28 (3–4): 139–150. doi:10.1007/BF02993245. MR 0189012. S2CID 120414651.
• Youngs, J. W. T. (1963). "Minimal imbeddings and the genus of a graph". Journal of Mathematics and Mechanics. 12 (2): 303–315. doi:10.1512/iumj.1963.12.12021. MR 0145512.
| Wikipedia |
\begin{document}
\title{Lower bound on the number of the maximum genus embedding of $K_{n,n}$ \footnotemark[2] \author{Guanghua Dong$^{1,2}$, Han Ren$^{3}$, Ning Wang$^{4}$, Yuanqiu Huang$^{1}$\\ {\small\em 1.Department of Mathematics, Normal University of Hunan, Changsha, 410081, China}\\ \hspace{-1mm}{\small\em 2.Department of Mathematics, Tianjin Polytechnic University, Tianjin, 300387, China}\\ \hspace{-1mm}{\small\em 3.Department of Mathematics, East China Normal University, Shanghai, 200062,China}\\ \hspace{-5mm}{\small\em 4.Department of Information Science and Technology, Tianjin University of Finance }\\ \hspace{-74mm} {\small\em and Economics, Tianjin, 300222, China}\\ }} \footnotetext[2]{\footnotesize \em This work was partially Supported by the China Postdoctoral Science Foundation funded project (Grant No: 20110491248), the National Natural Science Foundation of China (Grant No: 11171114), and the New Century Excellent Talents in University (Grant No: NCET-07-0276).}
\footnotetext[1]{\footnotesize \em E-mail: [email protected](G. Dong). }
\date{} \maketitle
\begin{abstract} In this paper, we provide a method to obtain a lower bound on the number of distinct maximum genus embeddings of the complete bipartite graph $K_{n,n}$ (where $n$ is an odd number), which, in some sense, improves the results of S. Stahl and H. Ren.
\noindent{\bf Key Words:} graph embedding; maximum genus; v-type-edge \\ {\bf MSC(2000):} \ 05C10 \end{abstract}
\noindent {\bf 1. Introduction}
Graphs considered here are all connected and finite. A $surface$ $S$ means a compact and connected two-manifold without boundary. A $cellular$ $embedding$ of a graph $G$ into a surface $S$ is a one-to-one mapping $\psi:$ $G \rightarrow S$ such that each component of $S-\psi(G)$ is homeomorphic to an open disc. The maximum genus $\gamma_M(G)$ of a connected graph \emph{G} is the maximum integer \emph{k} such that there exists an embedding of $G$ into the orientable surface of genus $k$. By Euler's polyhedron formula, if a cellular embedding of a graph $G$ with $n$ vertices, $m$ edges and $r$ faces is on an orientable surface of genus $\gamma$, then $n-m+r=2-2\gamma$. Since $r \geqslant 1$, the genus of any cellular embedding satisfies $\gamma \leqslant \lfloor\frac{1}{2}\beta(G)\rfloor$, where $\beta(G)=m-n+1$ is called the $Betti$ $number$ (or $cycle$ $rank$) of the graph $G$. It follows that $\gamma_M(G)\leqslant \lfloor\frac{1}{2}\beta(G)\rfloor$. If $\gamma_M(G)= \lfloor\frac{1}{2}\beta(G)\rfloor$, then the graph is called $upper$ $embeddable$. It is not difficult to deduce that a graph is upper embeddable if and only if it admits a cellular embedding with at most two faces. Since the introductory investigations on the maximum genus of graphs by Nordhaus, Stewart, and White$^{\cite{nor}}$, this parameter has attracted considerable attention from mathematicians and computer scientists. Up to now, research on the maximum genus of graphs has mainly focused on such aspects as characterizations and complexity, upper embeddability, lower bounds, the enumeration of distinct maximum genus embeddings, $etc.$. For more detailed information, the reader is referred to the survey in $\cite{top}$.
It is well known that the enumeration of distinct maximum genus embeddings plays an important role in the study of the genus distribution problem, which may be used to decide whether two given graphs are isomorphic. It was S. Stahl$^{\cite{sta}}$ who provided the first result on a lower bound for the number of distinct maximum genus embeddings, which is stated as follows:
{\bf Lemma 1$^{\cite{sta}}$} \ \ A connected graph (loops and multi-edges are allowed) of order $n$ with degree sequence $d_1$, $d_2$, $\dots$, $d_{n}$ has at least $$(d_1-5)!(d_2-5)!(d_3-5)!(d_4-5)!\prod_{i=5}^{n}(d_{i}-2)!$$ distinct orientable embeddings with at most two facial walks, where $m!=1$ whenever $m\leqslant0$.
But up to now, apart from \cite{sta} and \cite{ren}, there are few results concerning the number of maximum genus embeddings of graphs. In this paper, we provide a method to enumerate distinct maximum genus embeddings of the complete bipartite graph $K_{n,n}$ (where $n$ is an odd number), and offer a lower bound which is better than those of S. Stahl$^{\cite{sta}}$ and H. Ren$^{\cite{ren}}$ in some sense. Furthermore, the enumerative method below can be applied to any maximum genus embedding, unlike the method in \cite{sta}, which is restricted to upper embeddable graphs. Terminologies and notations not explained here can be found in $\cite{bon}$ for general graph theory, and in $\cite{liu}$ and $\cite{moh}$ for topological graph theory.
\noindent {\bf 2. Main results}
A simple graph $G$ is called a $complete$ $bipartite$ $graph$ if its vertex set can be partitioned into two subsets $X$ and $Y$ so that every edge has one end in $X$ and one end in $Y$, and every vertex in $X$ is joined to every vertex in $Y$. We denote a $complete$ $bipartite$ $graph$ $G$ with bipartition $X$ and $Y$ by $G_{[X][Y]}$. A 2-$path$ is called a $v$-$type$-$edge$, and is denoted by $\mathcal {V}$. Let $\psi(G)$ be an embedding of a graph $G$. We say that a $v$-$type$-$edge$ is inserted into $\psi(G)$ if the three vertices of the $v$-$type$-$edge$ are inserted into corners of the faces in $\psi(G)$, yielding an embedding of $G+\mathcal {V}$. The embedding $\psi(G)$ of $G$ is called a $one$-$face$-$embedding$ (or $two$-$face$-$embedding$) if the total face number of $\psi(G)$ is one (or two). The following observation can be easily obtained and is essential in the proof of Theorem A.
{\bf Observation} \ \ Let $\psi(G)$ be an embedding of a graph $G$. We can insert a $v$-$type$-$edge$ $\mathcal {V}$ into $\psi(G)$ to get an embedding $\rho(G+\mathcal {V})$ of $G+\mathcal {V}$ so that the face number of $\rho(G+\mathcal {V})$ is not more than that of $\psi(G)$.
{\bf Theorem A} \ \ For $n\equiv1 \ (mod \ 2)$, the number of the distinct maximum genus embedding of the complete bipartite graph $K_{n,n}$ is at least \begin{displaymath} 2^{\frac{n-1}{2}}\times \big((n-2)!! \big)^{n}\times\big( (n-1)! \big)^{n}. \end{displaymath}
{\bf Proof} \ \ Let $n=2s+1$ and $V(K_{n,n})=\{x_1, x_2, \dots, x_{n}\}\cup \{y_1, y_2, \dots, y_{n}\}$, where $X=\{x_1, x_2, \dots, x_{n}\}$ and $Y=\{y_1, y_2, \dots, y_{n}\}$ are the two independent sets of $K_{n,n}$. We denote the $v$-$type$-$edge$ $y_{2i}x_{j}y_{2i+1}$ by $\mathcal {V}_{j,i}$, where $i\in \{1,2, \dots, s\}$ and $j\in\{1,2, \dots, n\}$.
\setlength{\unitlength}{0.97mm} \begin{center} \begin{picture}(100,40)
\put(-20,5) {\begin{picture}(10,10)
\put(0,20){\circle*{1.5}}
\put(10,20){\circle*{1.5}}
\put(20,20){\circle*{1.5}}
\put(20,35){\circle*{1.5}}
\put(20,5){\circle*{1.5}}
\qbezier(20,35)(6,35)(0,20)
\put(20,35){\line(-2,-3){10}}
\put(20,35){\line(0,-1){15}}
\qbezier(20,5)(6,5)(0,20)
\qbezier(20,5)(21,17)(10,20)
\qbezier(20,5)(8,15)(19.3,20)
\begin{footnotesize}
\put(-6,20){{$y_1$}}
\put(7,17){{$y_2$}}
\put(20.5,17){{$y_3$}}
\put(21,37){{$x_1$}}
\put(21,1){{$x_2$}}
\end{footnotesize}
\begin{small}
\put(-2.5,-6){{\bf G$_{[x_1, x_2][y_1, y_2, y_3]}$}}
\end{small}
\end{picture}}
\put(22,5) {\begin{picture}(10,10)
\put(0,20){\circle*{1.5}}
\put(10,20){\circle*{1.5}}
\put(20,20){\circle*{1.5}}
\put(30,20){\circle*{1.5}}
\put(40,20){\circle*{1.5}}
\put(20,35){\circle*{1.5}}
\put(20,5){\circle*{1.5}}
\qbezier(20,35)(6,35)(0,20)
\qbezier(20,35)(34,34)(30.3,20.7)
\put(20,35){\line(-2,-3){10}}
\qbezier(20,35)(22,30)(39.3,20)
\put(20,35){\line(0,-1){15}}
\qbezier(20,5)(8,15)(19.3,20)
\qbezier(20,5)(6,5)(0,20)
\put(20,5){\line(2,3){9.6}}
\qbezier(20,5)(21,17)(10,20)
\put(20,5){\line(4,3){19.5}}
\begin{footnotesize}
\put(-6,20){{$y_1$}}
\put(7,17){{$y_2$}}
\put(20.5,17){{$y_3$}}
\put(26,21){{$y_4$}}
\put(40,16.5){{$y_5$}}
\put(21,37){{$x_1$}}
\put(21,1){{$x_2$}}
\end{footnotesize}
\begin{small}
\put(5,-6){{\bf G$_{[x_1, x_2][y_1, y_2, \dots, y_5]}$}}
\end{small}
\end{picture}}
\put(84,5){\begin{picture}(10,10)
\put(0,35){\circle*{1.5}}
\put(0,35){\line(0,-1){15}}
\put(0,20){\circle*{1.5}}
\put(0,35){\line(2,-3){10}}
\put(10,20){\circle*{1.5}}
\qbezier(0,35)(34,25)(20.9,20)
\put(20,20){\circle*{1.5}}
\put(30,20){\circle*{1.5}}
\put(40,20){\circle*{1.5}}
\put(20,35){\circle*{1.5}}
\put(20,5){\circle*{1.5}}
\qbezier(20,35)(6,35)(0,20)
\qbezier(20,35)(34,34)(30.3,20.7)
\put(20,35){\line(-2,-3){10}}
\qbezier(20,35)(22,30)(39.3,20)
\put(20,35){\line(0,-1){15}}
\qbezier(20,5)(8,15)(19.3,20)
\qbezier(20,5)(6,5)(0,20)
\put(20,5){\line(2,3){9.6}}
\qbezier(20,5)(21,17)(10,20)
\put(20,5){\line(4,3){19.5}}
\begin{footnotesize}
\put(-6,20){{$y_1$}}
\put(7,17){{$y_2$}}
\put(20.5,17){{$y_3$}}
\put(26,21){{$y_4$}}
\put(40,16.5){{$y_5$}}
\put(21,37){{$x_1$}}
\put(21,1){{$x_2$}}
\put(-5,35){{$x_3$}}
\end{footnotesize}
\begin{small}
\put(-7,-6){{\bf G$_{[x_1, x_2][y_1, y_2, \dots, y_5]}\cup x_3y_1\cup\mathcal {V}_{3,1}$}}
\end{small}
\end{picture}}
\end{picture} \end{center}
{\bf Claim 1:} \ \ For $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}$, the number of distinct $one$-$face$-$embedding$s is at least $2^{s}\times((2s-1)!!)^2$.
There are 2 different ways to embed $G_{[x_1, x_2][y_1, y_2, y_{3}]}$ on an orientable surface so that the embedding is a $one$-$face$-$embedding$. Select any one of them and denote its face boundary by $W_0$. In $W_0$, there are three $face$-$corner$s containing $x_1$ and three containing $x_2$. So, there are 3 different ways to put $\mathcal {V}_{1,2}$ in $W_0$, and 3 different ways to put $\mathcal {V}_{2,2}$ in $W_0$. Therefore, the total number of ways to put $\mathcal {V}_{1,2}\cup\mathcal {V}_{2,2}$ in $W_0$ is $3\times3=9$. For each of the above 9 ways, there are 2 different ways to make the embedding of $G_{[x_1, x_2][y_1, y_2, \dots, y_{5}]}$ a $one$-$face$-$embedding$. So, for each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, y_{3}]}$, there are $3\times3\times2$ different ways to add $\mathcal {V}_{1,2}\cup \mathcal {V}_{2,2}$ to $G_{[x_1, x_2][y_1, y_2, y_{3}]}$ to get a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{5}]}$.
Similarly, we can get that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{5}]}$, there are $5\times5\times2$ different ways to add $\mathcal {V}_{1,3}\cup \mathcal {V}_{2,3}$ to $G_{[x_1, x_2][y_1, y_2, \dots, y_{5}]}$ to get a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{7}]}$.
In general, we have that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{2k-1}]}$, there are $(2k-1)\times(2k-1)\times2$ different ways to add $\mathcal {V}_{1,k}\cup \mathcal {V}_{2,k}$ to $G_{[x_1, x_2][y_1, y_2, \dots, y_{2k-1}]}$ to get a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{2k+1}]}$.
From the above we can get that the number of distinct $one$-$face$-$embedding$s of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}$ is at least \begin{eqnarray*} 2\times(3\times3\times2)\times(5\times5\times2)\times(7\times7\times2)\times \dots \times((2s-1)\times(2s-1)\times2) \\ \lefteqn{ = 2^{s}\times((2s-1)!!)^2. } \hspace*{141mm} \\ \end{eqnarray*}
{\bf Claim 2:} \ \ For each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}$, there are at least $2\times(2s-1)!!\times2^{2s}$ different ways to extend it to a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}$.
Let $\mathcal {E}_1$ be an arbitrary $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}$. In $\mathcal {E}_1$, there are two different $face$-$corner$s containing $y_{i}$ for each $i=1,2,3$. So, there are $2\times2\times2(=8)$ different ways to add $y_1x_3\cup\mathcal {V}_{3,1}$ to $\mathcal {E}_1$ to obtain a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}$. For each of the above 8 $one$-$face$-$embedding$s of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}$, there are 3 different $face$-$corner$s containing $x_3$ and 2 different $face$-$corner$s containing $y_{i}$ for each $i=4,5$. So, for each of the above 8
$one$-$face$-$embedding$s of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}$, there are $3\times2\times2$ different ways to add $\mathcal {V}_{3,2}$ to $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}$ to obtain a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}\cup\mathcal {V}_{3,2}$.
In general, we have that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}\cup\mathcal {V}_{3,2}\cup\dots\cup\mathcal {V}_{3,k-1}$, there are $(2k-1)\times2\times2$ different ways to add $\mathcal {V}_{3,k}$ to $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}\cup\mathcal {V}_{3,2}\cup\dots\cup\mathcal {V}_{3,k-1}$ to get a $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}\cup y_1x_3\cup\mathcal {V}_{3,1}\cup\mathcal {V}_{3,2}\cup\dots\cup\mathcal {V}_{3,k-1}\cup\mathcal {V}_{3,k}$.
From the above we can get that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2][y_1, y_2, \dots, y_{n}]}$, there are at least \begin{eqnarray*} (2\times2\times2)\times(3\times2\times2)\times(5\times2\times2)\times \dots \times((2s-1)\times2\times2) \\ \lefteqn{ = 2\times(2s-1)!!\times2^{2s} } \hspace*{121mm} \\ \end{eqnarray*}
\hspace{-8mm} different ways to extend it to a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}$.
{\bf Claim 3:} \ \ For each $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}$, there are at least $3\times(2s-1)!!\times3^{2s}$ different ways to extend it to a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3, x_4][y_1, y_2, \dots, y_{n}]}$.
Let $\mathcal {E}_2$ be an arbitrary $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}$. In $\mathcal {E}_2$, there are three different $face$-$corner$s containing $y_{i}$ for each $i=1,2,3$. So, there are $3\times3\times3(=27)$ different ways to add $y_1x_4\cup\mathcal {V}_{4,1}$ to $\mathcal {E}_2$ to obtain a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}$. For each of the above 27 $one$-$face$-$embedding$s of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}$, there are 3 different $face$-$corner$s containing $x_4$ and 3 different $face$-$corner$s containing $y_{i}$ for each $i=4,5$. So, for each of the above 27
$one$-$face$-$embedding$s of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}$, there are $3\times3\times3$ different ways to add $\mathcal {V}_{4,2}$ to $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}$ to obtain a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}\cup\mathcal {V}_{4,2}$.
In general, we have that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}\cup\mathcal {V}_{4,2}\cup\dots\cup\mathcal {V}_{4,k-1}$, there are $(2k-1)\times3\times3$ different ways to add $\mathcal {V}_{4,k}$ to $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}\cup\mathcal {V}_{4,2}\cup\dots\cup\mathcal {V}_{4,k-1}$ to get a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}\cup y_1x_4\cup\mathcal {V}_{4,1}\cup\mathcal {V}_{4,2}\cup\dots\cup\mathcal {V}_{4,k-1}\cup\mathcal {V}_{4,k}$.
From the above we can get that for each $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3][y_1, y_2, \dots, y_{n}]}$, there are at least \begin{eqnarray*} (3\times3\times3)\times(3\times3\times3)\times(5\times3\times3)\times \dots \times((2s-1)\times3\times3) \\ \lefteqn{ = 3\times(2s-1)!!\times3^{2s} } \hspace*{121mm} \\ \end{eqnarray*}
\hspace{-8mm} different ways to extend it to a $one$-$face$-$embedding$ of $G_{[x_1, x_2, x_3, x_4][y_1, y_2, \dots, y_{n}]}$.
Similarly, we can get the following general result.
{\bf Claim 4:} \ \ For each $one$-$face$-$embedding$ of $G_{[x_1, x_2, \dots, x_{k-1}][y_1, y_2, \dots, y_{n}]}$, there are at least $(k-1)\times(2s-1)!!\times(k-1)^{2s}$ different ways to extend it to a $one$-$face$-$embedding$ of $G_{[x_1, x_2, \dots, x_{k-1}, x_{k}][y_1, y_2, \dots, y_{n}]}$.
Noticing that a $one$-$face$-$embedding$ of a graph must be a maximum genus embedding, we can get, from Claims 1--4, that the number of distinct maximum genus embeddings of $K_{n,n}$ is at least \begin{eqnarray*} \{2^{s}\times((2s-1)!!)^2\}\times\{2\times(2s-1)!!\times2^{2s}\}\times\{3\times(2s-1)!! \\ \lefteqn{ \times3^{2s}\}\times\dots \times\{2s\times(2s-1)!!\times(2s)^{2s}\} } \hspace*{109mm} \\ \lefteqn{ = 2^{s}\times((2s-1)!!)^{2s+1}\times((2s)!)^{2s+1}} \hspace*{114mm} \\ \lefteqn{ = 2^{\frac{n-1}{2}}\times((n-2)!!)^{n}\times((n-1)!)^{n}. \hspace*{75mm} \Box} \hspace*{114mm} \\ \end{eqnarray*}
{\bf Remark} \ \ A direct comparison shows that the result in Theorem A is much better than that of Lemma 1$^{\cite{sta}}$ when $n\leqslant9$.
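For instance, for $n=3$ every vertex of $K_{3,3}$ has degree $3$, so (using the convention $m!=1$ for $m\leqslant0$) the bound of Lemma 1 equals $\prod_{i=5}^{6}(3-2)!=1$, while Theorem A gives $2^{1}\times(1!!)^{3}\times(2!)^{3}=16$; for $n=5$ the corresponding bounds are $\prod_{i=5}^{10}(5-2)!=46656$ and $2^{2}\times(3!!)^{5}\times(4!)^{5}=7739670528$, respectively.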
In \cite{ren}, the second author of the present paper showed that a connected loopless graph of order $n$ has at least $\frac{1}{4^{\gamma_{M}(G)}}\prod_{v\in V(G)}(d(v)-1)!$ distinct maximum genus embeddings. Let $f_{1}(n)= 2^{\frac{n-1}{2}}\times\big( (n-2)!! \big)^{n}\times \big( (n-1)! \big)^{n}$ and $f_2(n)= \frac{1}{4^{\gamma_{M}(K_{n,n})}}\prod_{v\in V(K_{n,n})}\big(d(v)-1 \big)!=\frac{1}{4^{\frac{(n-1)(n-1)}{2}}}\times\big((n-1)! \big)^{2n} $. Through a computation we can get $f_1(3) - f_2(3)=12$ and $f_1(5) - f_2(5)=6772211712$. So, when $n\leqslant5$ the result obtained in Theorem A is much better than that of \cite{ren}.
{\footnotesize
}
\end{document} | arXiv |
\begin{document}
\title{The horizon problem for prevalent surfaces}
\begin{abstract} We investigate the box dimensions of the horizon of a fractal surface defined by a function $f \in C[0,1]^2 $. In particular we show that a prevalent surface satisfies the `horizon property', namely that the box dimension of the horizon is one less than that of the surface. Since a prevalent surface has box dimension 3, this does not give us any information about the horizon of surfaces of dimension strictly less than 3. To examine this situation we introduce spaces of functions with surfaces of upper box dimension at most $\alpha$, for $\alpha \in [2,3)$. In this setting the behaviour of the horizon is more subtle. We construct a prevalent subset of these spaces where the lower box dimension of the horizon lies between the dimension of the surface minus one and 2. We show that in the sense of prevalence these bounds are as tight as possible if the spaces are defined purely in terms of dimension. However, if we work in Lipschitz spaces, the horizon property does indeed hold for prevalent functions. Along the way, we obtain a range of properties of box dimensions of sums of functions. \end{abstract}
\section{Introduction and main results}
In this section we introduce the horizon problem, that is the problem of relating the dimension of the horizon of a fractal surface to the dimension of the surface itself. Our main results, which are of a generic nature, depend on the notion of prevalence.
\subsection{The horizon problem}
For $d \in \mathbb{N}$ let \[ C[0,1]^d = \{f:[0,1]^d \to \mathbb{R}\, \big\vert \text{ $f$ is continuous} \}. \] The {\it graph} of a function $f \in C[0,1]^d$ is the set $G_f=\{(\mathbf{x},f(\mathbf{x})):\mathbf{x} \in [0,1]^d \} \subset [0,1]^d\times \mathbb{R}$. We shall refer to $G_f$ as a {\it curve} when $d=1$ and as a {\it surface} when $d=2$. \begin{defn} Let $f \in C[0,1]^2$. The horizon function, $H(f) \in C[0,1]$, of $f$ is defined by \[ H(f)(x) = \sup_{y \in [0,1]} f(x,y). \] \end{defn} We are interested in the relationship between the dimension of the graph of a fractal surface and the dimension of the graph of its horizon. A `rule of thumb' is that the dimension of the horizon should be one less than the dimension of the surface. When this is the case we will say that the surface satisfies the `horizon property'. However, the horizon property is certainly not true in general. Consider, for example, a surface which is very smooth except for one small region at the bottom of a depression where it has dimension 3. This irregularity would not affect the horizon which would simply have dimension 1. Thus we can say little about the relationship between the dimensions of the surface and its horizon for \emph{all} surfaces. Nevertheless, one can consider the `generic' situation or alternatively one can restrict attention to specific classes of fractal surfaces. \\ \\ In \cite{randomhorizon,brownianhorizon} potential theoretic methods were used to find bounds for the Hausdorff dimension for the horizon of index-$\alpha$ Brownian fields. In particular the index-$\tfrac{1}{2}$ Brownian surfaces almost surely satisfies the horizon property for Hausdorff dimension. \\ \\ Here we consider the horizon problem for box dimension. We will say that $f \in C[0,1]^2$ satisfies the {\it horizon property (for box dimension)} if the box dimensions of $G_f$ and $G_{H(f)}$ exist and \[ \dim_\text{B} G_{H(f)} = \dim_\text{B} G_{f} -1. \] We examine the horizon problem for a generic surface; of course, there are many ways of defining `generic', but since $C[0,1]^2$ is an infinite dimensional vector space it is natural to appeal to the notion of `prevalence'.
\begin{comment} It is natural to investigate the relationship between the dimension of a surface and the dimension of its horizon. In \cite{randomhorizon} potential theory is used to estimate the Hausdorff dimension of random surfaces. In particular, upper and lower bounds are given for the Hausdorff dimension for the horizon of index-$\alpha$ Brownian fields. Brownian surfaces were also studied more recently in \cite{brownianhorizon}. The Authors provide partial improvements to the estimates in \cite{randomhorizon} and, in particular, they show that the horizon of an index-$\tfrac{1}{2}$ Brownian surface almost surely has the same H\"older exponent as the original surface. As a result of this, index-$\tfrac{1}{2}$ Brownian surfaces almost surely satisfy the horizon property for Hausdorff dimension. The authors also provide numerical results suggesting that this result remains true for arbitrary index. \end{comment}
\subsection{Prevalence}
`Prevalence' provides one way of describing the generic behavior of a class of mathematical objects. In a finite dimensional vector space Lebesgue measure provides a natural tool for deciding if a property is generic. Namely, if the set of elements without the property is a Lebesgue null set then the property is `generic' from a measure theoretical point of view. However, when the space in question is infinite dimensional this approach breaks down because there is no useful analogue of Lebesgue measure in the infinite dimensional setting. The theory of prevalence has been developed to address this situation, see the excellent survey papers \cite{prevalence1,prevalence}. We give a brief reminder of the key definitions.
\begin{defn} A \emph{completely metrizable topological vector space} is a vector space $X$ on which there exists a metric $d$ such that $(X,d)$ is complete and such that the vector space operations are continuous with respect to the topology induced by $d$. \end{defn}
Some sources also require that every point in $X$ is closed in the topology induced by $d$. This will be trivially true in all of our examples and so we omit it, see \cite{rudin} for more details. Note that a complete normed space is a completely metrizable topological vector space with the topology induced by the norm.
\begin{defn} Let $X$ be a completely metrizable topological vector space. A set $F \subseteq X$ is \emph{prevalent} if the following conditions are satisfied. \\ 1)\, \,$F$ is a Borel set;\\ 2) There exists a Borel measure $\mu$ on $X$ and a compact set $K \subseteq X$ such that $0<\mu(K) < \infty$ and \[ \mu\big(X \setminus (F+x)\big) = 0 \] for all $x \in X$. \\ The complement of a prevalent set is called a \emph{shy} set. \end{defn}
Notice that we can assume that $\mu$ is supported by $K$ in the above definition, otherwise we could replace $\mu$ with the measure $\mu \vert_K$ which would still satisfy condition (2). \\ \\ Since prevalence was introduced as an analogue of `Lebesgue-almost all' for infinite dimensional spaces it is perhaps not surprising that the measure $\mu$ mentioned in the above definition is often Lebesgue measure concentrated on a finite dimensional subset of $X$.
\begin{defn} A $k$-dimensional subspace $P \subseteq X$ is called a \emph{probe} for a Borel set $F\subseteq X$ if \[ \mathcal{L}_P \big(X \setminus (F+x)\big) = 0 \] for all $x \in X$ where $\mathcal{L}_P$ denotes $k$-dimensional Lebesgue measure on $P$ in the natural way. We call $F$ $k$-\emph{prevalent} if it admits a $k$-dimensional probe. \end{defn}
The existence of a probe is clearly a sufficient condition for a set $F$ to be prevalent.
\subsection{Main results}
We will be concerned with spaces of functions defined by the box dimensions of their graphs. Recall that the {\it lower} and {\it upper box dimensions} (or {\it box-counting dimensions}) of a bounded subset $F$ of $\mathbb{R}^d$ are given by \begin{equation}\label{lbox} \underline{\dim}_\text{B} F = \underline{\lim}_{\delta \to 0} \frac{\log N_\delta (F)}{-\log \delta} \end{equation} and \begin{equation}\label{ubox} \overline{\dim}_\text{B} F = \overline{\lim}_{\delta \to 0} \frac{\log N_\delta (F)}{-\log \delta} \end{equation} respectively, where $N_\delta (F)$ is the number of cubes in a $\delta$-mesh which intersect $F$. If $\underline{\dim}_\text{B} F = \overline{\dim}_\text{B} F$ then we call the common value the {\it box dimension} of $F$ and denote it by $\dim_\text{B} F$. For basic properties of box dimension see \cite{falconer}. \\ \\ Let $d\in \mathbb{N}$ and $\alpha \in [d,d+1]$, and define \[ C_\alpha[0,1]^d= \{f \in C[0,1]^d : \overline{\dim}_\text{B} G_f \leqslant \alpha \} \] and \[ D_\alpha[0,1]^d = \{f \in C_\alpha[0,1]^d : \underline{\dim}_\text{B} G_f =\overline{\dim}_\text{B} G_f = \alpha \}. \]
There is a natural complete metric $d_{\alpha,d}$ on $C_\alpha[0,1]^d$ which we will construct in Section 3. We write $d_\infty$ to denote the metric on $C_\alpha[0,1]^d$ defined by the norm $\| \cdot \|_\infty$. \\ \\ The following result, that a prevalent surface has upper and lower box dimension as big as possible, is included to put our results on horizons into context.
\begin{thm} \label{01}\hspace{1mm} \begin{itemize} \item[(1)] $D_{d+1}[0,1]^d$ is a 1-prevalent subset of $(C[0,1]^d,d_\infty)$;
\item[(2)] For $\alpha \in [d,d+1)$ the set $D_\alpha[0,1]^d$ is a 1-prevalent subset of $(C_\alpha[0,1]^d, d_{\alpha,d})$. \end{itemize} \end{thm}
Indeed, it was shown in \cite{mcclure} that the graph of a prevalent function in $(C[0,1],d_\infty)$ has upper box dimension 2. Also, Theorem \ref{01} (1) was very recently obtained in \cite{lowerprevalent}, and a slight weakening of Theorem \ref{01} (1) (with `1-prevalent' replaced just by `prevalent') was given in \cite{shaw} using a completely different method without a probe. \\ \\ We will present our results on horizons for surfaces $G_f $ where $f \in C[0,1]^2$ though they may be extended without difficulty to `horizons' of higher dimensional graphs. Our main result is in two parts. Firstly, a prevalent surface satisfies the horizon property. Specifically, in Theorem \ref{main} (1), we show that a prevalent surface, which according to Theorem \ref{01} (1) has box dimension 3, has a horizon with box dimension 2. However, this does not give us any information about the horizon dimensions of surfaces with box dimension strictly less than 3. Thus in the second part, Theorem \ref{main} (2), we give bounds on the box dimension of the horizon of a prevalent surface in $C_\alpha[0,1]^2$. To formulate this, we let \[ F_\alpha[0,1]^2 = \{f \in C_\alpha[0,1]^2 : \dim_\text{B} G_f = \alpha \text{ and } \alpha-1 \leqslant \underline{\dim}_\text{B} G_{H(f)} \leqslant \overline{\dim}_\text{B} G_{H(f)} \leqslant 2\}. \] Thus $F_\alpha[0,1]^2$ is the set of functions in $C_\alpha[0,1]^2$ for which the box dimension exists and is as big as possible and for which the upper and lower box dimension of the horizon are bounded below by the box dimension of the original surface minus 1. Note that taking $\alpha=3$ the box dimension of the horizon exists for all $f \in F_3 [0,1]^2$ and is equal to the box dimension of the original surface minus 1.
\begin{thm} \label{main}\hspace{1mm} \begin{itemize} \item[(1)] $F_{3}[0,1]^2$ is a 1-prevalent subset of $(C[0,1]^2,d_\infty)$;
\item[(2)] For $\alpha \in [2,3)$ the set $F_\alpha[0,1]^2$ is a 1-prevalent subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$. \end{itemize} \end{thm} For $\alpha<3$ we do not have precise bounds on the box dimension of the horizon of a prevalent surface. However, the following theorem shows that our bounds are as tight as possible.
\begin{thm} \label{tight} Let $\alpha \in [2,3)$ and let $U_\alpha[0,1]^2$ and $L_\alpha[0,1]^2$ be defined by \[ U_\alpha[0,1]^2 = \{f \in C_\alpha[0,1]^2 : \dim_\text{\emph{B}} G_f = \alpha \text{ and } \alpha-1 \leqslant \underline{\dim}_\text{\emph{B}} G_{H(f)} \leqslant \overline{\dim}_\text{\emph{B}} G_{H(f)} < 2\} \] and \[ L_\alpha[0,1]^2 = \{f \in C_\alpha[0,1]^2 : \dim_\text{\emph{B}} G_f = \alpha \text{ and } \alpha-1 < \underline{\dim}_\text{\emph{B}} G_{H(f)} \leqslant \overline{\dim}_\text{\emph{B}} G_{H(f)} \leqslant 2\}. \] Then \begin{itemize} \item[(1)] $U_\alpha[0,1]^2$ is not a prevalent subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$;
\item[(2)] $L_\alpha[0,1]^2$ is not a prevalent subset of $(C_\alpha[0,1]^2,d_{\alpha,2})$.
\end{itemize}
\end{thm}
Theorem \ref{tight} shows that we cannot improve Theorem \ref{main} (2) for the box dimensions of the horizon of a prevalent function in $C_\alpha[0,1]^2$. However, the horizon property does hold for prevalent functions if we consider the subspace $L_\alpha[0,1]^2$ of $C_\alpha[0,1]^2$ consisting of $\alpha$-Lipschitz functions, that is functions for which \begin{equation} \label{lip} \text{Lip}_\alpha(f) = \sup_{\substack{x,y \in [0,1]^2 \\ x \neq y}} \frac{\lvert f(x)-f(y) \rvert}{\, \, \,\,\, \, \, \, \, \lvert x-y\rvert^{3-\alpha}} <\infty. \end{equation} It is easily verified that \[
\|f\|_{\text{\rm{Lip}}_\alpha} = \| f \|_\infty + \text{\rm{Lip}}_\alpha(f) \] defines a complete norm on $L_\alpha[0,1]^2$, and we write $d_{\text{\rm{Lip}}_\alpha}$ for the corresponding metric. \\ \\ The Lipschitz condition controls the box dimension of both the surface and the horizon. Thus if $f \in L_\alpha[0,1]^2$ then $\overline{\dim}_\text{B} G_{f} \leqslant \alpha$ (though the converse is not true) and \begin{equation}\label{horlip} \overline{\dim}_\text{B} G_{H(f)} \leqslant \alpha -1, \end{equation} and this enables the following theorem.
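Before stating the theorem, we briefly indicate why (\ref{horlip}) holds: for $f \in L_\alpha[0,1]^2$ and $x,x' \in [0,1]$ we have \[ \lvert H(f)(x) - H(f)(x') \rvert \leqslant \sup_{y \in [0,1]} \lvert f(x,y)-f(x',y) \rvert \leqslant \text{Lip}_\alpha(f) \, \lvert x-x' \rvert^{3-\alpha} , \] so $H(f)$ is $(3-\alpha)$-H\"older, and the standard estimate for graphs of H\"older functions (see, for example, \cite{falconer}) gives $\overline{\dim}_\text{B} G_{H(f)} \leqslant 2-(3-\alpha) = \alpha - 1$.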
\begin{thm}\label{lipthm} The set \[ \{f \in L_\alpha[0,1]^2 : \dim_\text{\emph{B}} G_f = \alpha \text{ and } \dim_\text{\emph{B}} G_{H(f)} = \alpha - 1 \} \] is a 1-prevalent subset of $(L_\alpha[0,1]^2, d_{\text{\emph{Lip}}_\alpha})$. \end{thm}
Thus a prevalent function in the space of $\alpha$-Lipshitz functions satisfies the horizon property for box dimension. \\ \\ It would clearly be desirable to obtain analogues of Theorem \ref{main} (2) for \emph{Hausdorff} dimension, $\dim_\text{\text{H}}$. However, it follows from a category theoretic argument in \cite{graphsums} that the set \[ H_\alpha[0,1]^2 = \{f \in C[0,1]^2 : \dim_\text{\text{H}} G_f \leqslant \alpha \} \] is not a subspace of $C[0,1]^2$ for $\alpha \in [2, 3)$ because it is not closed under addition (see the remarks at the end of Section 2). As a consequence, if one were to search for an analogous space to $C_\alpha[0,1]^2$ for Hausdorff dimension one would have to look for a subspace of $C[0,1]^2$ contained in $H_\alpha[0,1]^2$ which would necessarily lie \emph{strictly} inside $H_\alpha[0,1]^2$.
\section{Box dimensions of functions}
The box counting dimensions were defined in (\ref{lbox})--(\ref{ubox}), and in this section we present various technical results concerning the box dimension of fractal curves and surfaces. \\ \\ Let $d \in \mathbb{N}$, let $f \in C[0,1]^d$ and let $S \subseteq [0,1]^d$. We define the \emph{range} of $f$ on $S$ as \[ R_f(S) = \sup_{x,y \in S} \lvert f(x)-f(y)\rvert. \] For $\delta>0$ let $\Delta_\delta^d$ be the set of grid cubes in the $\delta$-mesh on $[0,1]^d$ defined by \[ \Delta_\delta^d = \bigcup_{n_1=0}^{\lceil \delta^{-1} \rceil-1} \cdots \bigcup_{n_d=0}^{\lceil \delta^{-1} \rceil-1} \Big\{[n_1 \delta,(n_1+1)\delta] \times \cdots \times [n_d \delta,(n_d+1)\delta]\Big\}. \] It follows that \begin{equation} \label{boxupper} \delta^{-1}\sum_{S \in \Delta_\delta^d} R_f(S) \leqslant N_\delta (G_f) \leqslant 2(\delta^{-1}+1)^d +\delta^{-1}\sum_{S \in \Delta_\delta^d} R_f(S), \end{equation} see \cite{falconer}, so, given a non-constant $f$, \begin{equation} N_\delta (G_f) \asymp \delta^{-1}\sum_{S \in \Delta_\delta^d} R_f(S), \label{estimate} \end{equation} i.e., there exist constants $\delta_f,C_f>0$ such that for $\delta<\delta_f$ \[ \frac{1}{C_f} \leqslant \frac{N_\delta (G_f) }{\delta^{-1}\sum_{S \in \Delta_\delta^d} R_f(S)} \leqslant C_f . \] The remainder of this section will be devoted to studying the box dimensions of sums of functions. We will assume throughout that $f+g, f$ and $g$ are all non-constant, as otherwise the proofs are trivial.
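As a simple illustration of how (\ref{boxupper}) is used, suppose that $f$ is $\beta$-H\"older for some $\beta \in (0,1]$, say $\lvert f(x)-f(y)\rvert \leqslant c \lvert x-y \rvert^\beta$ for all $x,y \in [0,1]^d$. Then $R_f(S) \leqslant c\,(\sqrt{d}\,\delta)^\beta$ for every $S \in \Delta_\delta^d$, and since $\Delta_\delta^d$ contains at most $(\delta^{-1}+1)^d$ cubes, (\ref{boxupper}) gives $N_\delta(G_f) \leqslant 2(\delta^{-1}+1)^d + c\, d^{\beta/2}\,\delta^{\beta-1}(\delta^{-1}+1)^d$, so that $\overline{\dim}_\text{B} G_f \leqslant d+1-\beta$. This is the estimate behind the bounds $\overline{\dim}_\text{B} G_f \leqslant \alpha$ and (\ref{horlip}) for $f \in L_\alpha[0,1]^2$ noted in Section 1.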
\begin{lma} \label{upperbound} Let $f,g \in C[0,1]^d$. Then \[ \overline{\dim}_\text{\emph{B}} G_{f+g} \leqslant \max\{\overline{\dim}_\text{\emph{B}} G_{f}, \overline{\dim}_\text{\emph{B}} G_{g} \}. \] In particular, $C_\alpha[0,1]^d$ is a vector space. \end{lma}
\begin{proof} Let $s=\max\{\overline{\dim}_\text{B} G_{f}, \overline{\dim}_\text{B} G_{g} \}$ and let $\epsilon>0$. By (\ref{estimate}) there exists $\delta_0>0$ such that for all $\delta<\delta_0$ we have \[ \sum_{S \in \Delta_\delta^d} R_f(S) \leqslant \delta^{-\overline{\dim}_\text{B} G_{f}-\epsilon+1} \leqslant \delta^{1-s-\epsilon} \] and \[ \sum_{S \in \Delta_\delta^d} R_g(S) \leqslant \delta^{-\overline{\dim}_\text{B} G_{g}-\epsilon+1} \leqslant \delta^{1-s-\epsilon}. \] By considering the range of $f+g$ we have \[ \sum_{S \in \Delta_\delta^d} R_{f+g}(S) \leqslant \sum_{S \in \Delta_\delta^d} R_{f}(S)+ \sum_{S \in \Delta_\delta^d} R_{g}(S) \leqslant 2 \delta^{1-s-\epsilon} \] so $\overline{\dim}_\text{B} G_{f+g} \leqslant s+\epsilon$. Since this is true for all $\epsilon>0$ we conclude that $\overline{\dim}_\text{B} G_{f+g} \leqslant s$. \end{proof}
\begin{lma} \label{upper equals} Let $f,g \in C[0,1]^d$ and suppose $\overline{\dim}_\text{\emph{B}} G_{f}\neq \overline{\dim}_\text{\emph{B}} G_{g}$. Then \[ \overline{\dim}_\text{\emph{B}} G_{f+g} = \max\{\overline{\dim}_\text{\emph{B}} G_{f}, \overline{\dim}_\text{\emph{B}} G_{g} \}. \] \end{lma}
\begin{proof} Let $f,g \in C[0,1]^d$ and suppose, without loss of generality, that $\overline{\dim}_\text{B} G_{f}< \overline{\dim}_\text{B} G_{g}$. If $f+g = h$ where $\overline{\dim}_\text{B} G_{h} \neq \overline{\dim}_\text{B} G_{g}$, Lemma \ref{upperbound} gives that $\overline{\dim}_\text{B} G_{h} < \overline{\dim}_\text{B} G_{g}$. This contradicts Lemma \ref{upperbound} since \[ \overline{\dim}_\text{B} G_{h-f}=\overline{\dim}_\text{B} G_{g} > \max\{\overline{\dim}_\text{B} G_{h}, \overline{\dim}_\text{B} G_{-f} \}. \] \end{proof}
\begin{lma} \label{but 2} Let $f,g \in C[0,1]^d$. Then \[ \overline{\dim}_\text{\emph{B}} G_{f+\lambda g} = \max\{\overline{\dim}_\text{\emph{B}} G_{f}, \overline{\dim}_\text{\emph{B}} G_{g} \} \] for all $\lambda \in \mathbb{R}$ with the possible exceptions of $\lambda = 0$ and one other value of $\lambda$. \end{lma}
\begin{proof} Let $f, g \in C[0,1]^d$ and assume without loss of generality that \[ \max\{\overline{\dim}_\text{B} G_{f}, \overline{\dim}_\text{B} G_{g} \} = \overline{\dim}_\text{B} G_{g} = s. \] Suppose $\lambda \in \mathbb{R}$ is such that $f+\lambda g = h$ where $\overline{\dim}_\text{B} G_{h} \neq s$. It follows from Lemma \ref{upperbound} that $\overline{\dim}_\text{B} G_{h} < s$. Now let $\beta \in \mathbb{R} \setminus \{0,\lambda \}$. Then $f+\beta g = h+(\beta - \lambda) g$ and since $\overline{\dim}_\text{B} G_{h}<\overline{\dim}_\text{B} G_{g}$ we have by Lemma \ref{upper equals} that \[ \overline{\dim}_\text{B} G_{f+\beta g}=\max\{\overline{\dim}_\text{B} G_{h}, \overline{\dim}_\text{B} G_{(\beta-\lambda)g} \} = s. \] \end{proof}
We write $\mathcal{L}^1$ for Lebesgue measure on $\mathbb{R}$.
\begin{lma} \label{leq} Let $f,g \in C[0,1]^d$. Then \[ \underline{\dim}_\text{\emph{B}} G_{f+\lambda g} \geqslant \max \{ \underline{\dim}_\text{\emph{B}} G_{f}, \underline{\dim}_\text{\emph{B}} G_{g} \} \] for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. \end{lma}
\begin{proof} Let $f,g \in C[0,1]^d$, let $\epsilon>0$ and suppose $\max \{ \underline{\dim}_\text{B} G_{f}, \underline{\dim}_\text{B} G_{g} \} =\underline{\dim}_\text{B} G_{g}=s$. By (\ref{estimate}) there exists a $\delta_0>0$ such that for $\delta<\delta_0$ \begin{equation} \sum_{S \in \Delta_\delta^d} R_g(S) \geqslant \delta^{1-s+\epsilon}. \label{g} \end{equation}
Let $E \subset \mathbb{R}$ be any bounded Lebesgue measurable set and fix $\delta<\delta_0$. Note that, since \[ \int_E \lvert a-\lambda b\rvert\, d\lambda \geqslant \tfrac{1}{4} \,\mathcal{L}^1(E)^2 \,\lvert b \rvert, \] for all $a,b \in \mathbb{R}$, \begin{eqnarray} \int_E \sum_{S \in \Delta_\delta^d} R_{f+\lambda g}(S)\, d \lambda &\geqslant& \sum_{S \in \Delta_\delta^d} \int_E \lvert R_{f}(S)-\lambda R_{g}(S) \rvert \,d \lambda \nonumber\\ \nonumber\\ &\geqslant& \sum_{S \in \Delta_\delta^d}\tfrac{1}{4} \,\mathcal{L}^1(E)^2 \, R_{g}(S) \nonumber \\ \nonumber\\ &=& \tfrac{1}{4} \,\mathcal{L}^1(E)^2 \, \sum_{S \in \Delta_\delta^d} R_{g}(S) \nonumber \\ \nonumber\\ &\geqslant& \tfrac{1}{4} \, \mathcal{L}^1(E)^2 \, \delta^{1-s+\epsilon} \label{second} \end{eqnarray} using (\ref{g}). Let $n \in \mathbb{N}$ and \[ E_\delta^n = \Big\{\lambda \in [-n,n]: \sum_{S \in \Delta_\delta^d} R_{f+\lambda g}(S) \leqslant \delta^{1-s+2\epsilon} \Big\}. \] By (\ref{second}) we have \[
\mathcal{L}^1(E_\delta^n) \delta^{1-s+2\epsilon}\,\geqslant\, \int_{E_\delta^n}\sum_{S \in \Delta_\delta^d} R_{f+\lambda g}(S) \,d \lambda \, \geqslant \, \tfrac{1}{4} \, \mathcal{L}^1(E_\delta^n)^2 \, \delta^{1-s+\epsilon} \] so $\mathcal{L}^1(E_\delta^n) \leqslant 4 \delta^\epsilon.$ Choose $K \in \mathbb{N}$ such that $2^{-k}<\delta_0$ for all $k\geqslant K$. Then \[ \sum_{k \geqslant K} \mathcal{L}^1(E_{2^{-k}}^n) \leqslant 4 \sum_{k \geqslant K} 2^{-k\epsilon} < \infty \] so by the Borel-Cantelli Lemma, \[ \mathcal{L}^1 \Big(\bigcap_{M \in \mathbb{N} } \bigcup_{ k \geqslant M }E_{2^{-k}}^n \Big) =0, \] i.e., for $\mathcal{L}^1$-almost all $\lambda \in [-n,n]$, $\lambda \notin E_{2^{-k}}$ for sufficiently large $k$. It follows that for $\mathcal{L}^1$-almost all $\lambda \in [-n,n]$ there exists $\delta_\lambda>0$ such that for $\delta<\delta_\lambda$ \[ \sum_{S \in \Delta_\delta^d} R_{f+\lambda g}(S) > \delta^{1-s+2\epsilon} \] and hence \[ \underline{\dim}_\text{B} G_{f+\lambda g} \geqslant s -2\epsilon. \] Since this is true for arbitrarily small $\epsilon$ and since $\mathbb{R} = \cup_{n \in \mathbb{N}} [-n,n] $ the result follows. \end{proof}
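For the reader's convenience we briefly justify the elementary integral inequality used at the start of the above proof; this is a standard rearrangement argument and involves nothing beyond the notation already introduced. If $b \neq 0$, write $c=a/b$ and $m=\mathcal{L}^1(E)$. Then \[ \int_E \lvert a-\lambda b\rvert\, d\lambda \,=\, \lvert b \rvert \int_E \lvert \lambda - c\rvert\, d\lambda \,\geqslant\, \lvert b \rvert \int_{c-m/2}^{c+m/2} \lvert \lambda - c\rvert\, d\lambda \,=\, \tfrac{1}{4}\, m^2\, \lvert b\rvert, \] since replacing $E$ by the interval of measure $m$ centred at $c$ cannot increase the integral: the points of $E$ outside this interval are at distance at least $m/2$ from $c$, whereas the points of the interval not in $E$ are at distance at most $m/2$. The case $b=0$ is trivial.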
\begin{comment} \begin{lma} \label{lower eq} Let $f,g \in C[0,1]^d$. Then \[ \underline{\dim}_\text{\emph{B}} G_{f+\lambda g} = \max \{ \underline{\dim}_\text{\emph{B}} G_{f}, \underline{\dim}_\text{\emph{B}} G_{g} \} \] for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. \end{lma}
The proof I initially came up with for this Lemma was incorrect and after thinking about it for a while I no longer believe that it is true. We don't need it for our results though... which is good... \\ \\
\begin{proof} Let $f,g \in C[0,1]^d$ and let \begin{eqnarray*} \Lambda &=& \big\{\lambda \in \mathbb{R} : \underline{\dim}_\text{B} G_{f+\lambda g} \neq \max \{ \underline{\dim}_\text{B} G_{f}, \underline{\dim}_\text{B} G_{g} \} \big\} \\ \\ &=& \big\{\lambda \in \mathbb{R} : \underline{\dim}_\text{B} G_{f+\lambda g} < \max \{ \underline{\dim}_\text{B} G_{f}, \underline{\dim}_\text{B} G_{g} \} \big\} \\ \\ &\hspace{1mm}& \qquad \qquad \cup \, \big\{\lambda \in \mathbb{R} : \underline{\dim}_\text{B} G_{f+\lambda g} > \max \{ \underline{\dim}_\text{B} G_{f}, \underline{\dim}_\text{B} G_{g} \} \big\} \\ \\ &\,=:& \Lambda_- \cup \Lambda_+ \end{eqnarray*} Writing $f=(f+\lambda g) - \lambda g$ we have \[ \Lambda_+ \subseteq \big\{\lambda \in \mathbb{R} : \underline{\dim}_\text{B} G_{f} < \max \{ \underline{\dim}_\text{B} G_{f+\lambda g}, \underline{\dim}_\text{B} G_{-\lambda g} \} \big\} \] and it follows from Lemma \ref{leq} that \[ \mathcal{L}^1 (\Lambda) = \mathcal{L}^1 (\Lambda_-)+\mathcal{L}^1 (\Lambda_+) = 0. \] \end{proof} \end{comment}
Thus we have proved the following theorem.
\begin{thm} \label{sum} Let $f,g \in C[0,1]^d$. Then \[ \max \{ \underline{\dim}_\text{\emph{B}} G_{f}, \underline{\dim}_\text{\emph{B}} G_{g} \} \, \leqslant \, \underline{\dim}_\text{\emph{B}} G_{f+\lambda g} \, \leqslant \, \overline{\dim}_\text{\emph{B}} G_{f+\lambda g} \, = \, \max \{ \overline{\dim}_\text{\emph{B}} G_{f}, \overline{\dim}_\text{\emph{B}} G_{g} \} \] for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. \end{thm}
\begin{proof} This combines Lemma \ref{but 2} and Lemma \ref{leq}. \end{proof}
It would clearly be desirable to have analogous results for the Hausdorff dimension and packing dimension of the graphs of sums of functions. However, this is not possible for Hausdorff dimension. In particular, the estimate \begin{equation} \label{question} \dim_\text{H} G_{f+g} \leqslant \max\{\dim_\text{H} G_{f}, \dim_\text{H} G_{g} \} \end{equation} fails; indeed every $f \in C[0,1]$ can be written as the sum of two functions with graphs of Hausdorff dimension 1. Mauldin and Williams \cite{graphsums} showed this by an elegant application of the Baire category theorem. It is well known that the set $A=\{f \in C[0,1] : \dim_\text{H} G_f = 1 \}$ is co-meagre and thus, for any $f \in C[0,1]$, the set $A \cap (A+f)$ is co-meagre and in particular non-empty. Hence we may choose $f_1 \in A \cap (A+f)$, so that $f_1=f_2+f$ where both $f_1,f_2 \in A$. Thus if $f \notin A$ then $f=f_1-f_2$ satisfies \[ \dim_\text{H} G_{f} = \dim_\text{H} G_{f_1-f_2} > \max\{\dim_\text{H} G_{f_1}, \dim_\text{H} G_{-f_2} \} = 1. \] It was shown in \cite{bairefunctions} that the set of functions with lower box dimension equal to 1 is also co-meagre, so the above argument can be modified to obtain a slightly more general result concerning lower box dimension. In particular, every $f \in C[0,1]$ has a decomposition $f=f_1+f_2$ where $f_1, f_2 \in C[0,1]$ have lower box dimension equal to 1. Such a decomposition has been constructed explicitly in \cite{decomp}. \\ \\ We do not know whether the analogue of (\ref{question}) holds for packing dimension, so we ask: is it true that for all $f, g \in C[0,1]$ we have \begin{equation} \label{packing} \dim_\text{P} G_{f+g} \leqslant \max\{\dim_\text{P} G_{f}, \dim_\text{P} G_{g} \}? \end{equation}
\section{The space $(C_\alpha[0,1]^d, d_{\alpha,d})$} \label{space}
To consider prevalent subsets of \[ C_\alpha[0,1]^d = \{f \in C[0,1]^d : \overline{\dim}_\text{B} G_f \leqslant \alpha \} \qquad \text{ for } \alpha \in [d,d+1] \]
we need to show that $C_\alpha[0,1]^d$ is a completely metrizable topological vector space. It follows from Lemma \ref{upperbound} that $C_\alpha[0,1]^d$ is a vector space and in this section we will construct a suitable metric. \\ \\ For $\alpha \geqslant d$ define \[
V_\alpha[0,1]^d = \{f \in C[0,1]^d : \| f \|_{\alpha,d} < \infty \} \] where \[
\| f \|_{\alpha,d} = \| f \|_\infty + \sup_{0<\delta \leqslant 1} \frac{\sum_{S \in \Delta_\delta^d} R_{f}(S) }{\delta^{1-\alpha}}. \]
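To illustrate the definition (using the conventions for $\Delta_\delta^d$ and $R_f$ fixed earlier, so that $\Delta_\delta^d$ consists of the order of $\delta^{-d}$ cubes of side $\delta$), suppose that $f$ is Lipschitz with constant $L$. Then $R_f(S) \leqslant L \sqrt{d}\, \delta$ for every $S \in \Delta_\delta^d$, so for $0<\delta\leqslant 1$ and any $\alpha \geqslant d$ \[ \sum_{S \in \Delta_\delta^d} R_{f}(S) \,\leqslant\, c_d \, L\, \delta^{1-d} \,\leqslant\, c_d\, L \, \delta^{1-\alpha}, \] where $c_d$ is a constant depending only on $d$. Hence every Lipschitz function belongs to $V_\alpha[0,1]^d$ for every $\alpha \geqslant d$, as one would expect since its graph has box dimension $d$.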
It is easy to see that $(V_\alpha[0,1]^d, \| \cdot \|_{\alpha,d})$ is a normed space. (See \cite{massopust} for the relationship between these spaces and Besov spaces.) \begin{lma} \label{comp}
Let $\alpha \in [d,d+1]$. Then $(V_\alpha[0,1]^d, \| \cdot \|_{\alpha,d})$ is a complete normed space. \end{lma}
\begin{proof}
Let $(f_n)_n$ be a Cauchy sequence in $(V_\alpha[0,1]^d, \| \cdot \|_{\alpha,d})$. It follows that $(f_n)_n$ is Cauchy in $\| \cdot \|_\infty$ and so converges uniformly to some $f \in C[0,1]^d$. By uniform convergence, \[
\| f \|_\infty + \sup_{\delta_0 <\delta \leqslant 1} \frac{\sum_{S \in \Delta_\delta^d} R_{f}(S) }{\delta^{1-\alpha}} \leqslant \limsup_{n \to \infty}\|f_n\|_{\alpha,d}, \] for all $0<\delta_0<1$, so
$f \in V_\alpha[0,1]^d$ with $\|f\|_{\alpha,d} \leqslant \limsup_{n \to \infty}\|f_n\|_{\alpha,d}$ . In the same way, for each $m$,
we see that $\|f-f_m\|_{\alpha,d} \leqslant \limsup_{n \to \infty}\|f_n-f_m\|_{\alpha,d}$, so
$(f_n)_n$ converges to $f$ in $\|\cdot\|_{\alpha,d}$.
\begin{comment}
Let $\alpha \in [2,3)$ and let $(f_n)_n$ be a Cauchy sequence in $(V_\alpha[0,1]^2, \| \cdot \|_\alpha)$. By the definition of $\| \cdot \|_\alpha$ we have that $(f_n)_n$ is a Cauchy sequence in $(C[0,1]^2, \| \cdot \|_\infty)$. Hence, there exists a function $f \in C[0,1]^2$ such that $\| f_n-f \|_\infty \to 0$. We will show that $\| f_n-f \|_\alpha \to 0$ and that $f \in V_\alpha[0,1]^2$. \\ \\
Since $\| f_n-f_m \|_\alpha \to 0$ as $m,n \to \infty$ we may find a `rapidly converging subsequence' $(f_{n_k})_k$ by choosing $N_k$ such that if $n,m \geqslant N_k$ then \[
\| f_n-f_m \|_\alpha \leqslant 2^{-k}. \] Since uniform convergence implies pointwise convergence we may write $f$ as \[ f= f_{N_1}+ \sum_{k=1}^\infty \Big( f_{N_{k+1}}- f_{N_k} \Big) \] and it follows that for each $n \in \mathbb{N}$ \begin{eqnarray*}
\| f-f_n \|_\alpha &=& \| f-f_{N_n}+f_{N_n}-f_n \|_\alpha\\ \\
&\leqslant& \|f-f_{N_n} \|_\alpha + \| f_{N_n}-f_n\|_\alpha\\ \\
&=& \Bigg\|f_{N_1}+\sum_{k=1}^\infty \Big( f_{N_{k+1}}- f_{N_k} \Big)-f_{N_1} - \sum_{k=1}^{n-1} \Big( f_{N_{k+1}}- f_{N_k} \Big) \Bigg\|_\alpha+\| f_{N_n}-f_n\|_\alpha\\ \\
&=& \Bigg\|\sum_{k=n}^{\infty} \Big( f_{N_{k+1}}- f_{N_k} \Big) \Bigg\|_\alpha +\| f_{N_n}-f_n\|_\alpha\\ \\
&\leqslant& \sum_{k=n}^{\infty} \| f_{N_{k+1}}- f_{N_k} \|_\alpha +\| f_{N_n}-f_n\|_\alpha\\ \\
&\leqslant& \sum_{k=n}^{\infty}2^{-k} +\| f_{N_n}-f_n\|_\alpha\\ \\
&=& 2^{1-n} +\| f_{N_n}-f_n\|_\alpha\\ \\ &\to& 0 \end{eqnarray*}
as $n \to \infty$ since $(f_n)_n$ is Cauchy. Finally, let $n \in \mathbb{N}$ be such that $\| f-f_n \|_\alpha < \infty$ (which we can clearly do) and we have \[
\| f\|_\alpha = \| f-f_n +f_n\|_\alpha \leqslant \| f-f_n\|_\alpha + \|f_n\|_\alpha< \infty \]
and so $f \in V_\alpha[0,1]^2$ and $(V_\alpha[0,1]^2, \| \cdot \|_\alpha)$ is complete. \end{comment}
\end{proof}
\begin{lma} \label{inter}
Let $(X_k, \| \cdot \|_k)_k$ be a decreasing sequence of complete normed vector spaces, i.e., for all $k \in \mathbb{N}$ we have $X_k \geqslant X_{k+1}$ and for $x \in X_{k+1}$ we have $\|x\|_{k+1} \geqslant \|x\|_k$. Then \[ \Big(\bigcap_{k \in \mathbb{N}} X_k , d \Big) \] is a complete metric space, where the metric $d$ is defined by \[
d(x,y) = \sum_{k=1}^\infty \min\big\{ 2^{-k}, \|x-y\|_k\big\}. \] \end{lma}
\begin{proof}
It is clear that $d$ is defined for every pair $x,y \in \cap_{k \in \mathbb{N}} X_k$ and that it is a metric. To show completeness let $(x_n)_n$ be a Cauchy sequence in $(\cap_{k \in \mathbb{N}} X_k , d )$. Then for each $k$, $(x_n)_n$ is Cauchy in $(X_k , \| \cdot \|_k)$. Since $(X_k , \| \cdot \|_k)$ is complete there exists $x^{(k)} \in X_k$ such that $
\|x_n-x^{(k)}\|_k \to 0, $
but since $\|x\|_k \geqslant \|x\|_{j}$ for $j<k$ we have that $
\|x_n-x^{(k)}\|_{j} \to 0 $ for all $j<k$. Thus $x^{(k)}$ is independent of $k$ and we may simply refer to it as $x$. It follows that $x \in \cap_{k \in \mathbb{N}} X_k$ and $
\|x_n-x\|_k \to 0 $ for all $k \in \mathbb{N}$, so $d( x_n, x ) \to 0$. \end{proof}
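We note, as a straightforward consequence of the definition of $d$, that a sequence $(x_n)_n$ in $\bigcap_{k \in \mathbb{N}} X_k$ satisfies $d(x_n,x) \to 0$ if and only if $\|x_n-x\|_k \to 0$ for every $k \in \mathbb{N}$: one direction is immediate since each summand $\min\{2^{-k}, \|x_n-x\|_k\}$ is dominated by $d(x_n,x)$ (and once this is smaller than $2^{-k}$ it equals $\|x_n-x\|_k$), while the other follows since the series defining $d$ is dominated by $\sum_{k} 2^{-k}$. This is the form in which the metric is used below, for instance when verifying that the vector space operations are continuous.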
Note that whilst the metric $d$ is translation invariant, $d(0,x)$ does not define a norm, since it fails to be homogeneous under scalar multiplication.
\begin{lma} \label{dec} Let $\alpha \geqslant d$. For all $k \in \mathbb{N}$ and $f \in C[0,1]^d$ we have \[
\|f \|_{\alpha+\frac{1}{k+1},d} \geqslant \|f \|_{\alpha+\frac{1}{k},d} \] and consequently \[ V_{\alpha+\frac{1}{k}}[0,1]^d \geqslant V_{\alpha+\frac{1}{k+1}}[0,1]^d. \] \end{lma}
\begin{proof} Let $\alpha \geqslant d$, let $k \in \mathbb{N}$ and let $f \in C[0,1]^d$. Then \[
\|f \|_{\alpha+\frac{1}{k},d} = \| f \|_\infty + \sup_{0<\delta \leqslant 1} \frac{\sum_{S \in \Delta_\delta^d} R_{f}(S) }{\delta^{1-\alpha-\frac{1}{k}}} \leqslant \| f \|_\infty + \sup_{0<\delta \leqslant 1} \frac{\sum_{S \in \Delta_\delta^d} R_{f}(S) }{\delta^{1-\alpha-\frac{1}{k+1}}}= \|f \|_{\alpha+\frac{1}{k+1},d}. \]
\end{proof}
\begin{comment} It follows from this and the definition of $V_{\alpha}^2$ that \[ V_{\alpha+\frac{1}{k}}^2 \geqslant V_{\alpha+\frac{1}{k+1}}^2. \]
Combining Lemmas \ref{comp}-\ref{dec} we have that \[ \Big(\bigcap_{k \in \mathbb{N}} V_{\alpha+\frac{1}{k}}[0,1]^d , d_{\alpha,d} \Big) \] is a complete metric space where the metric $d_{\alpha,d}$ is given by \[
d_{\alpha,d}(f,g) = \sum_{k=1}^\infty \min\big\{ 2^{-k}, \|f-g\|_{\alpha+\frac{1}{k}}\big\}. \] \end{comment}
\begin{prop} \label{met} Let $\alpha \in [d,d+1)$. Then \[
\{f \in C[0,1]^d : \overline{\dim}_\text{\emph{B}} G_{f} \leqslant \alpha\} \equiv C_\alpha[0,1]^d = \bigcap_{k \in \mathbb{N}} V_{\alpha+\frac{1}{k}}[0,1]^d. \] Moreover $(C_\alpha[0,1]^d,d_{\alpha,d} )$ is a complete metric space where \[
d_{\alpha,d}(f,g) = \sum_{k=1}^\infty \min\big\{ 2^{-k}, \|f-g\|_{\alpha+\frac{1}{k},d}\big\}. \] \end{prop}
\begin{proof} It follows from Lemmas \ref{comp}-\ref{dec} that $\big(\bigcap_{k \in \mathbb{N}} V_{\alpha+\frac{1}{k}}[0,1]^d , d_{\alpha,d} \big)$ is a complete metric space. \\ \\
Let $f \in C_\alpha[0,1]^d$ so that $\overline{\dim}_\text{B}G_f \leqslant \alpha$, and so by (\ref{estimate}), for each $k \in \mathbb{N}$, there exists $\delta_0>0$ such that for $\delta<\delta_0$ \[ \delta^{-1} \sum_{S \in \Delta_\delta^d} R_{f}(S) \leqslant N_\delta(G_f) \leqslant \delta^{-\alpha-\frac{1}{k}}. \] It follows that
$\|f \|_{\alpha+\frac{1}{k},d} < \infty$ so $f \in V_{\alpha+\frac{1}{k}}[0,1]^d$ for each $k$.
For the opposite inclusion, let $f \in \bigcap_{k \in \mathbb{N}} V_{\alpha+\frac{1}{k}}[0,1]^d$. Hence, for all $k \in \mathbb{N}$, \[ \sup_{0<\delta \leqslant 1} \frac{\sum_{S \in \Delta_\delta^d} R_{f}(S) }{\delta^{1-\alpha-\frac{1}{k}}}< \infty, \] and hence for all $\delta \in (0,1]$ \[ \delta^{-1} \sum_{S \in \Delta_\delta^d} R_{f}(S) \leqslant C_k \delta^{-\alpha-\frac{1}{k}} \] where $C_k$ depends only on $f$ and $k$. Taking logarithms and using (\ref{estimate}), $\overline{\dim}_\text{B} G_f \leqslant \alpha+\frac{1}{k}$ for all $k$, so $\overline{\dim}_\text{B} G_f \leqslant \alpha$.
\end{proof}
It is easy to see that the vector space operations are continuous with respect to the topology induced by $d_{\alpha,d}$ and therefore $(C_\alpha[0,1]^d, d_{\alpha,d})$ is a completely metrizable topological vector space. Thus we are able to consider prevalent subsets of $C_\alpha[0,1]^d$.
\section{Proofs of Theorems \ref{01}, \ref{main} and \ref{lipthm}} \label{proof1}
In this section we will prove Theorem \ref{main}, which provides bounds for the box dimensions of the horizon of a prevalent surface in $C_\alpha[0,1]^2$, and also the Lipschitz variant, Theorem \ref{lipthm}. We will also sketch the proof of Theorem \ref{01}, which is very similar to, but simpler than, the proof of Theorem \ref{main}. We begin with a technical measurability lemma. \begin{lma} \label{bor}\hspace{1mm}
\begin{itemize} \item[(1)] For all $\alpha \in [2,3)$, the set $F_\alpha[0,1]^2$ is a Borel subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$; \item[(2)] $F_3[0,1]^2$ is a Borel subset of $(C[0,1]^2, d_\infty)$. \end{itemize} \end{lma}
\begin{proof} We will prove part (1); part (2) is similar. \\ \\ Let $\alpha \in [2,3)$. We have \begin{eqnarray*} F_\alpha[0,1]^2
&=& \{f \in C[0,1]^2 : \dim_\text{B} G_f = \alpha\} \cap \{f \in C[0,1]^2 : \alpha-1 \leqslant \underline{\dim}_\text{B} G_H(f) \leqslant \overline{\dim}_\text{B} G_H(f) \leqslant 2\} \\ \\ &\equiv& F_{1} \cap F_2 , \end{eqnarray*} say. We will first consider $F_1$. By (\ref{estimate}) we have \begin{eqnarray*} F_1 &=& \bigcap_{\substack{q \in \mathbb{Q}\\ q>0}} \bigcup_{N \in \mathbb{N}} \bigcap_{n \geqslant N} \Bigg\{ f \in C[0,1]^2 : \,\,\,\,2^{(\alpha-q-1)n} < \sum_{\,\,\,\,\,S \in \Delta_{2^{-n}}^2} R_f(S) \,\,\,\,< 2^{(\alpha+q-1)n}\Bigg\} \end{eqnarray*} and it is clear that the set in curly brackets is open in $d_\infty$ for all $q, N$ and $n$. Since the metric $d_{\alpha,2}$ is stronger than $d_\infty$, this set is also open in $d_{\alpha,2}$, so $F_1$ is a Borel subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$. \\ \\ Similarly for $F_2$, \begin{eqnarray*} F_2 &=& \bigcap_{\substack{q \in \mathbb{Q}\\ q>0}} \bigcup_{N \in \mathbb{N}} \bigcap_{n \geqslant N} \Bigg\{ f \in C[0,1]^2 : \,\,\,\,2^{(\alpha-1-q-1)n} < \sum_{\,\,\,\,\,S \in \Delta_{2^{-n}}^1} R_{H(f)}(S) \,\,\,\,< 2^{(2+q-1)n}\Bigg\}. \end{eqnarray*} If $g \in B_{d_\infty}(f,r)$ then $H(g) \in B_{d_\infty}(H(f),r)$ so the set in curly brackets is open in $d_\infty$ and thus in $d_{\alpha,2}$. Hence $F_2$ is a Borel subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$, and consequently $F_{\alpha}[0,1]^2$ is Borel. \end{proof}
We will now turn to the proof of Theorem \ref{main}. We will begin by constructing a probe. Let $\alpha \in [2,3]$ and let $\psi_\alpha \in C[0,1]$ be a non-constant function satisfying \[ \underline{\dim}_\text{B} G_{\psi_\alpha} =\overline{\dim}_\text{B} G_{\psi_\alpha}= \alpha-1. \] Define $\Psi_\alpha \in C[0,1]^2$ by \[ \Psi_\alpha(x,y)=\psi_\alpha(x) \] for $(x,y) \in [0,1]^2$. It is clear that \[ \underline{\dim}_\text{B} G_{\Psi_\alpha} =\overline{\dim}_\text{B} G_{\Psi_\alpha} = \dim_\text{B} G_{\psi_\alpha} +1 = \alpha \] and hence $\Psi_\alpha \in C_\alpha[0,1]^2$. Also note that $H(\Psi_\alpha)=\psi_\alpha$.
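We remark that functions $\psi_\alpha$ with this property certainly exist; for definiteness we record one standard choice, which is purely illustrative and is not used elsewhere in the paper. For $\alpha = 2$ any non-constant Lipschitz function will do, while for $\alpha \in (2,3)$ one may take a Weierstrass-type function \[ \psi_\alpha(x) \,=\, \sum_{k=1}^\infty \lambda^{(\alpha-3)k} \sin\big(\lambda^k x\big), \] whose graph is well known to have upper and lower box dimension equal to $\alpha-1$ provided $\lambda>1$ is chosen sufficiently large. Continuous functions whose graphs have box dimension exactly $2$, as required when $\alpha=3$, also exist, although we do not write one down here.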
\begin{lma} \label{key} Let $\alpha \in [2,3]$ and let $f \in C_\alpha[0,1]^2$. For $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$ we have \begin{itemize} \item[(1)] $\dim_\text{\emph{B}} G_{f+\lambda \Psi_\alpha} = \dim_\text{\emph{B}} G_{\Psi_\alpha} = \alpha$; \item[(2)] $\alpha - 1 \leqslant \underline{\dim}_\text{\emph{B}} G_{H(f+\lambda \Psi_\alpha)} \leqslant \overline{\dim}_\text{\emph{B}} G_{H(f+\lambda \Psi_\alpha)} \leqslant 2$. \end{itemize} \end{lma}
\begin{proof} Let $\alpha \in [2,3]$ and let $f \in C_\alpha[0,1]^2$.\\ \\ (1) This follows immediately from Theorem \ref{sum}. \\ \\ (2) Since $\Psi_\alpha(x,y)$ is independent of $y$, \begin{eqnarray*} H(f+\lambda \Psi_\alpha)(x) = \sup_{y \in [0,1]} (f+\lambda \Psi_\alpha)(x,y) &=& \sup_{y \in [0,1]} \Big(f(x,y)+\lambda \Psi_\alpha(x,y) \Big) \\ \\ &=&\sup_{y \in [0,1]} \Big(f(x,y)\Big)+\lambda H(\Psi_\alpha)(x) \\ \\ &=&H(f)(x)+\lambda H(\Psi_\alpha)(x). \end{eqnarray*} Applying Theorem \ref{sum} for $H(f), H(\Psi_\alpha) \in C[0,1]$ gives \[ \alpha - 1 = \underline{\dim}_\text{B} G_{H(\Psi_\alpha)} \leqslant \underline{\dim}_\text{B} G_{H(f+\lambda \Psi_\alpha)} \leqslant \overline{\dim}_\text{B} G_{H(f+\lambda \Psi_\alpha)} \leqslant 2 \] for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. \end{proof}
Let $\alpha \in [2,3]$ and let $P_\alpha =\{\lambda \Psi_\alpha : \lambda \in \mathbb{R} \}\subset C[0,1]^2$. Define $\pi_\alpha: P_\alpha \to \mathbb{R}$ by \[ \pi_\alpha(\lambda \Psi_\alpha) = \lambda \] and a measure $\mathcal{L}_{P_\alpha}$ on $P_\alpha$ by \[ \mathcal{L}_{P_\alpha} = \mathcal{L}^1 \circ \pi_\alpha. \]
\begin{lma}\label{probe1}
$P_\alpha$ is a probe for $F_\alpha[0,1]^2$. In particular, for all $f \in C_\alpha[0,1]^2$, \[ \mathcal{L}_{P_\alpha} \Big( C_\alpha[0,1]^2 \setminus (F_\alpha[0,1]^2+f) \Big)=0. \] \end{lma}
\begin{proof}
Let $f \in C_\alpha[0,1]^2$. Then \begin{eqnarray*} \mathcal{L}_{P_\alpha} \Big( C_\alpha[0,1]^2 \setminus (F_\alpha[0,1]^2+f) \Big) &=& \mathcal{L}_{P_\alpha} \Big(P_\alpha \setminus (F_\alpha[0,1]^2+f) \Big) \\ \\ &=& (\mathcal{L}^1 \circ \pi_\alpha ) \Big(\lambda \Psi_\alpha \in P_\alpha : \lambda \Psi_\alpha - f \notin F_\alpha[0,1]^2 \Big) \\ \\ &=& \mathcal{L}^1 \Big(\lambda \in \mathbb{R} : \lambda \Psi_\alpha - f \notin F_\alpha[0,1]^2 \Big) \\ \\ &=& 0 \end{eqnarray*} by Lemma \ref{key}. \end{proof}
Lemma \ref{probe1} combined with Lemma \ref{bor}(1-2) gives Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{lipthm}]\quad \\ This is very similar to the proof of Theorem \ref{main}. We note that for $f \in L_\alpha[0,1]^2$, inequality (\ref{horlip}) allows us to tighten Lemma \ref{key} to the following statement: For $\alpha \in [2,3]$ and $f \in L_\alpha[0,1]^2$, $$\underline{\dim}_\text{B} G_{H(f+\lambda \Psi_\alpha)}= \overline{\dim}_\text{B} G_{H(f+\lambda \Psi_\alpha)} =\alpha -1$$ for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. With this equality, the remainder of the proof is virtually identical to that of Theorem \ref{main}. \end{proof}
\begin{proof}[Proof of Theorem \ref{01}]\quad \\ Again, this is very similar to the proof of Theorem \ref{main}. For $\alpha \in [d,d+1]$ a one dimensional probe is constructed for $D_\alpha[0,1]^d$ using a function $\Psi_\alpha\in C_\alpha[0,1]^d$ with $\dim_\text{B} G_{\Psi_\alpha} = \alpha$. It then follows from Theorem \ref{sum} that for all $f \in C_\alpha[0,1]^d$, \[ \dim_\text{B} G_{f+\lambda \Psi_\alpha} = \dim_\text{B} G_{\Psi_\alpha} = \alpha \] for $\mathcal{L}^1$-almost all $\lambda \in \mathbb{R}$. The rest of the proof is straightforward. \end{proof}
\section{Proof of Theorem \ref{tight} } \label{proof2}
In this section we will prove Theorem \ref{tight}, which shows that the bounds obtained in Theorem \ref{main} are as sharp as possible. \\ \\ The key tools will be certain classes of functions in $C[0,1]^2$ which we will call `forcers' and `modifiers'. We will first show that such functions exist by explicit construction. We prove, given a compact subset $K \subset C_\alpha[0,1]^2$ and a point $y_0 \in [0,1]$, the existence of a `forcer', $F_{K,y_0} \in C_2[0,1]^2$, which `forces' the horizon of $F_{K,y_0}+f$ to lie along the line $[0,1] \times \{y_0\}$ for all $f \in K$. Furthermore, we prove, given $g \in C[0,1]$ and a point $y_0 \in [0,1]$, the existence of a `modifier', $M_{g,y_0} \in C_2[0,1]^2$, such that \[ H(F_{K,y_0}+M_{g,y_0}) = g. \]
\begin{lma}[Forcers] Let $K \subset C_\alpha[0,1]^2$ be a compact set and let $y_0 \in [0,1]$. There exists $F_{K,y_0} \in C_2[0,1]^2$ such that \begin{itemize} \item[(1)] For all $f \in K$ \[ f(x,y_0) + F_{K,y_0}(x,y_0) \geqslant f(x,y) + F_{K,y_0}(x,y) \] for all $y \in[0,1]$;
\item[(2)] $F_{K,y_0}(x,y_0)=0$ for all $x \in [0,1]$. \end{itemize}
\end{lma}
\begin{proof}
Define a map
$\phi:(K,d_{\alpha,2}) \to (C[0,1]^2,\| \cdot \|_\infty)$ by \[ \phi(f)(x,y) =f(x,y) - f(x,y_0). \]
It is clear that $\phi$ is continuous and thus $\phi(K)$ is compact. In particular, $g$ defined by $g(x,y)=\sup_{f \in \phi(K)} f(x,y)$ is continuous on $[0,1]^2$, from which it follows that $g^*$ defined by $g^*(y) = \sup_{x \in [0,1]} g(x,y)$ is continuous. Now let $F \in C[0,1]$ be such that \begin{itemize}
\item[(i)] $\overline{\dim}_\text{B} G_F = 1$;
\item[(ii)] $F(y) \geqslant g^*(y)$ for all $y \in [0,1]$;
\item[(iii)] $F(y_0) = g^*(y_0) = 0$;
\end{itemize}
and define the `forcer' $F_{K, y_0} \in C_2[0,1]^2$ by \[ F_{K, y_0} (x,y) = -F(y). \] It is clear that a function $F$ satisfying (i)--(iii) exists, for example, $$F(y) = \left\{ \begin{array}{ll} \max_{y \leqslant w \leqslant y_0} g^*(w) & (0\leqslant y \leqslant y_0) \\ \max_{y_0 \leqslant w \leqslant y} g^*(w) & (y_0 \leqslant y \leqslant 1) \end{array} \right. . $$ Note that this function is monotonic on either side of $y_0$, and the box dimension of the graph of a monotonic function is 1. \\ \\ Clearly (i) implies that $F_{K, y_0} \in C_2[0,1]^2$ and (iii) implies that $F_{K, y_0}$ satisfies (2). \\ \\ To check (1), let $f \in K$ and let $x,y \in [0,1]$. Then $$ f(x,y) - f(x,y_0) \leqslant g(x,y) \leqslant F(y) = -F_{K, y_0} (x,y) = F_{K, y_0}(x,y_0) - F_{K, y_0} (x,y) $$ as required.
\end{proof}
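For completeness, we recall the standard reason for the final assertion above. If $h \in C[0,1]$ is monotonic on each of finitely many subintervals, then for any $\delta$ the ranges of $h$ over the intervals of $\Delta_\delta^1$ telescope on each piece of monotonicity, so $\sum_{S \in \Delta_\delta^1} R_h(S)$ is bounded above by a constant independent of $\delta$. Hence $G_h$ may be covered by a number of $\delta$-squares comparable to $\delta^{-1}$, and so $\overline{\dim}_\text{B} G_h \leqslant 1$; the reverse inequality is trivial. In particular the function $F$ displayed above satisfies (i).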
We now construct `modifiers'; the crucial point is that we can construct a surface with box dimension equal to 2, the least possible value, but with a prescribed horizon, which may be highly irregular.
\begin{lma}[Modifiers] Let $g \in C[0,1]$ and let $y_0 \in [0,1]$. Then there exists $M_{g,y_0} \in C[0,1]^2$ such that \begin{itemize} \item[(1)] $M_{g,y_0}(x, y_0) = g(x)$ for all $x \in[0,1]$; \item[(2)] $M_{g,y_0}(x, y) \leqslant M_{g,y_0}(x, y_0)$ for all $y \in [0,1]$; \item[(3)] $\dim_{\text{\emph{B}}} G_{M_{g,y_0}} = 2$. \end{itemize}
In particular, for all $g \in C[0,1]$ and all $y_0 \in [0,1]$ we have that $M_{g,y_0} \in C_\alpha[0,1]^2$ for all $\alpha \in [2,3].$ \end{lma}
\begin{proof} Let $g \in C[0,1]$ and without loss of generality assume that $y_0 = 0$; the proof can easily be adapted for arbitrary $y_0$.\\ \\ Let $(p_k)_k$ be a sequence of polynomials such that \begin{itemize} \item[(i)] $p_1 \leqslant p_2 \leqslant p_3 \leqslant \dots$; \item[(ii)] $p_k \nearrow g$; \item[(iii)] $\lvert p_k(x_1) - p_k(x_2) \rvert \leqslant 2^k \lvert x_1- x_2 \rvert$ for all $k$ and for all $x_1, x_2 \in [0,1]$. \end{itemize}
To achieve this, the Weierstrass approximation theorem allows us to choose a sequence satisfying (i) and (ii), and we may then slow this sequence down, repeating terms for longer and longer runs where necessary, so that (iii) is also satisfied. \\ \\ Let \[ D_k = [0,1] \times (2^{-k}, 2^{-k+1}) \] so that we have the decomposition $ [0,1]^2 = \bigcup_{k=1}^{\infty} \overline{D_k}, $ and let $q:[0,1] \to [0,1]$ be defined by \[ q(x) = \left\{ \begin{array}{c l} 0 & x =0 \\ 2^k x -1 & x \in [2^{-k}, 2^{-k+1}) \\ 1 & x=1 \end{array}. \right. \] Now define $M_{g,0}: [0,1]^2 \to \mathbb{R}$ by \[ M_{g,0}(x,y) = \left\{ \begin{array}{c l} g(x) & y=0 \\ q(y) p_{k-1}(x) + (1-q(y))p_{k}(x) & y \in [2^{-k}, 2^{-k+1}) \text{ for some } k \in \mathbb{N} \end{array} \right. \] It is clear that $M_{g,0}$ satisfies conditions (1) and (2) of being a `modifier'. It remains to show that $\dim_{\text{B}} G_{M_{g,0}} = 2$. \\ \\ Let $n \in \mathbb{N}$. By (\ref{boxupper}) we have \begin{eqnarray} N_{2^{-n}}(G_{M_{g,0}}) &\leqslant& 2(2^n+1)^2 +2^{n}\sum_{S \in \Delta_{2^{-n}}^2} R_{M_{g,0}}(S) \nonumber \\ \nonumber \\ &=& 2(2^n+1)^2+2^{n}\sum_{k=1}^{n} \quad \sum_{\substack{S \in \Delta_{2^{-n}}^2\\ \\ S \cap D_k \neq \emptyset}} R_{M_{g,0}}(S) \quad + \quad 2^{n} \sum_{\substack{S \in \Delta_{2^{-n}}^2\\ \\ \forall k = 1,\dots, n \,: \,S \cap D_k = \emptyset}} R_{M_{g,0}}(S) \nonumber \\ \nonumber \\ &\leqslant& 2(2^n+1)^2+2^{n}\sum_{k=1}^{n} \quad \sum_{\substack{S \in \Delta_{2^{-n}}^2\\ \\
S \cap D_k \neq \emptyset}} R_{M_{g,0}}(S) \quad + \quad 2^{n} (2^n+1) 2\|M_{g,0}\|_\infty \nonumber \\ \nonumber \\ &=& 2^{n}\sum_{k=1}^{n} \quad \sum_{\substack{S \in \Delta_{2^{-n}}^2\\ \\ S \cap D_k \neq \emptyset}} R_{M_{g,0}}(S) \quad + \quad O \Big( (2^{n})^2 \Big) \label{rangeest} \end{eqnarray} Let $S=S_x \times S_y \in \Delta_{2^{-n}}^2$ be such that $S \cap D_k \neq \emptyset$. Then
\begin{eqnarray*} R_{M_{g,0}}(S) &\leqslant& R_{q p_{k-1}}(S) + R_{(1-q)p_{k}}(S) \\ \\ &\leqslant& \Bigg( \sup_{y \in S_y} \sup_{x_1,x_2 \in S_x} \Big\lvert q(y) \big(p_{k-1}(x_1) - p_{k-1}(x_2) \big) \Big\rvert + \sup_{x \in S_x} \sup_{y_1,y_2 \in S_y} \Big\lvert p_{k-1}(x) \big(q(y_1) - q(y_2) \big)\Big\rvert \Bigg) \\ \\ &\quad& \quad + \Bigg( \sup_{y \in S_y} \sup_{x_1,x_2 \in S_x} \Big\lvert (1-q(y)) \big(p_{k}(x_1) - p_{k}(x_2) \big) \Big\rvert \\ \\ &\qquad& \qquad \qquad \qquad \qquad \qquad + \sup_{x \in S_x} \sup_{y_1,y_2 \in S_y} \Big\lvert p_{k}(x) \big((1-q(y_1)) -(1- q(y_2)) \big)\Big\rvert \Bigg) \\ \\ &=& \sup_{y \in S_y} q(y) \Bigg( \sup_{x_1,x_2 \in S_x} \Big\lvert p_{k-1}(x_1) - p_{k-1}(x_2) \Big\rvert \Bigg) + \sup_{x \in S_x} p_{k-1}(x) \Bigg(\sup_{y_1,y_2 \in S_y} \Big\lvert q(y_1) - q(y_2) \Big\rvert \Bigg) \\ \\ &\quad& \quad + \sup_{y \in S_y} (1-q(y)) \Bigg( \sup_{x_1,x_2 \in S_x} \Big\lvert p_{k}(x_1) - p_{k}(x_2) \Big\rvert \Bigg) +\sup_{x \in S_x} p_{k}(x) \Bigg(\sup_{y_1,y_2 \in S_y} \Big\lvert q(y_1)- q(y_2) \Big\rvert \Bigg) \\ \\
&\leqslant& 2^{k-1-n} + \|g\|_\infty 2^{k-n} + 2^{k-n} + \|g\|_\infty 2^{k-n}\\ \\ &=& c \, 2^{k-n} \end{eqnarray*}
where $c = \frac{3}{2}+2\|g\|_\infty$. Combining this with (\ref{rangeest}) we obtain \begin{eqnarray*} N_{2^{-n}}(G_{M_{g,0}}) &\leqslant& 2^{n}\sum_{k=1}^{n} \Big\lvert \Big\{ S \in \Delta_{2^{-n}}^2 : S \cap D_k \neq \emptyset \Big\} \Big\rvert \, c \, 2^{k-n} \quad + \quad O \Big( (2^{n})^2 \Big) \\ \\ &=& 2^{n}\sum_{k=1}^{n} 2^n \, 2^{n-k} \, c \, 2^{k-n} \quad + \quad O \Big( (2^{n})^2 \Big) \\ \\ &=& c \, n \, (2^{n})^2 \quad + \quad O \Big( (2^{n})^2 \Big) \end{eqnarray*} and letting $n \to \infty$ we deduce that $ \dim_{\text{B}} G_{M_{g,0}} = 2. $ \end{proof}
Forcers and modifiers will be used throughout the remainder of this section without being mentioned explicitly. We may now complete the proof of Theorem \ref{tight}(1). \\ \begin{proof}[Proof of Theorem \ref{tight} (1)]\quad
Let $\alpha \in [2,3)$ and assume $U_\alpha[0,1]^2$ is a prevalent subset of $(C_\alpha[0,1]^2,d_{\alpha,2})$. Hence, there exists a Borel measure $\mu_1$ on $C_\alpha[0,1]^2$ and a compact set $K_1 \subset C_\alpha[0,1]^2$ such that
\begin{equation*} 0 < \mu_1(K_1)<\infty \end{equation*} and \begin{equation} \label{mu1} \mu_1 \Big(C_\alpha[0,1]^2 \setminus (U_\alpha[0,1]^2+f) \Big) = 0 \text{ for all } f \in C_\alpha[0,1]^2. \end{equation}
Let $f_1, f_2 \in C[0,1]$ be functions such that \begin{equation} \label{opposite} \overline{\dim}_\text{B}\, G_{ f_1} = 2 \end{equation} and \begin{equation} \label{opposite2} \overline{\dim}_\text{B}\, G_{ f_2} = 1. \end{equation}
It follows from (\ref{mu1}) that for $\mu_1$-almost all $f \in C_\alpha[0,1]^2$ we have \[ f \in \Big(U_\alpha[0,1]^2 - (F_{K_1, 0}+M_{f_1, 0}) \Big) \cap \Big(U_\alpha[0,1]^2 - (F_{K_1, 0}+M_{f_2, 0}) \Big) \] and since $\mu_1(K_1)>0$, we can choose $f_0 \in K_1$ such that \[ f_0 \in \Big(U_\alpha[0,1]^2 - (F_{K_1, 0}+M_{f_1, 0}) \Big) \cap \Big(U_\alpha[0,1]^2 - (F_{K_1, 0}+M_{f_2, 0}) \Big). \] Hence, there exist $h_1, h_2 \in U_\alpha[0,1]^2$ such that \[ f_0 = h_1-(F_{K_1, 0}+M_{f_1,0}) = h_2 - (F_{K_1, 0}+M_{f_2,0}). \] It follows that \[ f_0 + (F_{K_1, 0}+M_{f_1,0}) = h_1 \in U_\alpha[0,1]^2 \] and \[ f_0 + (F_{K_1, 0}+M_{f_2,0}) = h_2 \in U_\alpha[0,1]^2. \] Let $f_0^* \in C[0,1]$ be defined by $f_0^*(x) = f_0(x,0)$. We will now consider the horizons of $f_0 + (F_{K_1, 0}+M_{f_1,0})$ and $f_0 + (F_{K_1, 0}+M_{f_2,0})$. We have \begin{eqnarray*} H\Big(f_0 + (F_{K_1, 0}+M_{f_1,0})\Big)(x) &=& \sup_{y \in [0,1]} \Big(f_0(x,y) + (F_{K_1, 0}+M_{f_1,0})(x,y) \Big) \\ \\ &=& f_0(x,0)+(F_{K_1, 0}+M_{f_1,0})(x,0) \qquad \qquad \text{by Lemmas 5.1--5.2}\\ \\ &=& f_0^*(x) + f_1(x) \end{eqnarray*}
and similarly \begin{eqnarray*} H\Big(f_0 +(F_{K_1, 0}+M_{f_2,0})\Big)(x) &=& f_0^*(x) + f_2(x). \end{eqnarray*}
Since $f_0 + (F_{K_1, 0}+M_{f_1,0}) \in U_\alpha[0,1]^2$ and $f_0 + (F_{K_1, 0}+M_{f_2,0}) \in U_\alpha[0,1]^2$, it follows that \begin{equation} \label{bothlessthan2} \overline{\dim}_\text{B}\, G_{ f_0^* + f_1}, \, \, \overline{\dim}_\text{B} \,G_{ f_0^* + f_2} < 2. \end{equation} Since $\overline{\dim}_\text{B} G_{f_1} = 2$ it follows from Lemma \ref{upper equals} that $\overline{\dim}_\text{B} G_{f_0^*} = 2$. It now follows from (\ref{opposite2}) and Lemma \ref{upper equals} that $\overline{\dim}_\text{B} G_{f_0^* + f_2} = 2$ which contradicts (\ref{bothlessthan2}). \end{proof}
We turn to the proof of Theorem \ref{tight}(2).
\begin{proof}[Proof of Theorem \ref{tight} (2)]\hspace{1mm}
Let $\alpha \in [2,3)$ and assume that $L_\alpha[0,1]^2$ is a prevalent subset of $(C_\alpha[0,1]^2, d_{\alpha,2})$. Hence, there exists a Borel measure $\mu_2$ on $C_\alpha[0,1]^2$ and a compact set $K_2 \subset C_\alpha[0,1]^2$ such that $0 < \mu_2(K_2)<\infty$ and \begin{equation} \label{mu2} \mu_2\Big(C_\alpha[0,1]^2 \setminus (L_\alpha[0,1]^2+f) \Big) = 0 \text{ for all } f \in C_\alpha[0,1]^2. \end{equation}
Let \[ V=\{(x,0,z): x, z \in \mathbb{R} \} \in G_{3,2} \] and for $a \in \mathbb{R}$ let \[ V_a=V+(0,a,0) =\{(x,a,z): x, z \in \mathbb{R} \}. \] \\ We claim that for all $y \in [0,1]$, $\mu_2$-almost all $f \in K_2$ satisfy \begin{equation} \overline{\dim}_{\text{B}} \, G_f \cap V_y > \alpha-1. \label{bad} \end{equation} To see this, assume there exists $y_0 \in [0,1]$ such that \[ \mu_2 \Big(f \in K_2 : \overline{\dim}_{\text{B}} G_f \cap V_{y_0} \leqslant \alpha-1 \Big) >0; \] then \[ \mu_2 \Big(f \in K_2 : \overline{\dim}_{\text{B}} G_{H(f + F_{K_2,y_0})} \leqslant \alpha-1 \Big) >0 \] whence \[ \mu_2 \Big(C_\alpha[0,1]^2 \setminus (L_\alpha[0,1]^2 - F_{K_2, y_0}) \Big) >0, \] which contradicts (\ref{mu2}). \\ \\ We also note that, by results in \cite{boxslice}, if
$f \in C_\alpha[0,1]^2$, then for $\mathcal{L}^1$-almost all $y \in [0,1]$ we have \begin{equation} \overline{\dim}_{\text{B}} \, G_{f} \cap V_y \leqslant \alpha-1.\label{slice} \end{equation} \\ Define a map $\Phi : K_2 \times [0,1] \to [1,2]$ by \[ \Phi(f,y) = \overline{\dim}_\text{B} \, G_f \cap V_y, \] and observe that $\Phi$ is a measurable function. To see this let $\Phi_1: K_2 \times [0,1] \to C[0,1]$ and $\Phi_2: C[0,1] \to [1,2]$ be \[ (\Phi_1 (f,y)) (x) = f(x,y) \] and \[ \Phi_2 (f) = \overline{\dim}_\text{B} G_f. \] It is clear that $\Phi_1$ is continuous with respect to the metric $d$ on $K_2 \times [0,1]$ given by \[ d \Big( (f_1, y_1), (f_2,y_2) \Big) = d_{\alpha,2}(f_1,f_2) +\lvert y_1-y_2\rvert \]
and hence that $\Phi_1$ is Borel measurable as a map into $(C[0,1], \| \cdot \|_\infty)$. The function $\Phi_2$ is an upper limit of measurable functions and therefore is itself (Borel) measurable. It follows that the composition $\Phi = \Phi_2 \circ \Phi_1$ is (Borel) measurable. \\ \\ Consider the following integral. \begin{eqnarray*} \int_{K_2} \int_0^1 \Phi(f,y) \, d\mathcal{L}^1 \, d\mu_2 &=& \int_0^1 \int_{K_2} \Phi(f,y) \, d\mu_2 \, d\mathcal{L}^1 \qquad \qquad \text{by Fubini's Theorem} \\ \\ &>& \int_0^1 (\alpha - 1) \, \mu_2(K_2) \, d\mathcal{L}^1 \qquad \qquad \text{by (\ref{bad})} \\ \\ &=& (\alpha - 1) \, \mu_2(K_2). \end{eqnarray*}
It follows that \[ \mu_2 \Big(f \in K_2 : \int_0^1 \Phi(f,y) \, d\mathcal{L}^1 > \alpha - 1 \Big) > 0 \] and we can thus choose $f_0 \in K_2$ such that \[ \int_0^1 \Phi(f_0,y) \, d\mathcal{L}^1 > \alpha - 1. \] It follows that \[ \mathcal{L}^1 \Big(y \in [0,1] : \Phi(f_0,y) > \alpha - 1 \Big) > 0 \] but this contradicts (\ref{slice}). \end{proof}
\begin{centering}
\textbf{Acknowledgements}
\end{centering}
We thank James T. Hyde for suggesting the Fubini-type argument used in completing the proof of Theorem \ref{tight} (2).
\end{document} | arXiv |
\begin{document}
\renewcommand{Complexity of $3$-manifolds}{Complexity of $3$-manifolds} \renewcommand{\author}{Bruno Martelli\footnote{Supported by the INTAS project ``CalcoMet-GT'' 03-51-3663}}
\dohead
\markboth{Martelli}{Complexity}
\renewcommand{\pageref{martelli_first}}{\pageref{martelli_first}} \renewcommand{\pageref{martelli_last}}{\pageref{martelli_last}}
\begin{abstract} We give a summary of known results on Matveev's complexity of compact $3$-manifolds. The only relevant new result is the classification of all closed orientable irreducible $3$-manifolds of complexity $10$. \end{abstract}
\section{Introduction} \label{martelli_first} In 3-dimensional topology, various quantities are defined, that measure how complicated a compact 3-manifold $M$ is. Among them, we find the Heegaard genus, the minimum number of tetrahedra in a triangulation, and Gromov's norm (which equals the volume when $M$ is hyperbolic). Both Heegaard genus and Gromov norm are additive on connected sums, and behave well with respect to other common cut-and-paste operations, but it is hard to classify all manifolds with a given genus or norm. On the other hand, triangulations with $n$ tetrahedra are more suitable for computational purposes, since they are finite in number and can be easily listed using a computer, but the minimum number of tetrahedra is a quantity which does not behave well with any cut-and-paste operation on 3-manifolds. (Moreover, it is not clear what is meant by ``triangulation'': do the tetrahedra need to be embedded? Are ideal vertices admitted when $M$ has boundary?)
In 1988, Matveev introduced~\cite{Mat88} for any compact $3$-manifold $M$ a non-negative integer $c(M)$, which he called the \emph{complexity} of $M$, defined as the minimum number of vertices of a \emph{simple spine} of $M$. The function $c$ is finite-to-one on the most interesting sets of compact 3-manifolds, and it behaves well with respect to the most important cut-and-paste operations. Its main properties are listed below. \begin{description} \item[additivity] $c(M\#M')=c(M)+c(M')$; \end{description} \begin{description} \item[finiteness] for any $n$ there is a finite number of closed $\matP^2$-irreducible\ $M$'s with $c(M)=n$, and a finite number of hyperbolic $N$'s with $c(N) = n$; \end{description} \begin{description} \item[monotonicity] $c(M_F)\leqslant c(M)$ for any incompressible $F\subset M$ cutting $M$ into $M_F$. \end{description}
We recall some definitions used throughout the paper. Let $M$ be a compact $3$-manifold, possibly with boundary. We say that $M$ is \emph{hyperbolic} if it admits (after removing all tori and Klein bottles from the boundary) a complete hyperbolic metric of finite volume (possibly with cusps and geodesic boundary). Such a metric is unique by Mostow's theorem (see~\cite{McMu} for a proof). A surface in $M$ is \emph{essential} if it is incompressible, $\partial$-incompressible, and not $\partial$-parallel. Thurston's Hyperbolicity Theorem for Haken manifolds ensures that a compact $M$ with boundary is hyperbolic if and only if every component of $\partial M$ has $\chi\leqslant 0$, and $M$ does not contain essential surfaces with $\chi\geqslant 0$. The complexity satisfies also the following strict inequalities. \begin{description} \item[filling] every closed hyperbolic $M$ is a Dehn filling of some hyperbolic $N$ with $c(N)<c(M)$; \end{description} \begin{description} \item[strict monotonicity] $c(M_F)<c(M)$ if $F$ is essential and $M$ is closed $\matP^2$-irreducible\ or hyperbolic; \end{description}
Some results in complexity zero already show that the finiteness property does not hold for all compact $3$-manifolds. \begin{description} \item[complexity zero] the closed $\matP^2$-irreducible\ manifolds with $c=0$ are $S^3, \matRP^3,$ and $L(3,1)$. We also have $c(S^2\times S^1) = c(S^2\timtil S^1) = 0$. Interval bundles over surfaces and handlebodies also have $c=0$. \end{description}
The ball and the solid torus have therefore complexity zero. Moreover, the additivity property actually also holds for $\partial$-connected sums. These two facts together imply the following. \begin{description} \item[stability] The complexity of $M$ does not change when adding 1-handles to $M$ or removing interior balls from it. \end{description}
Note that both of these operations, which do not affect $c$, are ``invertible'' and hence topologically inessential. In what follows, a simplicial face-pairing $T$ of some tetrahedra is a
\emph{triangulation} of a closed 3-manifold $M$ when $M = |T|$. Tetrahedra are therefore
not necessarily embedded in $M$. A simplicial pairing $T$ is an \emph{ideal triangulation} of a compact $M$ with boundary if $M$ is $|T|$ minus open stars of all the vertices. The finiteness property above follows easily from the following. \begin{description} \item[naturality] if $M$ is closed $\matP^2$-irreducible\ and not $S^3, \matRP^3,$ or $L(3,1)$, then $c(M)$ is the minimum number of tetrahedra in a triangulation of $M$. If $N$ is hyperbolic with boundary, then $c(N)$ is the minimum number of tetrahedra in an ideal triangulation of $N$. \end{description}
The beauty of Matveev's complexity theory relies on the fact that simple spines are more flexible than triangulations: for instance spines can often be simplified by puncturing faces, and can always be cut along normal surfaces. In particular, we have the following result. An (ideal) triangulation $T$ of $M$ is \emph{minimal} when $M$ cannot be (ideally) triangulated with fewer tetrahedra. A \emph{normal surface} in $T$ is one intersecting the tetrahedra in normal triangles and squares, see~\cite{normal}. \begin{description} \item[normal surfaces] let $T$ be a minimal (ideal) triangulation of a closed $\matP^2$-irreducible\ (hyperbolic with boundary) manifold $M$ different from $S^3$, $\matRP^3$, and $L(3,1)$. If $F$ is a normal surface in $T$ containing some squares, then $c(M_F)< c(M)$. \end{description}
As an application of the previous properties, the following result was implicit in Matveev's paper~\cite{Mat90}. \begin{corollary} Let $T$ be a minimal triangulation of a closed $\matP^2$-irreducible\ 3-manifold $M$ different from $S^3,\matRP^3, L(3,1)$. Then $T$ has one vertex only, and it contains no normal spheres, except the vertex-linking one. \end{corollary} Computers can easily handle spines and triangulations, and manifolds of low complexity have been classified by various authors. Closed orientable irreducible manifolds with $c\leqslant 6$ were classified by Matveev~\cite{Mat88} in 1988. Those with $c=7$ were then classified in 1997 by Ovchinnikov~\cite{Ov, Mat:book}, and those with $c=8,9$ in 2001 by Martelli and Petronio~\cite{MaPe}. We present here the results we recently found for $c=10$. The list of all manifolds with $c=10$ has also been computed independently by Matveev~\cite{Mat:priv}, and the two tables (each consisting of $3078$ manifolds) coincide. The closed $\matP^2$-irreducible\ non-orientable manifolds with $c\leqslant 7$ have been listed independently in 2003 by Amendola and Martelli~\cite{AmMa2}, and Burton~\cite{Bu}.
Hyperbolic manifolds with cusps and without geodesic boundary were listed for all $c\leqslant 3$ in the orientable case by Matveev and Fomenko~\cite{MaFo} in 1988, and for all $c\leqslant 7$ by Callahan, Hildebrand, and Weeks~\cite{CaHiWe} in 1999. Orientable hyperbolic manifolds with geodesic boundary (and possibly some cusps) were listed for $c\leqslant 2$ by Fujii~\cite{Fuj} in 1990, and for $c\leqslant 4$ by Frigerio, Martelli, and Petronio~\cite{FriMaPe2} in 2002.
All properties listed above were proved by Matveev in~\cite{Mat90}, and extended when necessary to the non-orientable case by Martelli and Petronio in~\cite{MaPe:nonori}, except the filling property, which is a new result proved below in Subsection~\ref{properties:subsection}. The only other new results contained in this paper are the complexity-$10$ closed census (also constructed independently by Matveev~\cite{Mat:priv}), and the following counterexample (derived from that census) of a conjecture of Matveev and Fomenko~\cite{MaFo} stated in Subsection~\ref{counterexample:subsection}. \begin{proposition} There are two closed hyperbolic fillings $M$ and $M'$ of the same cusped hyperbolic $N$ with $c(M)<c(M')$ and ${\rm Vol}(M)>{\rm Vol}(M')$. \end{proposition} We mention the most important discovery of our census. \begin{proposition} There are $25$ closed hyperbolic manifolds with $c=10$ (while none with $c\leqslant 8$ and four with $c=9$). \end{proposition}
This paper is structured as follows: the complexity of a $3$-manifold is defined in Section~\ref{definitions:section}. We then collect in Section~\ref{closed:section} and~\ref{hyperbolic:section} the censuses of closed and hyperbolic 3-manifolds described above, together with the new results in complexity $10$. Relations between complexity and volume of hyperbolic manifolds are studied in Section~\ref{complexity:volume:section}. Lower bounds for the complexity, together with some infinite families of hyperbolic manifolds with boundary for which the complexity is known, are described in Section~\ref{lower:bounds:section}. The algorithm and tools usually employed to produce a census are described in Section~\ref{minimal:section}. Finally, we describe the decomposition of a manifold into \emph{bricks} introduced by Martelli and Petronio in~\cite{MaPe, MaPe:nonori}, necessary for our closed census with $c=10$, in Section~\ref{bricks:section}. All sections may be read independently, except that Sections~\ref{minimal:section} and~\ref{bricks:section} need the definitions contained in Section~\ref{definitions:section}.
\section{The complexity of a 3-manifold} \label{definitions:section} We define here simple and special spines, and the complexity of a 3-manifold. We then show a nice relation between spines without vertices and Riemannian geometry, found by Alexander and Bishop~\cite{AleBi}. Finally, we prove the filling property stated in the Introduction.
\subsection{Definitions} \label{definitions:subsection} We start with the following definition. \begin{defn} A compact 2-dimensional polyhedron $P$ is \emph{simple} if the link of every point in $P$ is contained in the graph \includegraphics[width = .4cm]{mercedes.eps}. \end{defn} Alternatively, $P$ is simple if it is locally contained in the polyhedron shown in Fig.~\ref{special:fig}-(3). A point, a compact graph, a compact surface are therefore simple. The polyhedron given by two orthogonal discs intersecting in their diameter is not simple. Three important possible kinds of neighborhoods of points are shown in Fig.~\ref{special:fig}. A point having the whole of \includegraphics[width =.3 cm]{mercedes_small.eps} as a link is called a \emph{vertex}, and its regular neighborhood is shown in Fig.~\ref{special:fig}-(3). The set $V(P)$ of the vertices of $P$ consists of isolated points, so it is finite. Note that points, graphs, and surfaces do not contain vertices. \begin{figure}
\caption{Neighborhoods of points in a special polyhedron.}
\label{special:fig}
\end{figure}
A compact polyhedron $P\subset M$ is a \emph{spine} of a compact manifold $M$ with boundary if $M$ collapses onto $P$. When $M$ is closed, we say that $P\subset M$ is a \emph{spine} if $M\setminus P$ is an open ball. \begin{defn} The \emph{complexity} $c(M)$ of a compact 3-manifold $M$ is the minimal number of vertices of a simple spine of $M$. \end{defn} As an example, a point is a spine of $S^3$, and therefore $c(S^3)=0$. A simple polyhedron is \emph{special} when every point has a neighborhood of one of the types (1)-(3) shown in Fig.~\ref{special:fig}, and the sets of such points induce a cellularization of $P$. That is, defining $S(P)$ as the set of points of type (2) or (3), the components of $P \setminus S(P)$ should be open discs -- the \emph{faces} -- and the components of $S(P)\setminus V(P)$ should be open segments -- the \emph{edges}. \begin{remark} \label{spine:triangulation:rem} A special spine of a compact $M$ with boundary is dual to an ideal triangulation of $M$, and a special spine of a closed $M$ is dual to a 1-vertex triangulation of $M$, as suggested by Fig.~\ref{dualspine:fig}. In particular, a special spine is a spine of a unique manifold. Therefore the naturality property of $c$ may be read as follows: every closed irreducible or hyperbolic manifold distinct from $S^3,\matRP^3$, and $L(3,1)$ has a special spine with $c(M)$ vertices. Such a special spine is then called \emph{minimal}. \end{remark} \begin{figure}
\caption{A special spine of $M$ is dual to a triangulation, which is ideal or 1-vertex, depending on whether $M$ has boundary or not.}
\label{dualspine:fig}
\end{figure}
\subsection{Complexity zero} A handlebody $M$ collapses onto a graph, which has no vertices, hence $c(M)=0$. An interval bundle $M$ over a surface has that surface as a spine, and hence $c(M)=0$ again. Note that, by shrinking the fibers of the bundle, the manifold $M$ admits product metrics with arbitrarily small injectivity radius and uniformly bounded curvature. This is a particular case of a relation between spines and Riemannian geometry found by Alexander and Bishop~\cite{AleBi}. A Riemannian 3-manifold $M$ is \emph{thin} when its curvature-normalized injectivity radius is less than some constant $a_2\approx 0.075$, see~\cite{AleBi} for details. We have the following. \begin{proposition}[Alexander-Bishop~\cite{AleBi}] A thin Riemannian 3-manifold has complexity zero. \end{proposition}
\subsection{The filling property} \label{properties:subsection} We prove here the filling property, stated in the Introduction. Recall from~\cite{Mat90, Mat:book} that by thickening a special spine $P$ of $M$ we get a handle decomposition $\xi_P$ of the same $M$. Normal surfaces in $\xi_P$ correspond to normal surfaces in the (possibly ideal) triangulation dual to $P$.
\begin{theorem} \label{filling:teo} Every closed hyperbolic manifold $M$ is a Dehn filling of some hyperbolic $N$ with $c(N)<c(M)$. \end{theorem} \begin{proof} Let $P$ be a minimal special spine of $M$, which exists by Remark~\ref{spine:triangulation:rem}. Take a face $f$ of $P$. By puncturing $f$ and collapsing the resulting polyhedron as much as possible, we get a simple spine $Q$ of some $N$ obtained by drilling $M$ along a curve. Since $P$ is special, $f$ is incident to at least one vertex. During the collapse, all vertices adjacent to $f$ have disappeared, hence $Q$ has fewer vertices than $P$. This gives $c(N)<c(M)$.
If $N$ is hyperbolic we are done. Suppose it is not. Then it is reducible, Seifert, or toroidal. If $N$ is reducible, the drilled solid torus is contained in a ball of $M$ and we get $N=M\# M'$ for some $M'$, hence $c(M)\leqslant c(N)<c(M)$ by the additivity property, a contradiction. Therefore $N$ is irreducible. Moreover $\partial N$ is incompressible in $N$ (because $M$ is not a lens space). Then the 1-dimensional portion of $Q$ can be removed, and we can suppose $Q\subset P$ is a spine of $N$ having only points of the type of Fig.~\ref{special:fig}.
Our $N$ cannot be Seifert (because $M$ is hyperbolic), hence its {\rm JSJ}\ decomposition consists of some tori $T_1,\ldots, T_k$. Each $T_i$ is essential in $N$ and compressible in $M$. Each $T_i$ can be isotoped into normal position with respect to $\xi_Q$. Since $Q\subset P$, every normal surface in $\xi_Q$ is also normal in $\xi_P$. The only normal surface in $\xi_P$ not containing squares is the vertex-linking sphere, therefore we have $c(M_{T_i})< c(M)$ for all $i$ by the normal surfaces property. Each $T_i$ is compressible in $M$, hence either it bounds a solid torus or is contained in a ball. The latter case is excluded, otherwise $M_{T_i}$ is the union of $M\# M'$ and a solid torus, and $c(M)\leqslant c(M_{T_i})<c(M)$.
Therefore each $T_i$ bounds a solid torus in $M$. Each solid torus contains the drilled curve, hence they all intersect, and there is a solid torus $H$ bounded by a $T_i$ containing all the others. Therefore $M_{T_i} = N' \cup H$ where $N'$ is a block of the {\rm JSJ}\ decomposition, which cannot be Seifert, hence it is hyperbolic. We have $c(N')=c(M_{T_i})<c(M)$, and $M$ is obtained by filling $N'$, as required. \end{proof}
\begin{remark} The proof of Theorem~\ref{filling:teo} is also valid for $M$ \emph{hyperbolike}, \emph{i.e.}~irreducible, atoroidal, and not Seifert. \end{remark}
\section{Closed census} \label{closed:section} We describe here the closed orientable irreducible manifolds with $c\leqslant 10$, and the closed non-orientable $\matP^2$-irreducible\ ones with $c\leqslant 7$. Such manifolds are collected in terms of their geometry, if any, in Table~\ref{closed:table}. The complete list of manifolds can be downloaded from~\cite{weblist}. \begin{table}
\begin{center}
\begin{tabular}{rccccccccccc}
& $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\multicolumn{12}{c}{orientable\phantom{\Big|}} \\
lens spaces &
$3$ & $2$ & $3$ & $6$ & $10$ & $20$ & $36$ & $72$ & $136$ & $272$ & $528$ \\
other elliptic &
. & . & $1$ & $1$ & $4$ & $11$ & $25$ & $45$ & $78$ & $142$ & $270$\\
flat &
. & . & . & . & . & . & $6$ & . & . & . & . \\
Nil &
. & . & . & . & . & . & $7$ & $10$ & $14$ & $15$ & $15$\\
${\rm SL}_2\matR$ &
. & . & . & . & . & . & . & $39$ & $162$ & $513$ & $1416$\\
{\rm Sol} &
. & . & . & . & . & . & . & $5$ & $9$ & $23$ & $39$\\
$\matH^2\times\matR$ &
. & . & . & . & . & . & . & . & $2$ & . & $8$\\
hyperbolic &
. & . & . & . & . & . & . & . & . & $4$ & $25$\\
not geometric &
. & . & . & . & . & . & . & $4$ & $35$ & $185$ & $777$\\
total orientable &
$\bf{3}$ & $\bf{2}$ & $\bf{4}$ & $\bf{7}$ & $\bf{14}$ & $\bf{31}$ & $\bf{74}$ & $\bf{175}$ & $\bf{436}$ & $\bf{1154}$ & $\bf{3078}$ \\
\multicolumn{12}{c}{non-orientable\phantom{\Big|}} \\
flat &
. & . & . & . & . & . & $4$ & . & & & \\
$\matH^2\times\matR$ &
. & . & . & . & . & . & . & $2$ & & & \\
{\rm Sol} &
. & . & . & . & . & . & $1$ & $1$ & & & \\
total non-orientable &
. & . & . & . & . & . & $\bf 5$ & $\bf 3$ & & & \\
\end{tabular}
\end{center}
\caption{The number of closed $\matP^2$-irreducible\ manifolds of
given complexity (up to $10$ in the orientable case, and up to $7$ in the non-orientable one) and geometry. Recall that there is no $\matP^2$-irreducible\ manifold of type $S^2\times\matR$, and no non-orientable one of type $S^3$, Nil, and ${\rm SL}_2\matR$. }
\label{closed:table} \end{table}
\subsection{The first $7$ geometries} We recall~\cite{Sco} that there are eight important 3-dimensional geometries, six of them concerning Seifert manifolds. A Seifert fibration is described via its \emph{normalized parameters} $\big(F, (p_1, q_1), \ldots, (p_k, q_k), t\big)$, where $F$ is a closed surface, $p_i>q_i>0$ for all $i$, and $t\geqslant -k/2$ (obtained by reversing orientation if necessary). The Euler characteristic $\chi^{\rm orb}$ of the base orbifold and the Euler number $e$ of the fibration are given respectively by $$\chi^{\rm orb} = \chi(F)-\sum_{i=1}^k \left(1-\frac 1{p_i}\right), \qquad e = t + \sum_{i=1}^k \frac{q_i}{p_i}$$ and they determine the geometry of the Seifert manifold (which could have different fibrations) according to Table~\ref{tabellina}. The two non-Seifert geometries are the Sol and the hyperbolic ones~\cite{Sco}.
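As a concrete illustration of these formulas, consider the normalized parameters $\big(S^2,(2,1),(3,1),(5,1),-1\big)$, which describe the Poincar\'e homology sphere. A routine computation gives $$\chi^{\rm orb} = 2-\Big(\frac 12+\frac 23+\frac 45\Big)=\frac 1{30}>0, \qquad e = -1+\frac 12+\frac 13+\frac 15=\frac 1{30}\neq 0,$$ so Table~\ref{tabellina} assigns to this manifold the geometry $S^3$, as expected for an elliptic manifold.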
\begin{table} \begin{center}
\begin{tabular}{c|ccc}
\phantom{\Big|} & $\chi^{\rm orb}>0$ & $\chi^{\rm orb}=0$ & $\chi^{\rm orb}<0$ \\ \hline
\phantom{\Big|} $e=0$ & $S^2\times\matR$ & $E^3$ & $H^2\times\matR$ \\
\phantom{\Big|} $e\neq 0$ & $S^3$ & Nil & $\widetilde{{\rm SL}_2\matR}$ \\ \end{tabular} \end{center} \caption{The six Seifert geometries.} \label{tabellina} \end{table}
The following result shows how to compute the complexity (when $c\leqslant 10$) of most manifolds belonging to the first 7 geometries. It is proved for $c\leqslant 9$
in~\cite{MaPe:geometric}, and completed for $c=10$ here in Subsection~\ref{bricks:found:subsection}. We define the norm $|p,q|$ of two coprime non-negative integers inductively by setting
$|1,0|=|0,1|=|1,1|=0$ and $|p+q,q|=|p,q+p|=|p,q|+1$. A norm $\|A\|$ on matrices $A\in\GL_2(\matZ)$ is also defined in~\cite{MaPe:geometric}.
\begin{theorem}\label{non:hyperbolic:teo} Let $M$ be a geometric non-hyperbolic manifold with $c(M)\leqslant 10$: \begin{enumerate}
\item if $M$ is a lens space $L(p,q)$, then $c(M)=|p,q|-2$; \item if $M$ is a torus bundle with monodromy $A$
then $c(M)=\min\{\|A\|+5, 6\}.$ \item \label{23m} if $M=\big(S^2,(2,1),(3,1),(m,1),-1\big)$ with $m\geqslant 5$, we have $c(M)=m$; \item \label{2nm} if $M=\big(S^2,(2,1),(n,1),(m,1),-1\big)$ is not of the type above,
we have $c(M)=n+m-2$; \item \label{23pq} if $M=\big(S^2,(2,1),(3,1),(p,q),-1\big)$ with $p/q>5$ is not of the types above,
we have $c(M)=|p,q|+2$; \item if $M=\big(F, (p_1, q_1),\ldots, (p_k, q_k), t\big)$ is not of the types above, then $$c(M)=\max\big\{0,t-1+\chi(F)\big\}+6\big(1-\chi(F)\big) +
\sum_{i=1}^k\big(|p_i,q_i|+2\big). $$ \end{enumerate} \end{theorem}
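To illustrate how Theorem~\ref{non:hyperbolic:teo} is applied, note that the computations involved are immediate from the definitions above. For instance, $|5,2|=|3,2|+1=|1,2|+2=|1,1|+3=3$, so item (1) gives $c\big(L(5,2)\big)=1$; similarly $|4,1|=3$ and $c\big(L(4,1)\big)=1$, in agreement with the two lens spaces of complexity $1$ counted in Table~\ref{closed:table}. Likewise, item (3) applied to $\big(S^2,(2,1),(3,1),(5,1),-1\big)$, the Poincar\'e homology sphere considered above, gives complexity $5$.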
Note from Table~\ref{closed:table} that a Seifert manifold with $c<6$ has $\chi^{\rm orb}>0$ and one with $c\leqslant 6$ has $\chi^{\rm orb}\geqslant 0$, whereas for higher $c$ most Seifert manifolds have $\chi^{\rm orb}<0$.
\begin{remark} Theorem~\ref{non:hyperbolic:teo}, together with analogous formulas for some non-geometric graph manifolds, follows from the decomposition of closed manifolds into bricks, introduced in Section~\ref{bricks:section}. The list of all non-hyperbolic manifolds with $c\leqslant 10$ is then computed from such formulas by a computer program, available from~\cite{weblist}. A mistake in that program produced, in~\cite{MaPe}, a list of $1156$ manifolds for $c=9$ instead of $1154$ (two graph manifolds with distinct parameters were counted twice). Using Turaev-Viro invariants, Matveev has also recently checked that all the listed closed manifolds with $c\leqslant 10$ are distinct~\cite{Mat:priv}. \end{remark}
\subsection{Hyperbolic manifolds} Table~\ref{hyperbolic:table} shows all closed hyperbolic manifolds with $c\leqslant 10$. Each such manifold is a Dehn surgery on the chain link with 3 components shown in Fig.~\ref{chainlink:fig}, with parameters shown in the table.
\begin{figure}
\caption{The chain link with $3$ components.}
\label{chainlink:fig}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{ccccc}
surgery parameters & N. & volume & shortest geod & homology \\
\multicolumn{4}{c}{complexity $9$\phantom{\Big|}} \\
$1,-4,-3/2$ & $1$ & $0.942707362$ & $0.5846$ & $\matZ_5 + \matZ_5$ \\
$1,-4,2$ & $2$ & $0.981368828$ & $0.5780$ & $\matZ_5$ \\
$1,-5,-1/2$ & $3$ & $1.014941606$ & $0.8314$ & $\matZ_3 + \matZ_6$ \\
$1,-3/2,-3/2$ & $4$ & $1.263709238$ & $0.5750$ & $\matZ_5 + \matZ_5$ \\
\multicolumn{4}{c}{complexity $10$\phantom{\Big|}} \\
$1,-5,2$ & $5$ & $1.284485300$ & $0.4803$ & $\matZ_6$ \\
$1,2,1/2$ & $6$ & $1.398508884$ & $0.3661$ & trivial \\
$1,-5,1/2$ & $7$ & $1.414061044$ & $0.7941$ & $\matZ_6$ \\
$1,-4,3$ & $8$ & $1.414061044$ & $0.3648$ & $\matZ_{10}$ \\
$1,-4,-4/3$ & $9$ & $1.423611900$ & $0.3523$ & $\matZ_{35}$ \\
$1,2,-1/2$ & $10$ & $1.440699006$ & $0.3615$ & $\matZ_3$ \\
$1,2,-3/2$ & $12$ & $1.529477329$ & $0.3359$ & $\matZ_5$ \\
$1,-4,-5/2$ & $13$ & $1.543568911$ & $0.3353$ & $\matZ_{35}$ \\
$1,-1/2,-5/2$ & $14$ & $1.543568911$ & $0.5780$ & $\matZ_{21}$ \\
$1,-4,-5/3$ & $16$ & $1.583166660$ & $0.2788$ & $\matZ_{40}$ \\
$1,-6,-1/2$ & $17$ & $1.583166660$ & $0.5577$ & $\matZ_{21}$ \\
$1, -1/2, -7/2$ & $18$ & $1.583166660$ & $0.7774$ & $\matZ_3 + \matZ_9$ \\
$2,-3/2,-3/2$ & $19$ & $1.588646639$ & $0.3046$ & $\matZ_{30}$ \\
$1,-5,-3/2$ & $20$ & $1.588646639$ & $0.5345$ & $\matZ_{30}$ \\
$1,-4,3/2$ & $21$ & $1.610469711$ & $0.2499$ & $\matZ_5$ \\
$1,2,-5/2$ & $24$ & $1.649609715$ & $0.2627$ & $\matZ_7$ \\
$1,-1/2,-3/2$ & $25$ & $1.649609715$ & $0.5087$ & $\matZ_{15}$ \\
$1,1/2,-6$ & $34$ & $1.757126029$ & $0.7053$ & $\matZ_7$ \\
$1,-1/2,-1/2$ & $49$ & $1.824344322$ & $0.4680$ & $\matZ_3 + \matZ_3$ \\
$1,-5,-1/3$ & $55$ & $1.831931188$ & $0.5306$ & $\matZ_2 + \matZ_{12}$ \\
$1,-3/2,-5/3$ & $74$ & $1.885414725$ & $0.3970$ & $\matZ_{40}$ \\
$1,-5/2,-5/2$ & $76$ & $1.885414725$ & $0.5846$ & $\matZ_7 + \matZ_7$ \\
$-5/2,-1/2,-1/2$& $77$ & $1.885414725$ & $0.5846$ & $\matZ_{39}$ \\
$1,-5,-2/3$ & $91$ & $1.910843793$ & $0.4421$ & $\matZ_{30}$ \\
$1,-4/3,-3/2$ & $139$ & $1.953708315$ & $0.3535$ & $\matZ_{35}$
\end{tabular}
\end{center}
\caption{The hyperbolic manifolds of complexity $9$ and $10$. Each such manifold is
described as the surgery on the chain link with some parameters.}
\label{hyperbolic:table} \end{table}
It is proved in~\cite{MaFo} that every closed 3-manifold with $c\leqslant 8$ is a graph manifold, and that the first closed hyperbolic manifolds arise with $c=9$. The hyperbolic manifolds with $c=9$ then turned out~\cite{MaPe} to be the 4 smallest ones known. The most interesting question about those with $c=10$ is then whether they are also among the smallest ones known, for instance comparing them with the closed census~\cite{HoWe} also used by SnapPea~\cite{SnapPea}. As explained in~\cite{DuTh}, every manifold in that census has shortest geodesic of length bigger than $0.3$, and therefore some manifolds having $c=10$ are not present there (namely, those in Table~\ref{hyperbolic:table} corresponding to N.~$16$, $21$, $24$). We have therefore used SnapPea (in its Python version) to compute a list of many surgeries on the chain link with $3$ components (avoiding the non-hyperbolic ones, listed in~\cite{MaPe:chain}), available from~\cite{weblist}, which contains many closed manifolds of volume smaller than $2$ that are not present in SnapPea's closed census. The entry ``N.'' in Table~\ref{hyperbolic:table} gives the position of the manifold in our table from~\cite{weblist}. The first $10$ manifolds of the two lists nevertheless coincide and are also fully described in~\cite{HoWe}, and they all have $c\leqslant 10$, as Table~\ref{hyperbolic:table} shows.
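Such computations are easy to reproduce. The following sketch is only illustrative: it assumes the modern SnapPy package and that the census link \texttt{L6a5} is the chain link with $3$ components; up to the framing conventions of~\cite{MaPe:chain}, filling it with parameters $1,-4,-3/2$ should return the first manifold of Table~\ref{hyperbolic:table}.
\begin{verbatim}
import snappy                            # requires the SnapPy package

M = snappy.Manifold("L6a5")              # chain link with 3 components (assumption)
M.dehn_fill([(1, 1), (-4, 1), (-3, 2)])  # surgery parameters 1, -4, -3/2
print(M.volume(), M.homology())          # expect ~0.942707 and Z/5 + Z/5
\end{verbatim}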
\subsection{Non-geometric manifolds} Every non-hyperbolic orientable manifold with $c\leqslant 10$ is a graph manifold, \emph{i.e.}~its {\rm JSJ}\ decomposition consists of Seifert or Sol blocks. A non-geometric orientable manifold with $c\leqslant 11$ whose {\rm JSJ}\ decomposition contains a hyperbolic block is constructed in~\cite{AmMa}, and from our census it now follows that it cannot have $c\leqslant 10$. Therefore we have proved the following. \begin{theorem} The first closed orientable irreducible manifold with non-trivial {\rm JSJ}\ decomposition containing hyperbolic blocks has $c=11$. \end{theorem} All graph manifolds with $c\leqslant 10$ are collected in Table~\ref{Seifert:table} according to their {\rm JSJ}\ decomposition into fibering pieces, and to the type of fiberings of each piece. \begin{table}
\begin{center}
\begin{tabular}{rccccccccccc}
& $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\multicolumn{12}{c}{geometric\phantom{\Big|}} \\
lens spaces &
$3$ & $2$ & $3$ & $6$ & $10$ & $20$ & $36$ & $72$ & $136$ & $272$ & $528$ \\
$S^2, 3$
& . & . & $1$ & $1$ & $4$ & $11$ & $31^*$ & $84$ & $226$ & $586$ & $1477$ \\
$S^2, 4$
& . & . & . & . & . & . & $2$ & $4$ & $14$ & $40$ & $120$ \\
$S^2, 5$
& . & . & . & . & . & . & . & . & . & $2$ & $5$ \\
$\matRP^2, 2$
& . & . & . & . & . & . & $2$ & $4$ & $14$ & $34$ & $90$ \\
$\matRP^2, 3$
& . & . & . & . & . & . & . & . & . & $2$ & $5$ \\
$T$ or $K$
& . & . & . & . & . & . & $4^*$ & $2$ & $2$ & $2$ & $2$ \\
$T,1$ or $K,1$
& . & . & . & . & . & . & . & . & . & $4$ & $10$\\
$T$-fiberings over $S^1$
& . & . & . & . & . & . & . & $2$ & $2$ & $6$ & $6$\\
$T$-fiberings over $I$
& . & . & . & . & . & . & . & $3$ & $7$ & $17$ & $33$\\
\multicolumn{12}{c}{non-geometric\phantom{\Big|}} \\
$D, 2$ --- $D, 2$
& . & . & . & . & . & . & . & $4$ & $35$ & $168$ & $674$\\
$A, 1$
& . & . & . & . & . & . & . & . & . & $8$ & $24$ \\
$D, 2$ --- $D, 3$
& . & . & . & . & . & . & . & . & . & $3$ & $24$ \\
$S, 1$ --- $D, 2$
& . & . & . & . & . & . & . & . & . & $3$ & $24$ \\
$D, 2$ --- $A, 1$ --- $D, 2$
& . & . & . & . & . & . & . & . & . & $3$ & $31$ \\
total
& $\bf{3}$ & $\bf{2}$ & $\bf{4}$ & $\bf{7}$ & $\bf{14}$ & $\bf{31}$ & $\bf{74}$ & $\bf{175}$ & $\bf{436}$ & $\bf{1150}$ & $\bf{3053}$ \\
\end{tabular}
\end{center}
\caption{The type of graph manifolds of
given complexity, up to $10$. Here, $I, D, S, A, T, K$ denote respectively the closed interval,
the disc, the M\"obius strip, the annulus, the torus, and the Klein bottle. We denote by
$X,n$ a block with base space the surface $X$ and $n$ exceptional fibers. We write $X$ for
$X,0$. We have counted as $T$-fiberings only the Sol manifolds, not the manifolds also admitting
a Seifert structure.
There is a flat manifold with $c=6$ counted twice, since it has two different fibrations,
corresponding to the asterisks.}
\label{Seifert:table} \end{table}
\subsection{The simplest manifolds} As the following discussion shows, in most geometries, the manifolds with lowest complexity are the ``simplest'' ones.
\subsubsection{Elliptic} The elliptic manifolds of smallest complexity are $S^3, \matRP^3,$ and $L(3,1)$, having $c=0$. The first manifold which is not a lens space is $\big(S^2,(2,1),(2,1),(2,1),-1\big)$ and has $c=2$. It is the elliptic manifold with smallest non-cyclic fundamental group, having order $8$~\cite{Mat:book}.
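Note that this value of the complexity agrees with Theorem~\ref{non:hyperbolic:teo}-(\ref{2nm}), which gives $c=2+2-2=2$ here.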
\subsubsection{Flat} Every (orientable or not) flat manifold has $c=6$. A typical way to obtain a flat 3-manifold $M$ is from a face-pairing of the cube: taking a triangulation of the cube into $6$ tetrahedra compatible with the face-pairing, we get a minimal triangulation of $M$.
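For instance, the $3$-torus arises in this way from the pairing of opposite faces of the cube via translations: the standard triangulation of the cube into $6$ tetrahedra along a main diagonal is compatible with these pairings, so it descends to a minimal triangulation of the $3$-torus, realizing $c=6$.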
\subsubsection{$\matH^2\times\matR$} The first manifolds of type $\matH^2\times\matR$ are non-orientable and have $c=7$, and are also the manifolds of that geometry with smallest base orbifold~\cite{AmMa2}, having volume $-2\pi\chi^{\rm orb} = \pi/3$.
\subsubsection{Sol} The first manifold of type Sol is also non-orientable and has $c=6$, and it is the unique filling of the Gieseking manifold, the cusped hyperbolic manifold with smallest volume $1.0149\ldots$~\cite{Ad} and smallest
complexity $1$~\cite{CaHiWe}. It is also the unique torus fibering whose monodromy $A=\matr 0111$ is hyperbolic with $|\tr A|<2$~\cite{AmMa2}.
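(Note that $\det A=-1$, so the eigenvalues $(1\pm\sqrt{5})/2$ of $A$ are real and of modulus different from $1$: the monodromy is Anosov even though $|\tr A|=1<2$, which could not happen for a matrix of determinant $1$.)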
\subsubsection{Hyperbolic} As we said above, the first orientable hyperbolic manifolds are the smallest ones known. It would be interesting to know the complexity of the first non-orientable closed hyperbolic manifold, whose volume is probably considerably bigger than in the orientable case, see~\cite{HoWe}.
\section{Census of hyperbolic manifolds} \label{hyperbolic:section} We describe here the compact hyperbolic manifolds with boundary with $\chi=0$ and $c\leqslant 7$, and the orientable ones with $\chi<0$ and $c\leqslant 4$.
\subsection{Manifolds with $\chi = 0$} Recall that we define a compact $M$ to be hyperbolic when it admits a complete metric of finite volume and geodesic boundary, after removing all boundary components with $\chi = 0$. Therefore, hyperbolic manifolds $M$ with $\chi(M)=0$ have some cusps based on tori or Klein bottles, and those with $\chi(M)<0$ have geodesic boundary and possibly some cusps. To avoid confusion, we define the \emph{topological boundary} of $M$ to be the union of the geodesic boundary and the cusps.
Hyperbolic manifolds with $\chi(M)=0$ and $c\leqslant 7$ were listed by Callahan, Hildebrand, and Weeks in~\cite{CaHiWe} and form the cusped census used by SnapPea. They are collected, according to their topological boundary, in Table~\ref{cusped:table}. Hyperbolicity of each manifold was checked by solving Thurston's equations, and all manifolds were distinguished by computing their Epstein-Penner \emph{canonical decomposition}~\cite{EpPe}. In practice, volume, homology, and the length of the shortest geodesic are usually enough to distinguish two such manifolds.
\begin{table}
\begin{center}
\begin{tabular}{rcccccccc}
topological boundary & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\
\multicolumn{9}{c}{orientable\phantom{\Big|}} \\
$T$ & . & . & $2$ & $9$ & $52$ & $223$ & $913$ & $3388$ \\
$T,T$ & . & . & . & . & $4$ & $11$ & $48$ & $162$ \\
$T,T,T$ & . & . & . & . & . & . & $1$ & $2$ \\
total orientable & . & . & $\bf 2$ & $\bf 9$ & $\bf{56}$ & $\bf{234}$ & $\bf{962}$ & $\bf{3552}$ \\
\multicolumn{9}{c}{non-orientable\phantom{\Big|}} \\
$K$ & . & $1$ & $1$ & $5$ & $14$ & $52$ & $171$ & $617$ \\
$K,K$ & . & . & $1$ & $2$ & $9$ & $23$ & $68$ & $208$ \\
$K,K,K$ & . & . & . & . & . & . & $3$ & $6$ \\
$K,K,K,K$ & . & . & . & . & . & . & $1$ & . \\
$T$ & . & . & . & . & $1$ & $1$ & $4$ & $19$ \\
$T,T$ & . & . & . & . & . & . & $1$ & . \\
$K,T$ & . & . & . & . & $1$ & $2$ & $8$ & $31$ \\
$K,K,T$ & . & . & . & . & $1$ & . & $3$ & $6$ \\
total non-orientable
& . & $\bf 1$ & $\bf 2$ & $\bf 7$ & $\bf{26}$ & $\bf{78}$ & $\bf{259}$ & $\bf{887}$
\end{tabular}
\end{center}
\caption{The number of cusped hyperbolic manifolds of
given complexity, up to $7$. The ``topological boundary'' indicates the tori $T$ and Klein bottles
$K$ present as cusps.}
\label{cusped:table} \end{table}
\subsection{Manifolds with $\chi<0$} \label{Kojima:subsection} Equations analogous to Thurston's were constructed by Frigerio and Petronio in~\cite{FriPe} for an ideal triangulation $T$ of a manifold $M$ with $\chi(M)<0$. A solution of such equations gives a realization of the hyperbolic structure of $M$ via partially truncated hyperbolic tetrahedra. One such tetrahedron is parametrized by its $6$ interior dihedral angles $\alpha_1,\ldots,\alpha_6$. The sum of the $3$ of them incident to a given vertex must be less than or equal to $\pi$: the vertex is truncated if the sum is less than $\pi$, and ideal if it equals $\pi$. The compatibility equations ensure that identified edges all have the same length and that dihedral angles sum to $2\pi$ around each resulting edge. These equations, together with others checking the completeness of the cusps, realize the hyperbolic structure for $M$. Then Kojima's \emph{canonical decomposition}~\cite{Koj}, analogous to Epstein-Penner's, is a complete invariant which allows one to distinguish manifolds. In contrast with the case $\chi=0$, there are plenty of manifolds having the same complexity that are not distinguished by volume, homology, or Turaev-Viro invariants, and the canonical decomposition seems to be the only available tool, see Subsection~\ref{families:subsection}. The results from~\cite{FriMaPe2} are summarized in Table~\ref{geodesic:table}.
\begin{table}
\begin{center}
\begin{tabular}{rccccc}
topological boundary
& $0$ & $1$ & $2$ & $3$ & $4$ \\
$2$ & . & . & $8$ & $76$ & $628$ \\
$3$ & . & . & . & $74$ & $2034$ \\
$4$ & . & . & . & . & $2340$ \\
$2,0$ & . & . & . & $1$ & $18$ \\
$3,0$ & . & . & . & . & $12$ \\
$2,0,0$ & . & . & . & . & $1$ \\
total & . & . & $\bf 8$ & $\bf{151}$ & $\bf{5033}$ \\
\end{tabular}
\end{center}
\caption{The number of orientable hyperbolic manifolds with non-empty geodesic boundary of
given complexity, up to $4$. The ``topological boundary'' indicates the genera of the boundary
components, with zeros corresponding to cusps.}
\label{geodesic:table} \end{table}
\begin{remark} The two censuses of hyperbolic manifolds described in this Section have a slightly more experimental nature than the closed census of Section~\ref{closed:section}, since solving hyperbolicity equations and calculating the canonical decomposition involve numerical calculations with truncated digits. \end{remark}
\section{Complexity and volume of hyperbolic manifolds} \label{complexity:volume:section} We describe here some relations between the complexity and the volume of a hyperbolic 3-manifold.
\subsection{Ideal tetrahedra and octahedra} As Theorem~\ref{vol:c:teo} below shows, there is a constant $K$ such that ${\rm Vol} (M)< K \cdot c(M)$ for any hyperbolic $M$. Let $v_{\rm T} = 1.0149\ldots $ and $v_{\rm O} = 3.6638\ldots$ be the volumes respectively of the regular ideal hyperbolic tetrahedron and octahedron. \begin{theorem} \label{vol:c:teo} Let $M$ be hyperbolic, with or without boundary. If $\chi(M)=0$ we have ${\rm Vol}(M) \leqslant v_{\rm T}\cdot c(M)$. If $\chi(M)<0$ we have ${\rm Vol} (M)< v_{\rm O}\cdot c(M)$. \end{theorem} \begin{proof} First, note that by the naturality property of the complexity, $c(M)$ is the minimum number of tetrahedra in an (ideal) triangulation. If $M$ is closed, take a minimal triangulation $T$ and straighten it. Tetrahedra may overlap or collapse to low-dimensional objects, having volume zero. Since geodesic tetrahedra have volume less than $v_{\rm T}$, we get the inequality.
If $M$ is not closed, let $T$ be an ideal triangulation for $M$ with $c(M)$ tetrahedra. We can realize topologically $M$ with its boundary tori removed, by partially truncating each tetrahedron in $T$ (\emph{i.e.}~removing the vertex only in the presence of a cusp, and an open star of it in the presence of true boundary). Then we can straighten every truncated tetrahedron with respect to the hyperbolic structure in $M$. As above, tetrahedra may overlap or collapse. In any case, the volume of each such tetrahedron is at most $v_{\rm T}$ if there is no boundary, and strictly less than $v_{\rm O}$ in general, since any ideal tetrahedron has volume at most equal to $v_{\rm T}$, and any partially truncated tetrahedron has volume strictly less than $v_{\rm O}$~\cite{Ush}. \end{proof}
The constants $v_{\rm T}$ and $v_{\rm O}$ are the best possible ones, see Remark~\ref{best:constants:rem}. A converse inequality of the type $c(M) < K' \cdot {\rm Vol} (M)$ is impossible, because for every large $C$ there are only finitely many hyperbolic manifolds with complexity less than $C$, but infinitely many with volume less than $C$.
\subsection{First segments of $c$ and ${\rm Vol}$} Complexity and volume give two partial orderings on the set $\calH$ of all hyperbolic 3-manifolds. By what was just said, they are globally qualitatively very different. Nevertheless, as noted in~\cite{MaFo}, they might have similar behaviours on some subsets of $\calH$. We propose the following conjecture.
\begin{conjecture} \label{volume:complexity:conj} Among hyperbolic manifolds with the same topological boundary, the ones with smallest complexity have volume smaller than the other ones. \end{conjecture} The conjecture is stated more precisely as follows: let $\calM_{\Sigma}$ be the set of hyperbolic manifolds having some fixed topological boundary $\Sigma$. Suppose $M\in\calM_{\Sigma}$ is so that $c(M')\geqslant c(M)$ for all $M'\in\calM_{\Sigma}$. We conjecture that ${\rm Vol}(M')>{\rm Vol}(M)$ for all $M'\in\calM_{\Sigma}$ having $c(M')>c(M)$. We now discuss our conjecture.
\subsubsection{Closed case} The closed hyperbolic manifolds with smallest $c=9$ are the four having smallest volume known, see Table~\ref{hyperbolic:table}. Therefore Conjecture~\ref{volume:complexity:conj} claims that these four are actually the ones having smallest volumes among all closed hyperbolic manifolds.
\subsubsection{Connected topological boundary} In this case, Conjecture~\ref{volume:complexity:conj} is true, as the following shows. \begin{theorem} \label{connected:surface:teo} Among hyperbolic manifolds whose topological boundary is a connected surface, the ones with smallest volume are the ones with smallest complexity. \end{theorem} \begin{proof} Among manifolds having one toric cusp, the figure-8 knot complement and its sibling are those with minimal volume $2\cdot v_{\rm T}$~\cite{CaMe} and minimal complexity $2$. Among those with a Klein bottle cusp, the Gieseking manifold is the one with minimal volume $v_{\rm T}$~\cite{Ad} and minimal complexity $1$. Our assertion restricted to orientable 3-manifolds bounded by a connected surface of higher genus is proved in~\cite{FriMaPe} combining the naturality property of the complexity with Miyamoto's description~\cite{Miy} of all such manifolds with minimal volume. The same proof also works in the general case. \end{proof}
\subsubsection{Experimental data} Conjecture~\ref{volume:complexity:conj} is true when restricted to the manifolds of Tables~\ref{hyperbolic:table}, ~\ref{cusped:table}, and~\ref{geodesic:table}, for all the boundary types involved (see~\cite{CaHiWe},~\cite{SnapPea}, and~\cite{FriMaPe2}). One sees from Table~\ref{cusped:table} that the manifolds of type $(K,K)$, $(T,T)$, $(K,T)$, $(K,K,T)$, $(T,T,T)$, $(K,K,K)$, and $(K,K,K,K)$ with smallest complexity have respectively $c=2,4,4,4,6,6,$ and $6$. The manifolds with $c=2$ are constructed with two regular ideal tetrahedra, and hence have volume $2\cdot v_{\rm T}$. Those with $c=4$ are constructed either with $4$ regular ideal tetrahedra, hence having volume $4\cdot v_{\rm T} = 4.05976\ldots$, or with one regular ideal octahedron, of volume $v_{\rm O} = 3.6638\ldots$ (therefore Conjecture~\ref{volume:complexity:conj} claims that every other $M$ with the same topological boundary has volume bigger than $4\cdot v_{\rm T}$). Those with $c=6$ have volume $2\cdot v_{\rm D} = 5.3334\ldots$, where $v_{\rm D} = 2.6667\ldots$ is the volume of the ``triangular ideal drum'' used by Thurston~\cite{Thu} to construct the complement of the chain link of Fig.~\ref{chainlink:fig}, which is the only orientable manifold among them.
\begin{problem} Classify the hyperbolic (orientable) manifolds of smallest complexity among those having $\chi=0$ and $k$ toric cusps, and compute their volume, for each $k$. \end{problem}
\subsection{Matveev-Fomenko conjecture} \label{counterexample:subsection} As we mentioned above, the orderings given by $c$ and ${\rm Vol}$ are qualitatively different on the whole set $\calM$ of hyperbolic manifolds, but might be similar on some subsets of $\calM$. The following conjecture was proposed by Matveev and Fomenko in~\cite{MaFo}. \begin{conjecture}[Matveev-Fomenko~\cite{MaFo}] \label{MaFo:conj} Let $M$ be a hyperbolic manifold with one cusp. Among Dehn fillings $N$ and $N'$ of $M$, if $c(N)<c(N')$ then ${\rm Vol}(N)<{\rm Vol}(N')$. \end{conjecture} The complexity-$10$ closed census produces a counterexample to Conjecture~\ref{MaFo:conj}. \begin{proposition} Let $N(p/q)$ be the $p/q$-surgery on the figure-8 knot. We have \begin{eqnarray*} {\rm Vol}\big(N(5/2)\big) = 1.5294773\ldots & \quad & c\big(N(5/2)\big) = 10 \\ {\rm Vol}\big(N(7)\big) = 1.4637766\ldots & \quad & c\big(N(7)\big) > 10 \end{eqnarray*} \end{proposition} \begin{proof} We first note that $N(p/q) = N(-p/q)$ is the $(1,2,1-p/q)$-surgery on the chain link. The manifold $N(7)$ does not belong to Table~\ref{hyperbolic:table} (it is the manifold labeled as N.11 in our census of surgeries on the chain link of~\cite{weblist}), and hence has $c>10$, whereas $N(5/2)$ is the manifold N.12 and has $c=10$. \end{proof}
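The two volumes above are easily checked with SnapPea; the following sketch is only illustrative and assumes the modern SnapPy package.
\begin{verbatim}
import snappy

N = snappy.Manifold("4_1")     # figure-8 knot complement
N.dehn_fill((5, 2))            # N(5/2)
print(N.volume())              # expect ~1.5294773

N = snappy.Manifold("4_1")
N.dehn_fill((7, 1))            # N(7)
print(N.volume())              # expect ~1.4637766
\end{verbatim}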
\section{Lower bounds} \label{lower:bounds:section} Providing upper bounds for the complexity of a given manifold $M$ is relatively easy: from any combinatorial description of $M$ one recovers a spine of $M$ with $n$ vertices, and certainly $c(M)\leqslant n$. Finding lower bounds is a much more difficult task. The only $\partial$-irreducible manifolds whose complexity is known are those listed in the censuses of Sections~\ref{closed:section} and~\ref{hyperbolic:section}, and some infinite families of hyperbolic manifolds with boundary described below. In particular, for a closed irreducible $M$, the value $c(M)$ is only known when $c(M)\leqslant 10$, \emph{i.e.} for a finite number of manifolds. \subsection{The closed case} The only available lower bound for closed irreducible orientable manifolds is the following one, due to Matveev and
Pervova. We denote by $|{\rm Tor}(H_1(M))|$ the order of the torsion subgroup of $H_1(M)$, while $b_1$ is the rank of the free part, \emph{i.e.}~the first Betti number of $M$. \begin{theorem}[Matveev-Pervova~\cite{MatPer}] Let $M$ be a closed orientable irreducible manifold different from $L(3,1)$.
Then $c(M)\geqslant 2\cdot\log_5|{\rm Tor}(H_1(M))|+b_1-1$. \end{theorem} Recall that Theorem~\ref{non:hyperbolic:teo} holds only for $c\leqslant 10$. Actually, the same formulas in the statement give an upper bound for $c(M)$. Some such upper bounds for lens spaces, torus bundles, and simple Seifert manifolds were previously found by Matveev and Anisov, who proposed the following conjectures. \begin{conjecture}[Matveev~\cite{Mat:book}] We have
$$c\big(L(p,q)\big) = |p,q|-2 \quad {\rm and} \quad c\big(S^2,(2,1),(2,1),(m,1),-1\big) = m$$ \end{conjecture} \begin{conjecture}[Anisov~\cite{An:towards}] The complexity of a torus bundle $M$ over $S^1$ with monodromy $A\in\GL_2(\matZ)$ is
$c(M) = \max\{\|A\|+5, 6\}.$ \end{conjecture}
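For instance, for the lens space $L(5,2)$ the Matveev-Pervova bound gives $c\geqslant 2\cdot\log_5 5+0-1=1$, which agrees with the value $c\big(L(5,2)\big)=|5,2|-2=1$ given by Theorem~\ref{non:hyperbolic:teo} and predicted by Matveev's conjecture: both the lower bound and the conjectural formula are sharp in this case.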
\subsection{Families of hyperbolic manifolds with boundary of known complexity} \label{families:subsection} The following corollaries of Theorem~\ref{vol:c:teo} were first noted by Anisov. \begin{corollary}[Anisov~\cite{An}] The complexity of a hyperbolic manifold decomposing into $n$ ideal regular tetrahedra is $n$. \end{corollary} \begin{corollary}[Anisov~\cite{An}] The punctured torus bundle with monodromy $\matr 2111^n$ is a hyperbolic manifold of complexity $2n$. \end{corollary} For each $n\geqslant 2$, Frigerio, Martelli, and Petronio defined~\cite{FriMaPe} the family $\calM_n$ of all orientable compact manifolds admitting an ideal triangulation with one edge and $n$ tetrahedra. \begin{theorem}[Frigerio-Martelli-Petronio~\cite{FriMaPe}] \label{FriMaPe:teo} Let $M\in\calM_n$. Then $M$ is hyperbolic with a genus-$n$ surface as geodesic boundary, and without cusps. It has complexity $n$. Its homology, volume, Heegaard genus, and Turaev-Viro invariants also depend only on $n$. \end{theorem} The manifolds in $\calM_n$ are distinguished by their Kojima canonical decomposition (see Subsection~\ref{Kojima:subsection}), which is precisely the triangulation with one edge defining them. Therefore combinatorially different such triangulations give different manifolds. \begin{theorem}[Frigerio-Martelli-Petronio~\cite{FriMaPe, FriMaPe3}] \label{FriMaPe:growth:teo} Manifolds in $\calM_n$ correspond bijectively to triangulations with one edge and $n$ tetrahedra. The cardinality $\#\calM_n$ grows as $n^n$. \end{theorem} We say that a sequence $a_n$ \emph{grows as $n^n$} when there exist constants $0<k<K$ such that $n^{k\cdot n}<a_n<n^{K\cdot n}$ for all $n\gg 0$. \begin{corollary}[Frigerio-Martelli-Petronio~\cite{FriMaPe3}] \label{hyp:growth:teo} The number of hyperbolic manifolds of complexity $n$ grows as $n^n$. \end{corollary} \begin{remark} \label{best:constants:rem} From the families introduced here we see that the inequalities of Theorem~\ref{vol:c:teo} cannot be strengthened. The punctured torus bundles $M$ above have ${\rm Vol}(M)=v_{\rm T}\cdot c(M)$, and the manifolds in $\calM_n$ have ${\rm Vol}(M) = v_n\cdot c(M)$, with $v_n$ equal to the volume of a truncated tetrahedron with all angles $\pi/(3n)$, so that $v_n\to v_{\rm O}$ for $n\to\infty$. \end{remark}
The set $\calM_n$ is also the set mentioned in Theorem~\ref{connected:surface:teo} of all manifolds having both minimal complexity and minimal volume among those with a genus-$n$ surface as boundary. We therefore get from Table~\ref{geodesic:table} that $\#\calM_n$ is $8, 74, 2340$ for $n=2,3,4$.
The class $\calM_n$ is actually contained as $\calM_n = \calM_{n,0}$ in a bigger family $\calM_{g,k}$, defined in~\cite{FriMaPe3}. The set $\calM_{g,k}$ consists of all orientable hyperbolic manifolds of complexity $g+k$ with connected geodesic boundary of genus $g$ and $k$ cusps. Theorems~\ref{FriMaPe:teo} and~\ref{FriMaPe:growth:teo} hold similarly for all such sets. For any fixed $g$ and $k$, $\calM_{g,k}$ is the set of all manifolds with minimum complexity among those with that topological boundary. Therefore Conjecture~\ref{volume:complexity:conj} would imply the following. \begin{conjecture}[Frigerio-Martelli-Petronio~\cite{FriMaPe3}] The manifolds of smallest volume among those with a genus-$g$ geodesic surface as boundary and $k$ cusps are those in $\calM_{g,k}$. \end{conjecture}
\section{Minimal spines} \label{minimal:section} We describe here some known results about minimal spines, which are crucial for computing the censuses of Sections~\ref{closed:section} and~\ref{hyperbolic:section}. \subsection{The algorithm} \label{algorithm:subsection} The algorithm used to classify all manifolds with increasing complexity $n$ typically works as follows: \begin{enumerate} \item list all special spines with $n$ vertices (or triangulations with $n$ tetrahedra); \item remove from the list the many spines that are easily seen to be non-minimal, or not to thicken to an irreducible (or hyperbolic) manifold; \item try to recognize the manifolds obtained from thickening the remaining spines; \item eliminate from that list of manifolds the duplicates, and the manifolds that have already been found previously in some complexity-$n'$ census for some $n'<n$. \end{enumerate} Typically, step (1) produces a huge list of spines, $99.99\ldots$ \% of which are canceled via some quick criterion of non-minimality during step (2), and one is left with a much smaller list, so that steps (3) and (4) can be done by hand.
\subsection{Cutting dead branches} \label{branches:subsection} Step (1) of the algorithm above needs a huge amount of computer time already for $c=5$, due to the very big number of spines listed. Therefore one actually uses the non-minimality criteria (step (2)) \emph{while} listing the special spines with $n$ vertices (step (1)), to cut many ``dead branches''. Step (1) remains the most expensive one in terms of computer time, so it is worth describing it with some details.
A special spine or its dual (possibly ideal) triangulation $T$ (see Remark~\ref{spine:triangulation:rem}) with $n$ tetrahedra can be encoded roughly as follows. Take the face-pairing 4-valent graph $G$ of the tetrahedra in $T$. It has $n$ vertices and $2n$ edges. After fixing a simplex on each vertex, a label in $S_3$ on each (oriented) edge of $G$ encodes how the faces are glued. We therefore get $6^{2n}$ gluings (the same combinatorial $T$ is usually realized by many distinct gluings). Point (1) in the algorithm consists of two steps: \begin{itemize} \item[(1a)] classify all 4-valent graphs $G$ with $n$ vertices; \item[(1b)] for each graph $G$, fix a simplex on each vertex, and try the $6^{2n}$ possible labelings on edges. \end{itemize} Step (1b) is by far the most expensive one, because it contains many ``dead branches''; most of them are cut as follows: a partial labeling of some $k$ of the $2n$ edges defines a partial gluing of the tetrahedra. If such partial gluing already fulfills some local non-minimality criterion, we can forget about every labeling containing this partial one.
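The following Python sketch (purely illustrative: it is not the actual program of~\cite{weblist}, and the list of edges and the pruning test are assumptions of the example) shows the backtracking structure of step (1b): labels in $S_3$ are assigned to the edges one at a time, and a branch is abandoned as soon as the partial labeling violates a local non-minimality criterion.
\begin{verbatim}
from itertools import permutations

S3 = list(permutations((0, 1, 2)))     # the 6 possible gluings along an edge

def labelings(edges, partial, is_bad):
    """Enumerate the labelings of the 2n edges of G, cutting dead branches."""
    if is_bad(partial):                # local non-minimality criterion
        return
    if len(partial) == len(edges):
        yield dict(partial)
        return
    e = edges[len(partial)]
    for s in S3:
        yield from labelings(edges, partial + [(e, s)], is_bad)
\end{verbatim}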
\begin{remark} \label{ografi:rem} A spine of an \emph{orientable} manifold can be encoded more efficiently by fixing an immersion of the graph $G$ in $\matR^2$, and assigning a colour in $\matZ_2$ to each vertex and a colour in $\matZ_3$ to each edge~\cite{BePe}. \end{remark}
Local non-minimality criteria used to cut the branches are listed in Subsection~\ref{criteria:subsection}. We discuss in Subsection~\ref{graph:subsection} another powerful tool, which works in the closed case only: it turns out that most $4$-valent graphs $G$ can be quickly checked \emph{a priori} not to give rise to any minimal spine (of closed manifolds). \subsection{Local non-minimality criteria} \label{criteria:subsection} We start with the following results. \begin{proposition}[Matveev~\cite{Mat90}] \label{small:faces:prop} Let $P$ be a minimal special spine of a 3-manifold $M$. Then $P$ contains no embedded face with at most $3$ edges. \end{proposition} \begin{proposition}[Matveev~\cite{Mat90}] \label{counterpass:prop} Let $P$ be a minimal special spine of a closed orientable 3-manifold $M$. Let $e$ be an edge of $P$. A face $f$ cannot be incident $3$ times to $e$, and it cannot run twice on $e$ with opposite directions. \end{proposition} In the orientable setting, both Propositions~\ref{small:faces:prop} and~\ref{counterpass:prop} are special cases of the following. Recall that $S(P)$ is the subset of a special spine $P$ consisting of all points of type (2) and (3) shown in Fig.~\ref{special:fig}. \begin{proposition}[Martelli-Petronio~\cite{MaPe}] Let $P$ be a minimal spine of a closed orientable 3-manifold $M$. Every simple closed curve $\gamma\subset P$ bounding a disc in the ball $M\setminus P$ and intersecting $S(P)$ transversely in at most $3$ points is contained in a small neighborhood of a point of $P$. \end{proposition} Analogous results in the possibly non-orientable setting are proved by Burton~\cite{Bu:graphs}.
\subsection{Four-valent graphs} \label{graph:subsection} Quite surprisingly, some 4-valent graphs can be checked \emph{a priori} not to give any minimal special spine of a closed 3-manifold. \begin{remark} The face-pairing graph of a (possibly ideal) triangulation is also the set $S(P)$ in the dual special spine $P$. \end{remark} \begin{proposition}[Burton~\cite{Bu:graphs}] \label{Burton:prop} The face-pairing graph $G$ of a minimal triangulation with at least $3$ tetrahedra does not contain any portions of the types shown in Fig.~\ref{portions:fig}-(1,2,3), except if $G$ itself is as in Fig.~\ref{portions:fig}-(4). \end{proposition} \begin{figure}
\caption{Portions of graphs forbidden for minimal triangulations/spines of closed manifolds.}
\label{portions:fig}
\end{figure} A portion of $G$ is of type shown in Fig.~\ref{portions:fig}-(2,3,4) when it is as in that picture, with chains of arbitrary length. In the algorithm of Subsection~\ref{branches:subsection}, step (1b) can be therefore restricted to the \emph{useful} 4-valent graphs, \emph{i.e.}~the ones that do not contain the portions forbidden by Proposition~\ref{Burton:prop}. Table~\ref{Burton:table}, taken from~\cite{Bu:graphs}, shows that some 40 \% of the graphs are useful. \begin{table} \begin{center} \begin{tabular}{cccccccccc}
\phantom{\Big|} $n$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \hline
\phantom{\Big|} useful & $2$ & $4$ & $12$ & $39$ & $138$ & $638$ & $3366$ & $20751$ & $143829$ \\
all & $4$ & $10$ & $28$ & $97$ & $359$ & $1635$ & $8296$ & $48432$ & $316520$ \end{tabular} \end{center} \caption{Useful graphs among all 4-valent graphs with $n\leqslant 11$ vertices.} \label{Burton:table} \end{table}
\section{Bricks} \label{bricks:section} As shown in Sections~\ref{definitions:section} and~\ref{minimal:section}, classifying all closed $\matP^2$-irreducible\ manifolds with complexity $n$ reduces to listing all minimal special spines of such manifolds with $n$ vertices. Non-minimality criteria such as those listed in Section~\ref{minimal:section} are then crucial to eliminate the many non-minimal spines (by cutting ``dead branches'') and save a lot of computer time. Actually, closed manifolds often have many minimal spines, and it is not necessary to list them all: a criterion that eliminates some, but not all, minimal spines of the same manifold is also suitable for us. This is the basic idea which underlies the decomposition of closed $\matP^2$-irreducible\ manifolds into \emph{bricks}, introduced by Martelli and Petronio in~\cite{MaPe}, and described in the orientable case in this Section. (For the nonorientable one, see~\cite{MaPe:nonori}.)
\subsection{A quick introduction} The theory is roughly described as follows: every closed irreducible manifold $M$ decomposes along tori into pieces on which the complexity is additive. Each torus is marked with a $\theta$-graph in it, and the complexity of each piece is not the usual one, because it depends on those graphs. A manifold $M$ which does not decompose is a \emph{brick}. Every closed irreducible manifold decomposes into bricks. The decomposition is not unique, but there can be only a finite number of such decompositions. In order to classify all manifolds with $c\leqslant 10$, one classifies all bricks with $c\leqslant 10$, and then assembles them in all possible (finite) ways to recover the manifolds.
For $c\leqslant 10$, bricks are atoroidal, hence either Seifert or hyperbolic. The decomposition into bricks is typically a mixture of the {\rm JSJ}, the graph-manifolds decomposition, and the thick-thin decomposition for hyperbolic manifolds. Very few closed manifolds do not decompose, \emph{i.e.}~are themselves bricks.
\begin{proposition} There are $25$ closed bricks with $c\leqslant 10$. They are: $24$ Seifert manifolds of type $\big(S^2,(2,1),(m,1),(n,1),-1\big)$, and the hyperbolic manifold N.34 of Table~\ref{hyperbolic:table}. \end{proposition} Among the closed bricks we find the Poincar\'e homology sphere $\big(S^2,(2,1),(3,1),(5,1),-1\big).$ \begin{proposition} There are $21$ non-closed bricks with $c\leqslant 10$. \end{proposition} There are $4978$ closed irreducible manifolds with $c\leqslant 10$, see Table~\ref{closed:table}. Therefore $4953 = 4978-25$ such manifolds are obtained with the $21$ bricks above.
Before giving precise definitions, we note that the \emph{layered triangulations}~\cite{Bu, JaRu} of the solid torus $H$ are particular decompositions of $H$ into bricks. Our experimental results show the following. \begin{proposition} Every closed irreducible atoroidal manifold with $c\leqslant 10$ has a minimal triangulation containing a (possibly degenerate~\cite{Bu}) layered triangulation, except for some $\big(S^2,(2,1),(m,1),(n,1),-1\big)$ and the hyperbolic N.34 of Table~\ref{hyperbolic:table}. \end{proposition}
\subsection{$\theta$-graphs in the torus} In this paper, a \emph{$\theta$-graph} $\theta$ in the torus $T$ is a graph with two vertices and three edges inside $T$, having an open disc as a complement. That is, it is a trivalent spine of $T$. Dually, this is a one-vertex triangulation of $T$.
The set of all $\theta$-graphs in $T$ up to isotopy can be described as follows. After choosing a meridian and a longitude, every \emph{slope} on $T$ (\emph{i.e.}~isotopy class of simple closed essential curves) is determined by a number $p/q\in\matQ\cup\{\infty\}$. Those numbers are the ideal vertices of the Farey tesselation of the Poincar\'e disc sketched in Fig.~\ref{tesselation:fig}-left. A $\theta$-graph contains three slopes, which are the vertices of an ideal triangle of the tesselation. This gives a correspondence between the $\theta$-graphs in $T$ and the triangles of the tesselation. Two $\theta$-graphs correspond to two adjacent triangles when they share two slopes, \emph{i.e.}~when they are related by a \emph{flip}, shown in Fig.~\ref{tesselation:fig}-right.
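For instance, the $\theta$-graph containing the slopes $0$, $1$, and $\infty$ corresponds to the ideal triangle with these vertices, and the flip along its edge with endpoints $0$ and $\infty$ replaces the slope $1$ with the slope $-1$, the third vertex of the other triangle incident to that edge.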
\begin{figure}
\caption{The Farey tesselation of the Poincar\'e disc into ideal triangles (left) and a flip (right).}
\label{tesselation:fig}
\end{figure}
\subsection{Manifolds with marked boundary} Let $M$ be a connected compact 3-manifold with (possibly empty) boundary consisting of tori. By associating to each torus component of $\partial M$ a $\theta$-graph, we get a {\em manifold with marked boundary}.
Let $M$ and $M'$ be two marked manifolds, and $T\subset \partial M, T'\subset \partial M'$ be two boundary tori. A homeomorphism $\psi:T\to T'$ sending the marking of $T$ to the one of $T'$ is an \emph{assembling} of $M$ and $M'$. The result is a new marked manifold $N=M\cup_\psi M'$. We define analogously a \emph{self-assembling} of $M$ along two tori $T,T'\subset\partial M$, the only difference being that for some technical reason we allow the map to send the marking $\theta\subset T$ either to the marking $\theta'\subset T'$ itself or to one of the $3$ other $\theta$-graphs obtained from $\theta'$ via a flip.
\subsection{Spines and complexity for marked manifolds} The notion of spine extends from the class of closed manifold to the class of manifolds with marked boundary. \begin{defn} Recall from Subsection~\ref{definitions:subsection} that a compact polyhedron is \emph{simple} when the link of each point is contained in \includegraphics[width =.3 cm]{mercedes_small.eps}. A sub-polyhedron $P$ of a manifold with marked boundary $M$ is called a {\em spine} of $M$ if: \begin{itemize} \item $P\cup\partial M$ is simple, \item $M\setminus(P \cup \partial M)$ is an open ball, \item $P \cap \partial M$ is a graph contained in the marking of $\partial M$. \end{itemize} \end{defn} Note that $P$ is not in general a spine of $M$ in the usual sense\footnote{To avoid confusion, the term \emph{skeleton} was used in~\cite{MaPe}.}. The {\em complexity} of a 3-manifold with marked boundary $M$ is of course defined as the minimal number of vertices of a simple spine of $M$. Three fundamental properties extend from the closed case to the case with marked boundary: complexity is still additive under connected sums, it is finite-to-one on orientable irreducible manifolds, and every orientable irreducible $M$ with $c(M)>0$ has a minimal special spine~\cite{MaPe}. (Here, a spine $P\subset M$ is \emph{special} when $P\cup\partial M$ is: the spine $P$ is actually a \emph{special spine with boundary}, with $\partial P=\partial M\cap P$ consisting of all the $\theta$-graphs in $\partial M$.)
\subsection{Bricks} An important easy fact is that if $M$ is obtained by assembling $M_1$ and $M_2$, and $P_i$ is a spine of $M_i$, then $P_1\cup P_2$ is a spine of $M$. This implies the first part of the following result.
\begin{proposition}[Martelli-Petronio~\cite{MaPe}] If $M$ is obtained by assembling $M_1$ and $M_2$, we have $c(M)\leqslant c(M_1)+c(M_2)$. If $M$ is obtained by self-assembling $N$, we have $c(M)\leqslant c(N)+6$. \end{proposition}
When $c(M)=c(M_1)+c(M_2)$ or $c(M)=c(N)+6$, and the manifolds involved are irreducible\footnote{This hypothesis is actually needed only in one case, see~\cite{MaPe}.}, the (self-)assembling is called \emph{sharp}. \begin{defn} An orientable irreducible marked manifold $M$ is a \emph{brick} when it is not the result of any sharp (self-)assembling. \end{defn}
\begin{theorem}[Martelli-Petronio~\cite{MaPe}] Every closed orientable irreducible $M$ is obtained from some bricks via a combination of sharp (self-)assemblings. \end{theorem} There are only a finite number of such combinations giving the same $M$.
\subsection{The algorithm that finds the bricks} The algorithm described in Subsection~\ref{branches:subsection} also works for classifying all bricks of increasing complexity, with some modifications, which we now sketch. As we said above, every brick with $c>0$ has a minimal spine $P$ such that $P\cup\partial M$ is special. The $4$-valent graph $H=S(P\cup\partial M)$ contains the $\theta$-graphs marking the boundary $\partial M$. By collapsing (\emph{i.e.}~identifying) each $\theta$-graph of $H$ to a point, we get a simpler $4$-valent graph $G$. We mark the edges of $G$ containing these new points with a symbol $\star$. It is then possible to encode the whole $P$ by assigning labels in $S_3$ on the remaining edges of $G$, as in Subsection~\ref{branches:subsection}. The spine $P$ is uniquely determined by such data.
Every edge of $G$ can have a label in $S_3\cup\{\star\}$, giving $7^{2n}$ possibilities to analyze during step (1b) of the algorithm (actually, they are $2^n(3+1)^{2n}$ by Remark~\ref{ografi:rem}). Although there are more possibilities to analyze than in the closed case ($7^{2n}$ against $6^{2n}$), the non-minimality criteria for bricks listed below are so powerful that step (1b) is actually experimentally much quicker for bricks than for closed manifolds. This should be related to the experimental fact that there are many more manifolds than bricks.
\begin{proposition}[Martelli-Petronio~\cite{MaPe}] Let $P$ be a minimal special spine of a brick with $c>3$. The $3$ faces incident to an edge $e$ of $P$ are all distinct. A face can be incident to at most one $\theta$-graph in $\partial P$. \end{proposition}
\begin{theorem}[Martelli-Petronio~\cite{MaPe}] \label{bridge:teo} Let $G$ be the $4$-valent graph associated to a minimal special spine of a brick with $c>3$. Then: \begin{enumerate} \item no pair of edges disconnects $G$; \item if $c\leqslant 10$ and a quadruple of edges disconnects $G$, one of the two resulting components must be of one of the forms shown in Fig.~\ref{2bridge:fig}. \end{enumerate} \end{theorem} \begin{figure}
\caption{If $4$ edges disconnect $G$, then one of the two pieces is of one of these types.}
\label{2bridge:fig}
\end{figure} Point 2 of Theorem~\ref{bridge:teo} is proved for $c\leqslant 9$ in~\cite{MaPe} and conjectured there to be true for all $c$: its extension to the case $c=10$ needed here is technical and we omit it. We can restrict step (1b) of the algorithm to the \emph{useful} $4$-valent graphs, \emph{i.e.}~the ones that are not forbidden by Theorem~\ref{bridge:teo}. Table~\ref{useful:bricks:table} shows that only $2.1$ \% of the graphs are useful for $c=10,11$.
\begin{table} \begin{center} \begin{tabular}{cccccccccc}
\phantom{\Big|} $n$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ \\ \hline
\phantom{\Big|} useful & $1$ & $2$ & $4$ & $11$ & $27$ & $57$ & $205$ & $1008$ & $6549$ \\
all & $4$ & $10$ & $28$ & $97$ & $359$ & $1635$ & $8296$ & $48432$ & $316520$ \end{tabular} \end{center} \caption{Useful graphs among all 4-valent graphs with $n\leqslant 11$ vertices.} \label{useful:bricks:table} \end{table}
\subsection{Bricks with $c\leqslant 10$.} \label{bricks:found:subsection} We list here the bricks found. There are two kinds of bricks: the closed ones, and the ones with boundary. The closed ones correspond to the closed irreducible 3-manifolds that do not decompose. \begin{theorem} \label{closed:bricks:teo} The closed bricks having $c\leqslant 10$ are: \begin{itemize} \item $\big(S^2,(2,1),(3,1),(m,1),-1\big)$ with $m\geqslant 5, m\neq 6$, having $c=m$; \item $\big(S^2,(2,1),(n,1),(m,1),-1\big)$ not of the type above and with $\{n,m\}\neq\{3,6\},\{4,4\}$, having $c=n+m-2$; \item the closed hyperbolic manifold N.34 from Table~\ref{hyperbolic:table}, with volume $1.75712\ldots$ and homology $\matZ_7$, obtained as a $(1,1/2,-6)$-surgery on the chain link, having $c=10$. \end{itemize} \end{theorem} \begin{remark} The manifolds $\big(S^2,(2,1),(n,1),(m,1),-1\big)$ with $\{n,m\}=\{3,6\}$ or $\{4,4\}$ are not bricks. Actually, they are flat torus bundles, whereas every other such manifold is atoroidal. \end{remark} In the following statement, we denote by $N(\alpha,\beta,\gamma)$ the following marked manifold: take the chain link of Fig.~\ref{chainlink:fig}; if $\alpha\in\matQ$, perform an $\alpha$-surgery on one component, and if $\alpha = \theta^{(i)}$, drill that component and mark the new torus with the $\theta$-graph containing the slopes $\infty, i$, and $i+1$. Do the same for $\beta$ and $\gamma$ (the choice of the components does not matter, see Fig.~\ref{chainlink:fig}). \begin{theorem} \label{non:closed:bricks:teo} The bricks with boundary having $c\leqslant 10$ are: \begin{description} \item[$c=0$:] one marked $T\times [0,1]$ and two marked solid tori; \item[$c=1$:] one marked $T\times [0,1]$; \item[$c=3$:] one marked (pair of pants)$\times S^1$; \item[$c=8$:] one marked $\big(D,(2,1),(3,1)\big)$, and $N(1,-4,\theta^{(-1)})$; \item[$c=9$:] four bricks of type $N(\alpha, \beta, \gamma)$, with $(\alpha,\beta,\gamma)$ being one of the following: $$(1,-5, \theta^{(-1)}), \ (1, \theta^{(-2)}, \theta^{(-2)}), \ (\theta^{(-3)}, \theta^{(-2)}, \theta^{(-2)}), \ (\theta^{(-2)}, \theta^{(-2)}, \theta^{(-2)});$$ \item[$c=10$:] eleven bricks of type $N(\alpha, \beta, \gamma)$, with $(\alpha,\beta,\gamma)$ being one of the following: $$(1,2,\theta^{(i)})\ {\rm with}\ i\in\{-3,-2,-1,0\}, \quad (1,-6, \theta^{(-1)}), $$ $$(-5, \theta^{(-2)}, \theta^{(-1)}), \quad (-5, \theta^{(-1)}, \theta^{(-1)}), \quad (1, \theta^{(-1)}, \theta^{(-1)}), $$ $$(1, \theta^{(-4)}, \theta^{(-1)}), \quad (2, \theta^{(-2)}, \theta^{(-2)}), \quad (\theta^{(-3)}, \theta^{(-1)}, \theta^{(-1)}),$$ and three marked complements of the same link, shown in Fig.~\ref{chain4:fig}. \end{description} \end{theorem}
\begin{remark} Using the bricks with $c\leqslant 1$, one constructs every marked solid torus. This construction is the \emph{layered solid torus} decomposition~\cite{Bu, JaRu}. An atoroidal manifold with $c\leqslant 10$ is either itself a brick, or it decomposes into one brick $B$ of Theorem~\ref{non:closed:bricks:teo} and some layered solid tori. \end{remark}
\begin{remark} The generic graph manifold decomposes into some Seifert bricks with $c\leqslant 3$. As Theorem~\ref{non:hyperbolic:teo} suggests, the only exceptions with $c\leqslant 10$ are the closed bricks listed by Theorem~\ref{closed:bricks:teo}, and some surgeries of the Seifert brick with $c=8$. \end{remark} \begin{remark} Table~\ref{hyperbolic:table} is deduced from Theorems~\ref{closed:bricks:teo} and~\ref{non:closed:bricks:teo}, using SnapPea via a python script available from~\cite{weblist}. \end{remark} \begin{remark} The proof of Theorem~\ref{non:hyperbolic:teo} from~\cite{MaPe:geometric} extends to $c=10$. One has to check that the new hyperbolic bricks with $c=10$ do not contribute to the complexity of non-hyperbolic manifolds, at least for $c=10$: we omit this discussion. \end{remark}
\begin{figure}
\caption{The complement of a chain link with $4$ components.}
\label{chain4:fig}
\end{figure} We end this Section with a conjecture, motivated by our experimental results, which implies that the decomposition into bricks is always finer than the {\rm JSJ}. \begin{conjecture} Every brick is atoroidal. \end{conjecture}
\begin{finalspacing}{}
\name{Bruno Martelli}
\address{Dipartimento di Matematica \\% Universit\`a di Pisa \\% Via F.~Buonarroti 2 \\% 56127 Pisa, Italy }
\email{[email protected]}
\classmark{57M27 (primary), 57M20, 57M50 (secondary)}
\keywords{3-manifolds, spines, complexity, enumeration}
\label{martelli_last}
\end{finalspacing}
\end{document} | arXiv |
Find the sum of the real roots of the polynomial
\[x^6 + x^4 - 115x^3 + x^2 + 1 = 0.\]
Clearly, $x = 0$ is not a root. We can divide the equation by $x^3,$ to get
\[x^3 + x - 115 + \frac{1}{x} + \frac{1}{x^3} = 0.\]Let $y = x + \frac{1}{x}.$ Then
\[y^3 = x^3 + 3x + \frac{3}{x} + \frac{1}{x^3},\]so
\[x^3 + \frac{1}{x^3} = y^3 - 3 \left( x + \frac{1}{x} \right) = y^3 - 3y.\]Thus, our equation becomes
\[y^3 - 3y + y - 115 = 0,\]or $y^3 - 2y - 115 = 0.$ This equation factors as $(y - 5)(y^2 + 5y + 23) = 0.$ The quadratic factor has no real roots, so $y = 5.$ Then
\[x + \frac{1}{x} = 5,\]or $x^2 - 5x + 1 = 0.$ This quadratic does have real roots, and by Vieta's formulas, their sum is $\boxed{5}.$ | Math Dataset |
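As a quick numeric sanity check (an illustrative addition assuming NumPy, not part of the original solution), the real roots of the polynomial can be summed directly:

import numpy as np
roots = np.roots([1, 0, 1, -115, 1, 0, 1])  # coefficients of x^6 + x^4 - 115x^3 + x^2 + 1
print(sum(r.real for r in roots if abs(r.imag) < 1e-9))  # prints approximately 5.0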
Search results for: A. Zarubin
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $$\sqrt{s}=13\,\text{TeV}$$
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, more
A search is presented for a heavy pseudoscalar boson $$\text{A}$$ decaying to a Z boson and a Higgs boson with mass of 125$$\,\text{GeV}$$. In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at $$\sqrt{s}$$ = 13 TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at $$\sqrt{s}$$ = 13 TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, $$\mathrm{b}$$-jets, and missing transverse momentum in proton–proton collisions at 13$$\,\text{TeV}$$
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13$$\,\text{TeV}$$. The data correspond to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$ and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $$\sqrt{s}=13\,\text{TeV}$$, corresponding to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$. The combination is based...
Measurement of inclusive very forward jet cross sections in proton-lead collisions at $$\sqrt{s_{\mathrm{NN}}}$$ = 5.02 TeV
Abstract Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range −6.6 < η < −5.2. Asymmetric beam energies of 4 TeV for protons and 1.58 TeV per nucleon for Pb nuclei were used, corresponding to a...
Measurement of the energy density as a function of pseudorapidity in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of $$\sqrt{s}=13\,\text{TeV}$$ is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. The energy density is studied as a function of pseudorapidity in the ranges $$-6.6<\eta<-5.2$$ and $$3.15<|\eta...
Comparative Analysis of Gene Expression in Vascular Cells of Patients with Advanced Atherosclerosis
M. S. Nazarenko, A. V. Markov, A. A. Sleptcov, I. A. Koroleva, more
Biochemistry (Moscow), Supplement Series B: Biomedical Chemistry > 2019 > 13 > 1 > 74-80
A comparative analysis of gene expression profiles of carotid atherosclerotic plaques and intact internal thoracic arteries of patients with advanced atherosclerosis was performed by using the Human-12 BeadChip Microarray (Illumina, USA). The most down-regulated genes in the carotid atherosclerotic plaques were APOD, FABP4, CIDEC, and FOSB, in contrast, up-regulated gene was SPP1 (|FC| > 64; pFDR <...
Measurement of the $${\mathrm{t}\overline{\mathrm{t}}}$$ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at $$\sqrt{s}=13\,\text{TeV}$$
A measurement of the top quark–antiquark pair production cross section $$\sigma_{\mathrm{t}\overline{\mathrm{t}}}$$ in proton–proton collisions at a centre-of-mass energy of 13$$\,\text{TeV}$$ is presented. The data correspond to an integrated luminosity of $$35.9\,\text{fb}^{-1}$$, recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events ($$\mathrm...
Search for vector-like quarks in events with two oppositely charged leptons and jets in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
A search for the pair production of heavy vector-like partners $$\mathrm{T}$$ and $$\mathrm{B}$$ of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$. The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$. Final states...
Measurements of the pp → WZ inclusive and differential production cross sections and constraints on charged anomalous triple gauge couplings at $$\sqrt{s}$$ = 13 TeV
Abstract The WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy $$\sqrt{s}$$ = 13 TeV using data collected with the CMS detector, corresponding to an integrated luminosity of 35.9 fb−1. The inclusive cross section is measured to be σtot(pp → WZ) = 48.09 +1.00/−0.96 (stat) +0.44/−0.37 (theo) +2.39/−2.17 (syst) ± 1.39 (lum) pb, resulting in...
Search for nonresonant Higgs boson pair production in the $$\mathrm{b}\overline{\mathrm{b}}\mathrm{b}\overline{\mathrm{b}}$$ final state at $$\sqrt{s}$$ = 13 TeV
Abstract Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a $$\mathrm{b}\overline{\mathrm{b}}$$ pair, are presented. This search uses data from proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1, collected by the CMS detector at the LHC. No signal is observed, and...
Search for contact interactions and large extra dimensions in the dilepton mass spectra from proton-proton collisions at $$\sqrt{s}=13$$ TeV
Abstract A search for nonresonant excesses in the invariant mass spectra of electron and muon pairs is presented. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the CMS experiment in 2016, corresponding to a total integrated luminosity of 36 fb−1. No significant deviation from the standard model is observed. Limits are set at 95% confidence...
Measurement of the top quark mass in the all-jets final state at $$\sqrt{s}=13\,\text{TeV}$$ and combination with the lepton+jets channel
A top quark mass measurement is performed using $$35.9\,\text{fb}^{-1}$$ of LHC proton–proton collision data collected with the CMS detector at $$\sqrt{s}=13\,\text{TeV}$$. The measurement uses the $${\mathrm{t}\overline{\mathrm{t}}}$$ all-jets final state. A kinematic fit is performed to reconstruct the decay of the $${\mathrm{t}\overline{\mathrm{t}}}$$ system...
Search for resonant production of second-generation sleptons with same-sign dimuon events in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
A search is presented for resonant production of second-generation sleptons ($$\widetilde{\mu}_{\mathrm{L}}$$, $$\widetilde{\nu}_{\mu}$$) via the R-parity-violating coupling $${\lambda^{\prime}_{211}}$$ to quarks, in events with two same-sign muons and at least two jets in the final state. The smuon (muon sneutrino) is expected to decay into a muon and a neutralino (chargino),...
Search for resonant $$\mathrm{t}\overline{\mathrm{t}}$$ production in proton-proton collisions at $$\sqrt{s}=13$$ TeV
Abstract A search for a heavy resonance decaying into a top quark and antiquark $$\left(\mathrm{t}\overline{\mathrm{t}}\right)$$ pair is performed using proton-proton collisions at $$\sqrt{s}=13$$ TeV. The search uses the data set collected with the CMS detector in 2016, which corresponds to an integrated luminosity of 35.9 fb−1. The analysis considers three exclusive...
Search for excited leptons in ℓℓγ final states in proton-proton collisions at $$\sqrt{\mathrm{s}}=13$$ TeV
Abstract A search is presented for excited electrons and muons in ℓℓγ final states at the LHC. The search is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, collected with the CMS detector in 2016. This is the first search for excited leptons at $$\sqrt{s}$$ = 13 TeV. The observation is consistent...
Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton–proton collisions at $$\sqrt{s}=13\,\text{TeV}$$
A search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks is performed in proton–proton collisions at a center-of-mass energy of 13$$\,\text{TeV}$$ collected with the CMS detector at the LHC. The analyzed data sample corresponds to an integrated luminosity of 35.9$$\,\text{fb}^{-1}$$. The signal is characterized by a large missing transverse...
Publication language
CMS (140)
HADRON-HADRON SCATTERING (29)
HIGGS (20)
SUPERSYMMETRY (12)
EXOTICA (8)
CROSS SECTION (6)
BSM (5)
EXTRA DIMENSIONS (5)
B2G (4)
ELECTROWEAK (4)
HEAVY ION (4)
RESONANCES (4)
W′ (4)
B-PHYSICS (3)
DIBOSON (3)
MUON (3)
ROPA NAFTOWA (3)
TAU (3)
2HDM (2)
ALPHA-S (2)
AQGC (2)
B-TAGGING (2)
CHARGE ASYMMETRY (2)
DETECTORS (2)
DIJET (2)
DILEPTONS (2)
DIMUONS (2)
DIPHOTON (2)
GRAVITON (2)
HEAVY-IONS (2)
LEPTON-FLAVOUR-VIOLATION (2)
LOW MISSING TRANSVERSE ENERGY (2)
MSSM (2)
MUONS (2)
PHOTONS (2)
QUARKONIUM PRODUCTION (2)
RIDGE (2)
STRONG COUPLING CONSTANT (2)
TPRIME (2)
VH (2)
Τ (2)
13 TEV (1)
3-JET MASS (1)
ADD (1)
ADDITIVE TECHNOLOGY (1)
ALPHAT (1)
ANOMALOUS COUPLING (1)
ANOMALOUS COUPLINGS (1)
ARRAYS (1)
ASYMMETRY (1)
ATGC (1)
ATOMIC EMISSION SPECTRAL ANALYSIS (1)
B HADRONS (1)
B0 DECAYS (1)
BOTTOMONIA (1)
BOTTOMONIUM (1)
BSM SEARCHES (1)
CAROTID ATHEROSCLEROSIS (1)
CATHODE STRIP CHAMBER (1)
CATHODES (1)
CHARGE RATIO (1)
CHARM-TAGGING (1)
CKM (1)
CONTACT INTERACTIONS (1)
CORRELATIONS (1)
COSMIC RAYS (1)
CP VIOLATION (1)
CURRENT SENSITIVITY (1)
DALITZ DECAY (1)
Bulletin of the Russian Academy of Sciences: Physics (3)
Inorganic Materials (3)
Optoelectronics, Instrumentation and Data Processing (2)
Prace Instytutu Nafty i Gazu (2)
Prace Naukowe Instytutu Nafty i Gazu (2)
Biochemistry (Moscow), Supplement Series B: Biomedical Chemistry (1)
Differential Equations (1)
Journal of Structural Chemistry (1)
Measurement Techniques (1)
Physics of Particles and Nuclei (1)
Russian Engineering Research (1) | CommonCrawl |
\begin{document}
\title {Distribution of the Height of Local Maxima of Gaussian Random Fields \thanks{Research partially supported by NIH grant R01-CA157528.}} \author{Dan Cheng\\ North Carolina State University
\and Armin Schwartzman \\ North Carolina State University }
\maketitle
\begin{abstract}
Let $\{f(t): t\in T\}$ be a smooth Gaussian random field over a parameter space $T$, where $T$ may be a subset of Euclidean space or, more generally, a Riemannian manifold. We provide a general formula for the distribution of the height of a local maximum ${\mathbb P}\{f(t_0)>u | t_0 \text{ is a local maximum of } f(t) \}$ when $f$ is non-stationary. Moreover, we establish asymptotic approximations for the overshoot distribution of a local maximum ${\mathbb P}\{f(t_0)>u+v | t_0 \text{ is a local maximum of } f(t) \text{ and } f(t_0)>v\}$ as $v\to \infty$. Assuming further that $f$ is isotropic, we apply techniques from random matrix theory related to the Gaussian orthogonal ensemble to compute such conditional probabilities explicitly when $T$ is Euclidean or a sphere of arbitrary dimension. Such calculations are motivated by the statistical problem of detecting peaks in the presence of smooth Gaussian noise. \end{abstract}
{\bf Keywords:} Height; overshoot; local maxima; Riemannian manifold; Gaussian orthogonal ensemble; isotropic field; Euler characteristic; sphere.
\section{Introduction} In certain statistical applications such as peak detection problems [cf. Schwartzman et al. (2011) and Cheng and Schwartzman (2014)], we are interested in the tail distribution of the height of a local maximum of a Gaussian random field. This is defined as the probability that the height of the local maximum exceeds a fixed threshold at that point, conditioned on the event that the point is a local maximum of the field. Roughly speaking, such conditional probability can be stated as \begin{equation}\label{Eq:Palm t0}
{\mathbb P}\{f(t_0)>u | t_0 \text{ is a local maximum of } f(t) \}, \end{equation} where $\{f(t): t\in T\}$ is a smooth Gaussian random field parameterized on an $N$-dimensional set $T\subset{\mathbb R}^N$ whose interior is non-empty, $t_0\in \overset{\circ}{T}$ (the interior of $T$) and $u\in {\mathbb R}$. In peak detection problems, this distribution is useful in assessing the significance of local maxima as candidate peaks. In addition, such distribution has been of interest for describing fluctuations of the cosmic background in astronomy [cf. Bardeen et al. (1985) and Larson and Wandelt (2004)] and describing the height of sea waves in oceanography [cf. Longuet-Higgins (1952, 1980), Lindgren (1982) and Sobey (1992)].
As written, the conditioning event in (\ref{Eq:Palm t0}) has zero probability. To make the conditional probability well-defined mathematically, we follow the original approach of Cramer and Leadbetter (1967) for smooth stationary Gaussian process in one dimension, and adopt instead the definition \begin{equation}\label{Eq:Palm Ut0}
F_{t_0}(u) :=\lim_{\varepsilon\to 0} {\mathbb P}\{f(t_0)>u | \exists \text{ a local maximum of } f(t) \text{ in } U_{t_0}(\varepsilon) \}, \end{equation} if the limit on the right hand side exists, where $U_{t_0}(\varepsilon)=t_0\oplus(-\varepsilon/2,\varepsilon/2)^N$ is the $N$-dimensional open cube of side $\varepsilon$ centered at $t_0$. We call (\ref{Eq:Palm Ut0}) the distribution of the height of a local maximum of the random field.
Because this distribution is conditional on a point process, which is the set of local maxima of $f$, it falls under the general category of Palm distributions [cf. Adler et al. (2012) and Schneider and Weil (2008)]. Evaluating this distribution analytically has been known to be a difficult problem for decades. The only known results go back to Cramer and Leadbetter (1967) who provided an explicit expression for one-dimensional stationary Gaussian processes, and Belyaev (1967, 1972) and Lindgren (1972) who gave an implicit expression for stationary Gaussian fields over Euclidean space.
As a first contribution, in this paper we provide general formulae for \eqref{Eq:Palm Ut0} for non-stationary Gaussian fields and $T$ being a subset of Euclidean space or a Riemannian manifold of arbitrary dimension. As opposed to the well-studied global supremum of the field, these formulae only depend on local properties of the field. Thus, in principle, stationarity and ergodicity are not required, nor is knowledge of the global geometry or topology of the set in which the random field is defined. The caveat is that our formulae involve the expected number of local maxima (albeit within a small neighborhood of $t_0$), so actual computation becomes hard for most Gaussian fields except, as described below, for isotropic cases.
We also investigate the overshoot distribution of a local maximum, which can be roughly stated as \begin{equation}\label{Eq:overshoot t0}
{\mathbb P}\{f(t_0)>u+v | t_0 \text{ is a local maximum of } f(t) \text{ and } f(t_0)>v \}, \end{equation} where $u>0$ and $v \in {\mathbb R}$. The motivation for this distribution in peak detection is that, since local maxima representing candidate peaks are called significant if they are sufficiently high, it is enough to consider peaks that are already higher than a pre-threshold $v$. As before, since the conditioning event in (\ref{Eq:overshoot t0}) has zero probability, we adopt instead the formal definition \begin{equation}\label{Eq:overshoot Ut0} \begin{split}
\bar{F}_{t_0}(u, v) :=\lim_{\varepsilon\to 0} {\mathbb P}\{f(t_0)>u+v | \exists \text{ a local maximum of } f(t) \text{ in } U_{t_0}(\varepsilon) \text{ and } f(t_0)>v\}, \end{split} \end{equation} if the limit on the right hand side exists. It turns out that, when the pre-threshold $v$ is high, a simple asymptotic approximation to (\ref{Eq:overshoot Ut0}) can be found because in that case, the expected number of local maxima can be approximated by a simple expression similar to the expected Euler characteristic of the excursion set above level $v$ [cf. Adler and Taylor (2007)].
The appeal of the overshoot distribution had already been realized by Belyaev (1967, 1972), Nosko (1969, 1970a, 1970b) and Adler (1981), who showed that, in the stationary case, it is asymptotically equivalent to an exponential distribution. In this paper we give a much tighter approximation to the overshoot distribution which, again, depends only on local properties of the field and thus, in principle, requires neither stationarity nor ergodicity. However, stationarity does enable obtaining an explicit closed-form approximation whose error is super-exponentially small. In addition, the limiting distribution has the appealing property that it does not depend on the correlation function of the field, so this function need not be estimated in statistical applications.
As a third contribution, we extend the Euclidean results mentioned above for both (\ref{Eq:Palm Ut0}) and (\ref{Eq:overshoot Ut0}) to Gaussian fields over Riemannian manifolds. The extension is not difficult once it is realized that, because all calculations are local, it is essentially enough to change the local geometry of Euclidean space by the local geometry of the manifold and most arguments in the proofs can be easily changed accordingly.
As a fourth contribution, we obtain exact (non-asymptotic) closed-form expressions for isotropic fields, both on Euclidean space and the $N$-dimensional sphere. This is achieved by means of an interesting recent technique employed in Euclidean space by Fyodorov (2004), Aza\"is and Wschebor (2008) and Auffinger (2011) involving random matrix theory. The method is based on the realization that the (conditional) distribution of the Hessian $\nabla^2 f$ of an isotropic Gaussian field $f$ is closely related to that of a Gaussian Orthogonal Ensemble (GOE) random matrix. Hence, the known distribution of the eigenvalues of a GOE is used to compute explicitly the expected number of local maxima required in our general formulae described above.
As an example, we show the detailed calculation for isotropic Gaussian fields on ${\mathbb R}^2$. Furthermore, by extending the GOE technique to the $N$-dimensional sphere, we are able to provide explicit closed-form expressions on that domain as well, showing the two-dimensional sphere as a specific example.
The paper is organized as follows. In Section \ref{Section:general Euclidean}, we provide general formulae for both the distribution and the overshoot distribution of the height of local maxima for smooth Gaussian fields on Euclidean space. The explicit formulae for isotropic Gaussian fields are then obtained by techniques from random matrix theory. Based on the Euclidean case, the results are then generalized to Gaussian fields over Riemannian manifolds in Section \ref{section:general manifolds}, where we also study isotropic Gaussian fields on the sphere. Lastly, Section \ref{Section:proofs of main results} contains the proofs of main theorems as well as some auxiliary results.
\section{Smooth Gaussian Random Fields on Euclidean Space}\label{Section:general Euclidean} \subsection{Height Distribution and Overshoot Distribution of Local Maxima} Let $\{f(t): t\in T\}$ be a real-valued, $C^2$ Gaussian random field parameterized on an $N$-dimensional set $T\subset{\mathbb R}^N$ whose interior is non-empty. Let \begin{equation*} \begin{split} f_i(t)&=\frac{\partial f(t)}{\partial t_i}, \quad \nabla f(t)= (f_1(t), \ldots, f_N(t))^T, \quad \Lambda(t)={\rm Cov}(\nabla f(t)),\\ f_{ij}(t)&=\frac{\partial^2 f(t)}{\partial t_it_j}, \quad \nabla^2 f(t)= (f_{ij}(t))_{1\le i, j\le N}, \end{split} \end{equation*} and denote by ${\rm index}(\nabla^2 f(t))$ the number of negative eigenvalues of $\nabla^2 f(t)$. We will make use of the following conditions. \begin{itemize} \item[({\bf C}1).] $f \in C^2(T)$ almost surely and its second derivatives satisfy the \emph{mean-square H\"older condition}: for any $t_0\in T$, there exist positive constants $L$, $\eta$ and $\delta$ such that \begin{equation*}
{\mathbb E}(f_{ij}(t)-f_{ij}(s))^2 \leq L^2 \|t-s\|^{2\eta}, \quad \forall t,s\in U_{t_0}(\delta),\ i, j= 1, \ldots, N. \end{equation*}
\item[({\bf C}2).] For every pair $(t, s)\in T^2$ with $t\neq s$, the Gaussian random vector $$(f(t), \nabla f(t), f_{ij}(t),\,
f(s), \nabla f(s), f_{ij}(s), 1\leq i\leq j\leq N)$$ is non-degenerate. \end{itemize} Note that $({\bf C}1)$ holds when $f\in C^3(T)$ and $T$ is closed and bounded.
The following theorem, whose proof is given in Section \ref{Section:proofs of main results}, provides the formula for $F_{t_0}(u)$ defined in (\ref{Eq:Palm Ut0}) for smooth Gaussian fields over ${\mathbb R}^N$. \begin{theorem}\label{Thm:Palm distr} Let $\{f(t): t\in T\}$ be a Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each $t_0\in \overset{\circ}{T}$ and $u\in {\mathbb R}$, \begin{equation}\label{Eq:Palm distr Euclidean}
F_{t_0}(u)= \frac{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}} | \nabla f(t_0)=0\}}. \end{equation} \end{theorem}
The implicit formula in (\ref{Eq:Palm distr Euclidean}) generalizes the results for stationary Gaussian fields in Cram\'er and Leadbetter (1967, p. 243), Belyaev (1967, 1972) and Lindgren (1972) in the sense that stationarity is no longer required.
Note that the conditional expectations in (\ref{Eq:Palm distr Euclidean}) are hard to compute, since they involve indicator functions of the eigenvalues of a random matrix. However, in Section \ref{Section:isotropic Euclidean} and Section \ref{Section:isotropic sphere} below, we show that (\ref{Eq:Palm distr Euclidean}) can be computed explicitly for isotropic Gaussian fields.
The following result shows the exact formula for the overshoot distribution defined in (\ref{Eq:overshoot Ut0}). \begin{theorem}\label{Thm:overshoot distr} Let $\{f(t): t\in T\}$ be a Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each $t_0\in \overset{\circ}{T}$, $v\in {\mathbb R}$ and $u>0$, \begin{equation*} \begin{split}
\bar{F}_{t_0}(u,v)= \frac{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> u+v\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> v\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}. \end{split} \end{equation*} \end{theorem} \begin{proof}\ The result follows from similar arguments for proving Theorem \ref{Thm:Palm distr}. \end{proof}
The advantage of the overshoot distribution is that we can study its asymptotics as the pre-threshold $v$ becomes large. Theorem \ref{Thm:Palm distr high level} below, whose proof is given in Section \ref{Section:proofs of main results}, provides an asymptotic approximation to the overshoot distribution of a smooth Gaussian field over ${\mathbb R}^N$. This approximation is based on the fact that, as the exceeding level tends to infinity, the expected number of local maxima can be approximated by a simpler form which is similar to the expected Euler characteristic of the excursion set. \begin{theorem}\label{Thm:Palm distr high level} Let $\{f(t): t\in T\}$ be a centered, unit-variance Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each $t_0\in \overset{\circ}{T}$ and each fixed $u>0$, there exists $\alpha>0$ such that as $v\to \infty$, \begin{equation}\label{Eq:Palm distr high level}
\bar{F}_{t_0}(u,v)= \frac{\int_{u+v}^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}{\int_v^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}(1+o(e^{-\alpha v^2})). \end{equation} Here and in the sequel, $\phi(x)$ denotes the standard Gaussian density. \end{theorem}
Note that the expectation in \eqref{Eq:Palm distr high level} is computable since it no longer involves an indicator function. However, for non-stationary Gaussian random fields over ${\mathbb R}^N$ with $N\ge 2$, the general expression of this expectation can be complicated. Fortunately, viewed as a polynomial in $x$, its leading coefficient is relatively simple, see Lemma \ref{Lem:conditional expectation for N} below. This gives the following approximation to the overshoot distribution for general smooth Gaussian fields over ${\mathbb R}^N$.
\begin{corollary}\label{Cor:Palm distr high level o(1)} Let the assumptions in Theorem \ref{Thm:Palm distr high level} hold. Then for each $t_0\in \overset{\circ}{T}$ and each fixed $u>0$, as $v\to \infty$, \begin{equation}\label{eq:Palm distr high level o(1)} \bar{F}_{t_0}(u,v)= \frac{(u+v)^{N-1}e^{-(u+v)^2/2}}{v^{N-1}e^{-v^2/2}}(1+O(v^{-2})). \end{equation} \end{corollary} \begin{proof}\ The result follows immediately from Theorem \ref{Thm:Palm distr high level} and Lemma \ref{Lem:conditional expectation for N} below. \end{proof}
It can be seen that the result in Corollary \ref{Cor:Palm distr high level o(1)} reduces to the exponential asymptotic distribution given by Belyaev (1967, 1972), Nosko (1969, 1970a, 1970b) and Adler (1981), but the result here gives the approximation error and does not require stationarity. Compared with (\ref{Eq:Palm distr high level}), (\ref{eq:Palm distr high level o(1)}) provides a less accurate approximation, since the error is only $O(v^{-2})$, but it provides a simple explicit form.
Next we show some cases where the approximation in (\ref{Eq:Palm distr high level}) becomes relatively simple and with the same degree of accuracy, i.e., the error is super-exponentially small. \begin{corollary}\label{Cor:overshoot 1D} Let the assumptions in Theorem \ref{Thm:Palm distr high level} hold. Suppose further the dimension $N=1$ or the field $f$ is stationary, then for each $t_0\in \overset{\circ}{T}$ and each fixed $u>0$, there exists $\alpha>0$ such that as $v\to \infty$, \begin{equation}\label{Eq:Palm distr high level stationary} \bar{F}_{t_0}(u,v)= \frac{H_{N-1}(u+v)e^{-(u+v)^2/2}}{H_{N-1}(v)e^{-v^2/2}}(1+o(e^{-\alpha v^2})), \end{equation} where $H_{N-1}(x)$ is the Hermite polynomial of order $N-1$. \end{corollary} \begin{proof}\ (i) Suppose first $N=1$. Since ${\rm Var}(f(t)) \equiv 1$, ${\mathbb E} \{f(t)f'(t)\}\equiv 0$ and ${\mathbb E} \{f''(t)f(t)\} = -{\rm Var}(f'(t)) = -\Lambda(t)$. It follows that \begin{equation*} \begin{split}
{\mathbb E} &\{{\rm det} \nabla^2 f(t)| f(t)=x, \nabla f(t)=0\} = {\mathbb E}\{f''(t)| f(t)=x, f'(t)=0\}\\ &= ({\mathbb E} \{f''(t)f(t)\}, {\mathbb E} \{f''(t)f'(t)\}) \left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{1}{{\rm Var}(f'(t))} \\\end{array} \right) \left( \begin{array}{c} x \\ 0 \\\end{array} \right)= -\Lambda(t) x. \end{split} \end{equation*} Plugging this into (\ref{Eq:Palm distr high level}) yields the desired result.
(ii) If $f$ is stationary, it can be shown that [cf. Lemma 11.7.1 in Adler and Taylor (2007)], $$
{\mathbb E}\{{\rm det} \nabla^2 f(t_0)| f(t_0)=x, \nabla f(t_0)=0\}=(-1)^N{\rm det}(\Lambda(t_0)) H_N(x). $$ Then (\ref{Eq:Palm distr high level stationary}) follows from Theorem \ref{Thm:Palm distr high level} and the following formula for Hermite polynomials \begin{equation*} \int_v^\infty H_N(x) e^{-x^2/2}\, dx = H_{N-1}(v) e^{-v^2/2}. \end{equation*} \end{proof}
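The right-hand sides of \eqref{Eq:Palm distr high level stationary} and \eqref{eq:Palm distr high level o(1)} are easy to evaluate numerically. The following Python sketch (an illustration only; NumPy and SciPy are assumed and the function names are ours) evaluates both approximations without their error factors, with $H_n$ interpreted as the probabilists' Hermite polynomials consistent with the integral identity used in the proof above, and shows their agreement as $v$ grows.
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermitenorm   # probabilists' Hermite polynomials H_n

def overshoot_hermite(u, v, N):
    # Right-hand side of the stationary approximation, without the error factor
    num = eval_hermitenorm(N - 1, u + v) * np.exp(-(u + v)**2 / 2)
    den = eval_hermitenorm(N - 1, v) * np.exp(-v**2 / 2)
    return num / den

def overshoot_crude(u, v, N):
    # Right-hand side of the cruder O(v^{-2}) approximation
    return ((u + v) / v)**(N - 1) * np.exp(-((u + v)**2 - v**2) / 2)

for v in (2.0, 3.0, 5.0):   # the two agree as v grows (here N = 3, u = 0.5)
    print(v, overshoot_hermite(0.5, v, 3), overshoot_crude(0.5, v, 3))
\end{verbatim}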
An interesting property of the results obtained about the overshoot distribution is that the asymptotic approximations in Corollaries \ref{Cor:Palm distr high level o(1)} and \ref{Cor:overshoot 1D} do not depend on the location $t_0$, even in the case where stationarity is not assumed. In addition, they do not require any knowledge of spectral moments of $f$ except for zero mean and constant variance. In this sense, the distributions are convenient for use in statistics because the correlation function of the field need not be estimated.
\subsection{Isotropic Gaussian Random Fields on Euclidean Space}\label{Section:isotropic Euclidean} We show here the explicit formulae for both the height distribution and the overshoot distribution of local maxima for isotropic Gaussian random fields. To our knowledge, this article is the first attempt to obtain these distributions explicitly for $N \ge 2$. The main tools are techniques from random matrix theory developed in Fyodorov (2004), Aza\"is and Wschebor (2008) and Auffinger (2011).
Let $\{f(t): t\in T\}$ be a real-valued, $C^2$, centered, unit-variance isotropic Gaussian field parameterized on an $N$-dimensional set $T\subset{\mathbb R}^N$. Due to isotropy, we can write the covariance function of the field as ${\mathbb E}\{f(t)f(s)\}=\rho(\|t-s\|^2)$ for an appropriate function $\rho(\cdot): [0,\infty) \rightarrow {\mathbb R}$, and denote \begin{equation}\label{Eq:kappa} \rho'=\rho'(0), \quad \rho''=\rho''(0), \quad \kappa=-\rho'/\sqrt{\rho''}. \end{equation} By isotropy again, the covariance of $(f(t), \nabla f(t), \nabla^2 f(t))$ only depends on $\rho'$ and $\rho''$, see Lemma \ref{Lem:cov of isotropic Euclidean} below. In particular, by Lemma \ref{Lem:cov of isotropic Euclidean}, we see that ${\rm Var}(f_i(t))=-2\rho'$ and ${\rm Var}(f_{ii}(t))=12\rho''$ for any $i\in\{1,\ldots, N\}$, which implies $\rho'<0$ and $\rho''>0$ and hence $\kappa>0$. We need the following condition for further discussions. \begin{itemize} \item[({\bf C}3).] $\kappa \le 1$ (or equivalently $\rho''-\rho'^2\ge 0$). \end{itemize}
\begin{example} \label{Example:rho} Here are some examples of covariance functions with corresponding $\rho$ satisfying $({\bf C}3)$.
(i) Powered exponential: $\rho(r)=e^{-cr}$, where $c>0$. Then $\rho'=-c$, $\rho''=c^2$ and $\kappa=1$.
(ii) Cauchy: $\rho(r)=(1+r/c)^{-\beta}$, where $c>0$ and $\beta>0$. Then $\rho'=-\beta/c$, $\rho''=\beta(\beta+1)/c^2$ and $\kappa=\sqrt{\beta/(\beta+1)}$. \end{example}
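These derivatives can also be verified symbolically. The following sketch (an illustration only, assuming SymPy is available) computes $\rho'$, $\rho''$ and $\kappa$ of \eqref{Eq:kappa} for the two families above.
\begin{verbatim}
import sympy as sp

r, c, beta = sp.symbols('r c beta', positive=True)
for rho in (sp.exp(-c*r), (1 + r/c)**(-beta)):   # powered exponential and Cauchy
    rho1 = sp.diff(rho, r, 1).subs(r, 0)          # rho'(0)
    rho2 = sp.diff(rho, r, 2).subs(r, 0)          # rho''(0)
    kappa = sp.simplify(-rho1 / sp.sqrt(rho2))
    print(sp.simplify(rho1), sp.simplify(rho2), kappa)
\end{verbatim}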
\begin{remark}\label{Remark:C3}
By Aza\"is and Wschebor (2010), $({\bf C}3)$ holds when $\rho(\|t-s\|^2)$, $t, s\in {\mathbb R}^N$, is a positive definite function for every dimension $N\ge 1$. The cases in Example \ref{Example:rho} are of this kind. \end{remark}
We shall use \eqref{Eq:Palm distr Euclidean} to compute the distribution of the height of a local maximum. As mentioned before, the conditional distribution on the right hand side of \eqref{Eq:Palm distr Euclidean} is extremely hard to compute. In Section \ref{Section:proofs of main results} below, we build connection between such distribution and certain GOE matrix to make the computation available.
Recall that an $N\times N$ random matrix $M_N$ is said to have the Gaussian Orthogonal Ensemble (GOE) distribution if it is symmetric, with centered Gaussian entries $M_{ij}$ satisfying ${\rm Var}(M_{ii})=1$, ${\rm Var}(M_{ij})=1/2$ if $i<j$ and the random variables $\{M_{ij}, 1\leq i\leq j\leq N\}$ are independent. Moreover, the explicit formula for the distribution $Q_N$ of the eigenvalues $\lambda_i$ of $M_N$ is given by [cf. Auffinger (2011)] \begin{equation}\label{Eq:GOE density}
Q_N(d\lambda)=\frac{1}{c_N} \prod_{i=1}^N e^{-\frac{1}{2}\lambda_i^2}d\lambda_i \prod_{1\leq i<j\leq N}|\lambda_i-\lambda_j|\mathbbm{1}_{\{\lambda_1\leq\ldots\leq\lambda_N\}}, \end{equation} where the normalization constant $c_N$ can be computed from Selberg's integral \begin{equation}\label{Eq:normalization constant} c_N=\frac{1}{N!}(2\sqrt{2})^N \prod_{i=1}^N\Gamma\Big(1+\frac{i}{2}\Big). \end{equation} We use notation ${\mathbb E}_{GOE}^N$ to represent the expectation under density $Q_N(d\lambda)$, i.e., for a measurable function $g$, $$ {\mathbb E}_{GOE}^N [g(\lambda_1, \ldots, \lambda_N)] = \int_{\lambda_1 \le \ldots \le \lambda_N} g(\lambda_1, \ldots, \lambda_N) Q_N(d\lambda). $$
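Expectations of the form ${\mathbb E}_{GOE}^N[g]$ can also be approximated by direct simulation, since $Q_N$ in \eqref{Eq:GOE density} is the joint density of the ordered eigenvalues of the GOE matrix $M_N$ described above. The following Python sketch (an illustration only; NumPy is assumed and the function names are ours) estimates such expectations by Monte Carlo and compares one of them with the closed-form value $\sqrt{6}/6$ appearing in \eqref{Eq:GOE expectation for a=1 N=2} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def goe_ordered_eigs(N):
    # GOE matrix with Var(M_ii) = 1 and Var(M_ij) = 1/2 for i < j
    A = rng.standard_normal((N, N))
    M = (A + A.T) / 2.0
    return np.sort(np.linalg.eigvalsh(M))   # lambda_1 <= ... <= lambda_N

def E_GOE(N, g, n_mc=100_000):
    # Monte Carlo estimate of E^N_GOE[g(lambda_1, ..., lambda_N)]
    return np.mean([g(goe_ordered_eigs(N)) for _ in range(n_mc)])

# Check against the value sqrt(6)/6 of E^3_GOE[exp(-lambda_3^2/2)] used below for N = 2:
print(E_GOE(3, lambda lam: np.exp(-lam[-1]**2 / 2)), np.sqrt(6) / 6)
\end{verbatim}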
\begin{theorem}\label{Thm:Palm distr iso Euclidean} Let $\{f(t): t\in T\}$ be a centered, unit-variance, isotropic Gaussian random field satisfying $({\bf C}1)$, $({\bf C}2)$ and $({\bf C}3)$. Then for each $t_0\in \overset{\circ}{T}$ and $u\in {\mathbb R}$, \begin{equation*} \begin{split} F_{t_0}(u)= \left\{
\begin{array}{l l}
\frac{(1-\kappa^2)^{-1/2} \int_u^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\left\{ \exp\left[\lambda_{N+1}^2/2 - (\lambda_{N+1}-\kappa x/\sqrt{2} )^2/(1-\kappa^2) \right]\right\}dx}{{\mathbb E}_{GOE}^{N+1}\left\{ \exp\left[-\lambda_{N+1}^2/2 \right] \right\}} & \quad \text{if $\kappa \in (0, 1)$},\\
\frac{\int_u^\infty \phi(x){\mathbb E}_{GOE}^{N}\left\{ \left(\prod_{i=1}^N|\lambda_i-x/\sqrt{2}|\right) \mathbbm{1}_{\{\lambda_N<x/\sqrt{2}\}} \right\}dx}{\sqrt{2/\pi}\Gamma\left(\frac{N+1}{2}\right){\mathbb E}_{GOE}^{N+1}\left\{ \exp\left[-\lambda_{N+1}^2/2 \right] \right\}} & \quad \text{if $\kappa = 1$},
\end{array} \right. \end{split} \end{equation*} where $\kappa$ is defined in \eqref{Eq:kappa}. \end{theorem} \begin{proof}\ Since $f$ is centered and has unit variance, the numerator in \eqref{Thm:Palm distr} can be written as $$
\int_u^\infty \phi(x) {\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|f(t_0)=x, \nabla f(t_0)=0\}dx. $$ Applying Theorem \ref{Thm:Palm distr} and Lemmas \ref{Lem:expectation of local max} and \ref{Lem:expectation of local max above u} below gives the desired result. \end{proof} \begin{remark} The formula in Theorem \ref{Thm:Palm distr iso Euclidean} shows that for an isotropic Gaussian field over ${\mathbb R}^N$, $F_{t_0}(u)$ only depends on $\kappa$. Therefore, we may write $F_{t_0}(u)$ as $F_{t_0}(u, \kappa)$. As a consequence of Lemma \ref{Lem:GOE for det Hessian}, $F_{t_0}(u, \kappa)$ is continuous in $\kappa$, hence the formula for the case of $\kappa = 1$ (i.e. $\rho''-\rho'^2= 0$) can also be derived by taking the limit $\lim_{\kappa \uparrow 1}F_{t_0}(u, \kappa)$. \end{remark}
Next we show an example of computing $F_{t_0}(u)$ explicitly for $N=2$. The calculation for $N=1$ and $N > 2$ is similar and thus omitted here. In particular, the formula for $N=1$ derived by this method can be verified to be the same as in Cram\'er and Leadbetter (1967).
\begin{example} Let $N=2$. Applying Proposition \ref{Prop:GOE expectation for N=2} below with $a=1$ and $b=0$ gives \begin{equation}\label{Eq:GOE expectation for a=1 N=2} {\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[-\frac{\lambda_{N+1}^2}{2} \bigg] \bigg\}= \frac{\sqrt{6}}{6}. \end{equation} Applying Proposition \ref{Prop:GOE expectation for N=2} again with $a=1/(1-\kappa^2)$ and $b=\kappa x/\sqrt{2}$, one has \begin{equation}\label{Eq:GOE expectation for a and b N=2} \begin{split} &\quad {\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{\lambda_{N+1}^2}{2} - \frac{(\lambda_{N+1}-\kappa x/\sqrt{2} )^2}{1-\kappa^2} \bigg]\bigg\}\\ &=\frac{\sqrt{1-\kappa^2}}{\pi\sqrt{2}}\bigg\{\pi\kappa^2(x^2-1)\Phi\Big(\frac{\kappa x}{\sqrt{2-\kappa^2}} \Big) + \frac{\kappa x\sqrt{2-\kappa^2}\sqrt{\pi}}{\sqrt{2}}e^{-\frac{\kappa^2x^2}{2(2-\kappa^2)}} \\ &\quad + \frac{2\pi}{\sqrt{3-\kappa^2}}e^{-\frac{\kappa^2x^2}{2(3-\kappa^2)}}\Phi\Big(\frac{\kappa x}{\sqrt{(3-\kappa^2)(2-\kappa^2)}} \Big) \bigg\}, \end{split} \end{equation} where $\Phi(x)=(2\pi)^{-1/2}\int_{-\infty}^x e^{-\frac{t^2}{2}} dt$ is the c.d.f. of standard Normal random variable. Let $h(x)$ be the density function of the distribution of the height of a local maximum, i.e. $h(x)=-F'_{t_0}(x)$. By Theorem \ref{Thm:Palm distr iso Euclidean}, (\ref{Eq:GOE expectation for a=1 N=2}) and (\ref{Eq:GOE expectation for a and b N=2}), \begin{equation}\label{Eq:h on R^2} \begin{split} h(x) &=\sqrt{3}\kappa^2(x^2-1)\phi(x)\Phi\Big(\frac{\kappa x}{\sqrt{2-\kappa^2}} \Big) + \frac{\kappa x\sqrt{3(2-\kappa^2)}}{2\pi}e^{-\frac{x^2}{2-\kappa^2}} \\ &\quad+\frac{\sqrt{6}}{\sqrt{\pi(3-\kappa^2)}}e^{-\frac{3x^2}{2(3-\kappa^2)}}\Phi\Big(\frac{\kappa x}{\sqrt{(3-\kappa^2)(2-\kappa^2)}} \Big), \end{split} \end{equation} and hence $F_{t_0}(u)=\int_u^\infty h(x)dx$. Figure \ref{Fig:h on R^2} shows several examples. Shown in solid red is the extreme case of $({\bf C}3)$, $\kappa=1$, which simplifies to \begin{equation*} h(x) = \sqrt{3}(x^2-1)\phi(x)\Phi(x) + \frac{\sqrt{3}}{2\pi}xe^{-x^2}+\frac{\sqrt{3}}{\sqrt{\pi}}e^{-\frac{3x^2}{4}}\Phi\Big(\frac{x}{\sqrt{2}} \Big). \end{equation*} As an interesting phenomenon, it can be seen from both (\ref{Eq:h on R^2}) and Figure~\ref{Fig:h on R^2} that $h(x) \to \phi(x)$ if $\kappa \to 0$. \begin{figure}
\caption{Density function $h(x)$ of the distribution $F_{t_0}$ for isotropic Gaussian fields on ${\mathbb R}^2$.}
\label{Fig:h on R^2}
\end{figure} \end{example}
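The density \eqref{Eq:h on R^2} is straightforward to evaluate numerically. The following Python sketch (an illustration only; NumPy and SciPy are assumed and the function names are ours) implements it and checks that it integrates to one; $F_{t_0}(u)$ is then obtained as the upper tail.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def h_R2(x, kappa):
    # Density of the height of a local maximum on R^2, 0 < kappa <= 1
    k2 = kappa**2
    t1 = np.sqrt(3) * k2 * (x**2 - 1) * norm.pdf(x) * norm.cdf(kappa * x / np.sqrt(2 - k2))
    t2 = kappa * x * np.sqrt(3 * (2 - k2)) / (2 * np.pi) * np.exp(-x**2 / (2 - k2))
    t3 = np.sqrt(6 / (np.pi * (3 - k2))) * np.exp(-3 * x**2 / (2 * (3 - k2))) \
         * norm.cdf(kappa * x / np.sqrt((3 - k2) * (2 - k2)))
    return t1 + t2 + t3

print(quad(lambda x: h_R2(x, 1.0), -np.inf, np.inf)[0])   # total mass, approximately 1
print(quad(lambda x: h_R2(x, 1.0), 2.0, np.inf)[0])       # F_{t_0}(2) for kappa = 1
\end{verbatim}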
\begin{theorem}\label{Thm:overshoot isotropic} Let $\{f(t): t\in T\}$ be a centered, unit-variance, isotropic Gaussian random field satisfying $({\bf C}1)$, $({\bf C}2)$ and $({\bf C}3)$. Then for each $t_0\in \overset{\circ}{T}$, $v\in {\mathbb R}$ and $u>0$, \begin{equation}\label{Eq:overshoot isotropic} \begin{split} \bar{F}_{t_0}(u,v)= \left\{
\begin{array}{l l}
\frac{\int_{u+v}^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\left\{ \exp\left[\lambda_{N+1}^2/2 - (\lambda_{N+1}-\kappa x/\sqrt{2} )^2/(1-\kappa^2) \right]\right\}dx}{\int_v^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\left\{ \exp\left[\lambda_{N+1}^2/2 - (\lambda_{N+1}-\kappa x/\sqrt{2} )^2/(1-\kappa^2) \right]\right\}dx} & \quad \text{if $\kappa \in (0,1)$},\\
\frac{\int_{u+v}^\infty \phi(x){\mathbb E}_{GOE}^{N}\left\{ \left(\prod_{i=1}^N|\lambda_i-x/\sqrt{2}|\right) \mathbbm{1}_{\{\lambda_N<x/\sqrt{2}\}} \right\}dx}{\int_v^\infty \phi(x){\mathbb E}_{GOE}^{N}\left\{ \left(\prod_{i=1}^N|\lambda_i-x/\sqrt{2}|\right) \mathbbm{1}_{\{\lambda_N<x/\sqrt{2}\}} \right\}dx} & \quad \text{if $\kappa = 1$},
\end{array} \right. \end{split} \end{equation} where $\kappa$ is defined in \eqref{Eq:kappa}. \end{theorem} \begin{proof}\ The result follows immediately by applying Theorem \ref{Thm:overshoot distr} and Lemma \ref{Lem:expectation of local max above u} below. \end{proof}
Note that the expectations in \eqref{Eq:overshoot isotropic} can be computed similarly to \eqref{Eq:GOE expectation for a and b N=2} for any $N \ge 1$; thus Theorem \ref{Thm:overshoot isotropic} provides an explicit formula for the overshoot distribution of isotropic Gaussian fields. On the other hand, since isotropy implies stationarity, the approximation to the overshoot distribution for large $v$ is simply given by Corollary \ref{Cor:overshoot 1D}.
\section{Smooth Gaussian Random Fields on Manifolds}\label{section:general manifolds} \subsection{Height Distribution and Overshoot Distribution of Local Maxima} Let $(M,g)$ be an $N$-dimensional Riemannian manifold and let $f$ be a smooth function on $M$. Then the \emph{gradient} of $f$, denoted by $\nabla f$, is the unique continuous vector field on $M$ such that
$g(\nabla f, X) = X f$
for every vector field $X$. The \emph{Hessian} of $f$, denoted by $\nabla^2 f$, is the double differential form defined by
$\nabla^2 f (X,Y)= XY f - \nabla_X Y f$,
where $X$ and $Y$ are vector fields and $\nabla_X$ denotes the Levi-Civita connection of $(M,g)$. To make the notation consistent with the Euclidean case, we fix an orthonormal frame $\{E_i\}_{1\le i\le N}$, and let \begin{equation}\label{Eq:gradient and hessian on manifolds} \begin{split} \nabla f &= (f_1, \ldots, f_N)=(E_1f, \ldots, E_Nf),\\ \nabla^2 f &= (f_{ij})_{1\le i, j\le N}= (\nabla^2 f (E_i, E_j))_{1\le i, j\le N}. \end{split} \end{equation} Note that if $t$ is a critical point, i.e. $\nabla f(t)=0$, then $\nabla^2 f (E_i, E_j)(t)=E_iE_jf(t)$, which is similar to the Euclidean case.
Let $B_{t_0}(\varepsilon)=\{t\in M: d(t,t_0)\leq \varepsilon\}$ be the geodesic ball of radius $\varepsilon$ centered at $t_0\in \overset{\circ}{M}$, where $d$ is the distance function induced by the Riemannian metric $g$. We also define $F_{t_0}(u)$ as in (\ref{Eq:Palm Ut0}) and $\bar{F}_{t_0}(u,v)$ as in (\ref{Eq:overshoot Ut0}) with $U_{t_0}(\varepsilon)$ replaced by $B_{t_0}(\varepsilon)$, respectively.
We will make use of the following conditions. \begin{itemize} \item[$({\bf C}1')$.] $f \in C^2(M)$ almost surely and its second derivatives satisfy the \emph{mean-square H\"older condition}: for any $t_0\in M$, there exist positive constants $L$, $\eta$ and $\delta$ such that \begin{equation*} {\mathbb E}(f_{ij}(t)-f_{ij}(s))^2 \leq L^2 d(t,s)^{2\eta}, \quad \forall t,s\in B_{t_0}(\delta),\ i, j= 1, \ldots, N. \end{equation*}
\item[$({\bf C}2')$.] For every pair $(t, s)\in M^2$ with $t\neq s$, the Gaussian random vector $$(f(t), \nabla f(t), f_{ij}(t),\,
f(s), \nabla f(s), f_{ij}(s), 1\leq i\leq j\leq N)$$ is non-degenerate. \end{itemize} Note that $({\bf C}1')$ holds when $f\in C^3(M)$.
Theorem \ref{Thm:Palm distr manifolds} below, whose proof is given in Section \ref{Section:proofs of main results}, is a generalization of Theorems \ref{Thm:Palm distr}, \ref{Thm:overshoot distr}, \ref{Thm:Palm distr high level} and Corollary \ref{Cor:Palm distr high level o(1)}. It provides formulae for both the height distribution and the overshoot distribution of local maxima for smooth Gaussian fields over Riemannian manifolds. Note that the formal expressions are exactly the same as in Euclidean case, but now the field is defined on a manifold. \begin{theorem}\label{Thm:Palm distr manifolds} Let $(M,g)$ be an oriented $N$-dimensional $C^3$ Riemannian manifold with a $C^1$ Riemannian metric $g$. Let $f$ be a Gaussian random field on $M$ such that $({\bf C}1')$ and $({\bf C}2')$ are fulfilled. Then for each $t_0 \in \overset{\circ}{M}$, $u, v\in {\mathbb R}$ and $w>0$, \begin{equation}\label{Eq:Palm distr manifolds 1} \begin{split}
F_{t_0}(u) &= \frac{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}} | \nabla f(t_0)=0\}},\\
\bar{F}_{t_0}(w,v) &= \frac{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> w+v\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}{{\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> v\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}}. \end{split} \end{equation} If we assume further that $f$ is centered and has unit variance, then for each fixed $w>0$, there exists $\alpha>0$ such that as $v\to \infty$, \begin{equation}\label{Eq:Palm distr manifolds 3} \begin{split}
\bar{F}_{t_0}(w,v) &= \frac{\int_{w+v}^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}{\int_v^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}(1+o(e^{-\alpha v^2}))\\ &= \frac{(w+v)^{N-1}e^{-(w+v)^2/2}}{v^{N-1}e^{-v^2/2}}(1+O(v^{-2})). \end{split} \end{equation} \end{theorem}
It is quite remarkable that the second approximation in \eqref{Eq:Palm distr manifolds 3} depends neither on the curvature of the manifold nor on the covariance function of the field, which need not have any stationarity properties other than zero mean and constant variance.
\subsection{Isotropic Gaussian Random Fields on the Sphere}\label{Section:isotropic sphere} Similarly to the Euclidean case, we derive explicit formulae for both the height distribution and the overshoot distribution of local maxima for isotropic Gaussian random fields on a particular manifold, the sphere.
Consider an isotropic Gaussian random field $\{f(t): t\in \mathbb{S}^N\}$, where $\mathbb{S}^N\subset {\mathbb R}^{N+1}$ is the $N$-dimensional unit sphere. For the purpose of simplifying the arguments, we will focus here on the case $N\ge 2$. The special case of the circle, $N=1$, requires separate treatment but extending our results to that case is straightforward.
The following theorem by Schoenberg (1942) characterizes the covariance function of an isotropic Gaussian field on the sphere [see also Gneiting (2013)]. \begin{theorem}\label{Thm:Schoenberg} A continuous function $C(\cdot, \cdot):\mathbb{S}^N\times \mathbb{S}^N \rightarrow {\mathbb R} $ is the covariance of an isotropic Gaussian field on $\mathbb{S}^N$, $N \ge 2$, if and only if it has the form \begin{equation*} C(t,s)= \sum_{n=0}^\infty a_n P_n^\lambda({\langle} t, s \rangle), \quad t, s \in \mathbb{S}^N, \end{equation*} where $\lambda=(N-1)/2$, $a_n \geq 0$, $\sum_{n=0}^\infty a_nP_n^\lambda(1) <\infty$, and $P_n^\lambda$ are ultraspherical polynomials defined by the expansion \begin{equation*} (1-2rx +r^2)^{-\lambda}=\sum_{n=0}^\infty r^n P_n^\lambda(x), \quad x\in [-1,1]. \end{equation*} \end{theorem}
\begin{remark} (i). Note that [cf. Szeg\"o (1975, p. 80)] \begin{equation}\label{Eq:us polynomials at 1} P_n^\lambda(1)=\binom{n+2\lambda-1}{n} \end{equation} and $\lambda=(N-1)/2$, therefore, $\sum_{n=0}^\infty a_nP_n^\lambda(1) <\infty$ is equivalent to $\sum_{n=0}^\infty n^{N-2}a_n <\infty$.
(ii). When $N=2$, $\lambda=1/2$ and $P_n^\lambda$ become \emph{Legendre polynomials}. For more results on isotropic Gaussian fields on $\mathbb{S}^2$, see a recent monograph by Marinucci and Peccati (2011).
(iii). Theorem \ref{Thm:Schoenberg} still holds for the case $N=1$ if we set [cf. Schoenberg (1942)] $P_n^0({\langle} t, s \rangle)=\cos(n\arccos{\langle} t, s \rangle)=T_n({\langle} t, s \rangle)$, where $T_n$ are \emph{Chebyshev polynomials of the first kind} defined by the expansion \begin{equation*} \frac{1-rx}{1-2rx +r^2}=\sum_{n=0}^\infty r^n T_n(x), \quad x\in [-1,1]. \end{equation*} The arguments in the rest of this section can be easily modified accordingly. \end{remark}
The following statement $({\bf C}1'')$ is a smoothness condition for Gaussian fields on the sphere. Lemma \ref{Lem:C^3 sphere} below shows that $({\bf C}1'')$ implies the previous smoothness condition $({\bf C}1')$. \begin{itemize} \item[$({\bf C}1'')$.] The covariance $C(\cdot, \cdot)$ of $\{f(t): t\in \mathbb{S}^N\}$, $N \ge 2$, satisfies \begin{equation*} C(t,s)= \sum_{n=0}^\infty a_n P_n^\lambda({\langle} t, s \rangle), \quad t, s \in \mathbb{S}^N, \end{equation*} where $\lambda=(N-1)/2$, $a_n \geq 0$, $\sum_{n=1}^\infty n^{N+8}a_n<\infty$, and $P_n^\lambda$ are ultraspherical polynomials. \end{itemize}
\begin{lemma}\label{Lem:C^3 sphere} {\rm [Cheng and Xiao (2014)].} Let $f$ be an isotropic Gaussian field on $\mathbb{S}^N$ such that $({\bf C}1'')$ is fulfilled. Then the covariance $C(\cdot, \cdot)\in C^5(\mathbb{S}^N\times \mathbb{S}^N)$ and hence $({\bf C}1')$ holds for $f$. \end{lemma}
For a unit-variance isotropic Gaussian field $f$ on $\mathbb{S}^N$ satisfying $({\bf C}1'')$, we define \begin{equation}\label{Def:C' and C''} \begin{split}
C'&=\sum_{n=1}^\infty a_n\Big(\frac{d}{dx}P_n^\lambda(x)|_{x=1}\Big)=(N-1)\sum_{n=1}^\infty a_nP_{n-1}^{\lambda+1}(1), \\
C''&=\sum_{n=2}^\infty a_n\Big(\frac{d^2}{dx^2}P_n^\lambda(x)|_{x=1}\Big)=(N-1)(N+1)\sum_{n=2}^\infty a_nP_{n-2}^{\lambda+2}(1). \end{split} \end{equation} Due to isotropy, the covariance of $(f(t), \nabla f(t), \nabla^2 f(t))$ only depends on $C'$ and $C''$, see Lemma \ref{Lem:joint distribution sphere} below. In particular, by Lemma \ref{Lem:joint distribution sphere} again, ${\rm Var}(f_i(t))=C'$ and ${\rm Var}(f_{ii}(t))=C'+3C''$ for any $i\in\{1,\ldots, N\}$. We need the following condition on $C'$ and $C''$ for further discussion. \begin{itemize} \item[$({\bf C}3')$.] $C''+C'-C'^2\ge 0$. \end{itemize} \begin{remark}\label{Remark:C3'}\ Note that $({\bf C}3')$ holds when $C(\cdot, \cdot)$ is a covariance function (i.e. positive definite function) for every dimension $N\ge 2$ (or equivalently for every $N\ge 1$). In fact, by Schoenberg (1942), if $C(\cdot, \cdot)$ is a covariance function on $\mathbb{S}^N$ for every $N\ge 2$, then it is necessarily of the form \begin{equation*} C(t,s)= \sum_{n=0}^\infty b_n {\langle} t, s \rangle^n, \quad t, s \in \mathbb{S}^N, \end{equation*} where $b_n\ge 0$. Unit variance of the field implies $\sum_{n=0}^\infty b_n=1$. Now consider the random variable $X$ that assigns probability $b_n$ to the integer $n$. Then $C'=\sum_{n=1}^\infty nb_n={\mathbb E} X$, $C''=\sum_{n=2}^\infty n(n-1)b_n={\mathbb E} X(X-1)$ and $C''+C'-C'^2={\rm Var}(X)\ge 0$, hence $({\bf C}3')$ holds. \end{remark}
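In practice $C'$ and $C''$ can be computed directly from the coefficients $a_n$ via \eqref{Def:C' and C''} and \eqref{Eq:us polynomials at 1}. The following Python sketch (an illustration only; SciPy is assumed and the function names are ours) does this and cross-checks the result by differentiating $\sum_n a_n P_n^\lambda(x)$ at $x=1$ using Gegenbauer polynomials, which coincide with the ultraspherical polynomials.
\begin{verbatim}
from math import comb
from scipy.special import gegenbauer

def sphere_C_derivs(a, N):
    # C' and C'' from the ultraspherical coefficients a_0, a_1, ..., using
    # P_n^lambda(1) = binom(n + 2*lambda - 1, n) with 2*lambda = N - 1
    C1 = (N - 1) * sum(an * comb(n + N - 1, n - 1) for n, an in enumerate(a) if n >= 1)
    C2 = (N - 1) * (N + 1) * sum(an * comb(n + N, n - 2) for n, an in enumerate(a) if n >= 2)
    return C1, C2

# Example on S^2 (lambda = 1/2): the coefficients sum to 1, so the field has unit variance
a, N, lam = [0.2, 0.3, 0.5], 2, 0.5
print(sphere_C_derivs(a, N))
print(sum(an * gegenbauer(n, lam).deriv(1)(1.0) for n, an in enumerate(a)),
      sum(an * gegenbauer(n, lam).deriv(2)(1.0) for n, an in enumerate(a)))
\end{verbatim}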
\begin{theorem}\label{Thm:Palm distr sphere} Let $\{f(t): t\in \mathbb{S}^N\}$, $N\ge 2$, be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$, $({\bf C}2')$ and $({\bf C}3')$. Then for each $t_0\in \mathbb{S}^N$ and $u\in {\mathbb R}$, \begin{equation*} \begin{split} F_{t_0}(u)= \left\{
\begin{array}{l l}
\frac{\big(\frac{C''+C'}{C''+C'-C'^2}\big)^{1/2}\int_u^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\Big\{ \exp\Big[\frac{\lambda_{N+1}^2}{2} - \frac{C''\big(\lambda_{N+1}-\frac{C'x}{\sqrt{2C''}} \big)^2}{C''+C'-C'^2} \Big]\Big\}dx}{{\mathbb E}_{GOE}^{N+1}\big\{ \exp\big[\frac{1}{2}\lambda_{N+1}^2 -\frac{C''}{C''+C'}\lambda_{N+1}^2 \big] \big\}} & \text{if $C''+C'-C'^2> 0$},\\
\frac{\int_u^\infty \phi(x){\mathbb E}_{GOE}^{N}\big\{ \big(\prod_{i=1}^N|\lambda_i-\frac{C'x}{\sqrt{2C''}}|\big) \mathbbm{1}_{\{\lambda_N<\frac{C'x}{\sqrt{2C''}}\}} \big\} dx}{\big(\frac{2C''}{\pi(C''+C')}\big)^{1/2} \Gamma\left(\frac{N+1}{2}\right) {\mathbb E}_{GOE}^{N+1}\big\{ \exp\big[\frac{1}{2}\lambda_{N+1}^2 -\frac{C''}{C''+C'}\lambda_{N+1}^2 \big] \big\}} & \text{if $C''+C'-C'^2= 0$},
\end{array} \right. \end{split} \end{equation*} where $C'$ and $C''$ are defined in (\ref{Def:C' and C''}). \end{theorem} \begin{proof}\ The result follows from applying Theorem \ref{Thm:Palm distr manifolds} and Lemmas \ref{Lem:expectation of local max sphere} and \ref{Lem:expectation of local max above u sphere} below. \end{proof}
\begin{remark} The formula in Theorem \ref{Thm:Palm distr sphere} shows that for isotropic Gaussian fields over $\mathbb{S}^N$, $F_{t_0}(u)$ depends on both $C'$ and $C''$. Therefore, we may write $F_{t_0}(u)$ as $F_{t_0}(u, C', C'')$. As a consequence of Lemma \ref{Lem:GOE for det Hessian sphere}, $F_{t_0}(u, C', C'')$ is continuous in $C'$ and $C''$, hence the formula for the case of $C''+C'-C'^2= 0$ can also be derived by taking the limit $\lim_{C''+C'-C'^2 \downarrow 0}F_{t_0}(u, C', C'')$. \end{remark}
\begin{example} Let $N=2$. Applying Proposition \ref{Prop:GOE expectation for N=2} with $a=\frac{C''}{C''+C'}$ and $b=0$ gives \begin{equation}\label{Eq:GOE expectation for a=1 N=2 sphere} {\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{1}{2}\lambda_{N+1}^2 -\frac{C''}{C''+C'}\lambda_{N+1}^2 \bigg] \bigg\}= \frac{\sqrt{2}}{2}\bigg\{\frac{C'}{2C''}\Big( \frac{C''+C'}{C''}\Big)^{1/2}+\Big( \frac{C''+C'}{3C''+C'}\Big)^{1/2}\bigg\}. \end{equation} Applying Proposition \ref{Prop:GOE expectation for N=2} again with $a=\frac{C''}{C''+C'-C'^2}$ and $b=\frac{C'x}{\sqrt{2C''}}$, one has \begin{equation}\label{Eq:GOE expectation for a and b N=2 sphere} \begin{split} &\quad {\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{\lambda_{N+1}^2}{2} - \frac{C''\Big(\lambda_{N+1}-\frac{C'x}{\sqrt{2C''}} \Big)^2}{C''+C'-C'^2} \bigg]\bigg\}\\ &=\frac{1}{\pi\sqrt{2}}\Big( \frac{C''+C'-C'^2}{C''} \Big)^{1/2}\bigg\{ \frac{C'^2(x^2-1)+C'}{C''}\pi\Phi\Big(\frac{C'x}{\sqrt{2C''+C'-C'^2}} \Big) \\ &\quad+ \frac{xC'\sqrt{2C''+C'-C'^2}}{C''\sqrt{2}}\sqrt{\pi}e^{-\frac{C'^2x^2}{2(2C''+C'-C'^2)}} \\ &\quad + \frac{2\pi\sqrt{C''}}{\sqrt{3C''+C'-C'^2}}e^{-\frac{C'^2x^2}{2(3C''+C'-C'^2)}}\Phi\Big(\frac{xC'\sqrt{C''}}{\sqrt{(2C''+C'-C'^2)(3C''+C'-C'^2)}} \Big) \bigg\}. \end{split} \end{equation} Let $h(x)$ be the density function of the distribution of the height of a local maximum, i.e. $h(x)=-F'_{t_0}(x)$. By Theorem \ref{Thm:Palm distr sphere}, together with (\ref{Eq:GOE expectation for a=1 N=2 sphere}) and (\ref{Eq:GOE expectation for a and b N=2 sphere}), we obtain \begin{equation}\label{Eq:h on S^2} \begin{split} h(x)&=\bigg(\frac{C'}{2C''}+\Big(\frac{C''}{3C''+C'}\Big)^{1/2}\bigg)^{-1} \bigg\{ \frac{C'^2(x^2-1)+C'}{C''}\phi(x)\Phi\Big(\frac{C'x}{\sqrt{2C''+C'-C'^2}} \Big) \\ &\quad+ \frac{xC'\sqrt{2C''+C'-C'^2}}{2\pi C''}e^{-\frac{(2C''+C')x^2}{2(2C''+C'-C'^2)}} \\ &\quad + \frac{\sqrt{2C''}}{\sqrt{\pi}\sqrt{3C''+C'-C'^2}}e^{-\frac{(3C''+C')x^2}{2(3C''+C'-C'^2)}} \Phi\Big(\frac{xC'\sqrt{C''}}{\sqrt{(2C''+C'-C'^2)(3C''+C'-C'^2)}} \Big) \bigg\}, \end{split} \end{equation} and hence $F_{t_0}(u)=\int_u^\infty h(x)dx$.
Figure \ref{Fig:h on S^2} shows several examples. The extreme case of $({\bf C}3')$, $C''+C'-C'^2=0$, is obtained when $C(t,s) = {\langle} t, s \rangle^n$, $n \ge 2$. Shown in solid red is the case $n=2$, which simplifies to \[ h(x) = (2x^2-1)\phi(x)\Phi(\sqrt{2}x) + \frac{x \sqrt{2}}{2\pi} e^{-\frac{3x^2}{2}} + \frac{1}{\sqrt{\pi}}e^{-x^2} \Phi(x). \] \begin{figure}
\caption{Density function $h(x)$ of the distribution $F_{t_0}$ for isotropic Gaussian fields on $\mathbb{S}^2$.}
\label{Fig:h on S^2}
\end{figure} It can be seen from both (\ref{Eq:h on S^2}) and Figure~\ref{Fig:h on S^2} that $h(x) \to \phi(x)$ if $ \max (C', C'^2)/C'' \to 0$. \end{example}
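As in the Euclidean case, \eqref{Eq:h on S^2} is straightforward to evaluate numerically. The following Python sketch (an illustration only; NumPy and SciPy are assumed and the function names are ours) implements the density on $\mathbb{S}^2$ and checks, for the extreme case $C(t,s)={\langle} t, s \rangle^2$ (so that $C'=C''=2$ and $C''+C'-C'^2=0$), that it integrates to one.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def h_S2(x, C1, C2):
    # Density of the height of a local maximum on S^2; C1 = C', C2 = C''
    d = C2 + C1 - C1**2                      # must be >= 0 by condition (C3')
    A = C1 / (2 * C2) + np.sqrt(C2 / (3 * C2 + C1))
    t1 = (C1**2 * (x**2 - 1) + C1) / C2 * norm.pdf(x) * norm.cdf(C1 * x / np.sqrt(C2 + d))
    t2 = x * C1 * np.sqrt(C2 + d) / (2 * np.pi * C2) \
         * np.exp(-(2 * C2 + C1) * x**2 / (2 * (C2 + d)))
    t3 = np.sqrt(2 * C2) / np.sqrt(np.pi * (2 * C2 + d)) \
         * np.exp(-(3 * C2 + C1) * x**2 / (2 * (2 * C2 + d))) \
         * norm.cdf(x * C1 * np.sqrt(C2) / np.sqrt((C2 + d) * (2 * C2 + d)))
    return (t1 + t2 + t3) / A

# Case C(t,s) = <t,s>^2 on S^2, i.e. C' = C'' = 2:
print(quad(lambda x: h_S2(x, 2.0, 2.0), -np.inf, np.inf)[0])   # approximately 1
\end{verbatim}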
\begin{theorem}\label{Thm:Overshoot sphere exact}
Let $\{f(t): t\in \mathbb{S}^N\}$, $N\ge 2$, be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$, $({\bf C}2')$ and $({\bf C}3')$. Then for each $t_0\in \mathbb{S}^N$ and $u, v>0$, \begin{equation*} \begin{split} \bar{F}_{t_0}(u,v)= \left\{
\begin{array}{l l}
\frac{\int_{u+v}^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\Big\{ \exp\Big[\frac{\lambda_{N+1}^2}{2} - \frac{C''\big(\lambda_{N+1}-\frac{C'x}{\sqrt{2C''}} \big)^2}{C''+C'-C'^2} \Big]\Big\}dx}{\int_{v}^\infty \phi(x){\mathbb E}_{GOE}^{N+1}\Big\{ \exp\Big[\frac{\lambda_{N+1}^2}{2} - \frac{C''\big(\lambda_{N+1}-\frac{C'x}{\sqrt{2C''}} \big)^2}{C''+C'-C'^2} \Big]\Big\}dx} & \quad \text{if $C''+C'-C'^2> 0$},\\
\frac{\int_{u+v}^\infty \phi(x){\mathbb E}_{GOE}^{N}\big\{ \big(\prod_{i=1}^N|\lambda_i-\frac{C'x}{\sqrt{2C''}}|\big) \mathbbm{1}_{\{\lambda_N<\frac{C'x}{\sqrt{2C''}}\}} \big\} dx}{\int_v^\infty \phi(x){\mathbb E}_{GOE}^{N}\big\{ \big(\prod_{i=1}^N|\lambda_i-\frac{C'x}{\sqrt{2C''}}|\big) \mathbbm{1}_{\{\lambda_N<\frac{C'x}{\sqrt{2C''}}\}} \big\} dx} & \quad \text{if $C''+C'-C'^2= 0$},
\end{array} \right. \end{split} \end{equation*} where $C'$ and $C''$ are defined in (\ref{Def:C' and C''}). \end{theorem} \begin{proof}\ The result follows immediately by applying Theorem \ref{Thm:Palm distr manifolds} and Lemma \ref{Lem:expectation of local max above u sphere}. \end{proof}
Because the exact expression in Theorem \ref{Thm:Overshoot sphere exact} may be complicated for large $N$, we now derive a tight approximation to it, which is analogous to Corollary \ref{Cor:overshoot 1D} for the Euclidean case.
Let $\chi(A_u(f,\mathbb{S}^N))$ be the Euler characteristic of the excursion set $A_u(f,\mathbb{S}^N) = \{t\in \mathbb{S}^N: f(t)> u\}$. Let $\omega_j = {\rm Vol}(\mathbb{S}^j)$, the spherical area of the $j$-dimensional unit sphere $\mathbb{S}^j$, i.e., $\omega_j=2\pi^{(j+1)/2}/\Gamma(\frac{j+1}{2})$. The lemma below provides the formula for the expected Euler characteristic of the excursion set. \begin{lemma}\label{Lem:MEC sphere} {\rm [Cheng and Xiao (2014)].} Let $\{f(t): t\in \mathbb{S}^N\}$, $N\ge 2$, be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$ and $({\bf C}2')$. Then \begin{equation*} \begin{split} {\mathbb E}\{\chi(A_u(f,\mathbb{S}^N))\} = \sum_{j=0}^N (C')^{j/2} \mathcal{L}_j (\mathbb{S}^N) \rho_j(u), \end{split} \end{equation*} where $C'$ is defined in (\ref{Def:C' and C''}), $\rho_0(u)= 1-\Phi(u)$, $\rho_j(u) = (2\pi)^{-(j+1)/2} H_{j-1} (u) e^{-u^2/2}$ with Hermite polynomials $H_{j-1}$ for $j\geq 1$ and, for $j=0, \ldots, N$, \begin{equation}\label{Eq:L-K curvature} \begin{split} \mathcal{L}_j (\mathbb{S}^N) = \left\{
\begin{array}{l l}
2 \binom{N}{j}\frac{\omega_N}{\omega_{N-j}} & \quad \text{if $N-j$ is even}\\
0 & \quad \text{otherwise}
\end{array} \right. \end{split} \end{equation} are the Lipschitz-Killing curvatures of $\mathbb{S}^N$. \end{lemma}
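For numerical evaluation of Lemma \ref{Lem:MEC sphere}, and of \eqref{Eq:overshoot sphere} below, the following Python sketch (an illustration only; NumPy and SciPy are assumed and the function names are ours) implements $\omega_j$, $\mathcal{L}_j(\mathbb{S}^N)$ and $\rho_j$, and checks that the formula tends to $\chi(\mathbb{S}^N)$ (equal to $2$ for even $N$ and $0$ for odd $N$) as $u\to-\infty$, when the excursion set fills the whole sphere.
\begin{verbatim}
import numpy as np
from math import comb
from scipy.special import gamma, eval_hermitenorm
from scipy.stats import norm

def omega(j):
    return 2 * np.pi**((j + 1) / 2) / gamma((j + 1) / 2)   # Vol(S^j)

def L(j, N):
    # Lipschitz-Killing curvatures of S^N
    return 2 * comb(N, j) * omega(N) / omega(N - j) if (N - j) % 2 == 0 else 0.0

def rho(j, u):
    # EC densities: rho_0 = 1 - Phi, rho_j = (2 pi)^{-(j+1)/2} H_{j-1}(u) exp(-u^2/2)
    if j == 0:
        return norm.sf(u)
    return (2 * np.pi)**(-(j + 1) / 2) * eval_hermitenorm(j - 1, u) * np.exp(-u**2 / 2)

def expected_EC(u, N, C1):
    # Expected Euler characteristic of the excursion set above u, with C1 = C'
    return sum(C1**(j / 2) * L(j, N) * rho(j, u) for j in range(N + 1))

print(expected_EC(-10.0, 2, 1.0), expected_EC(-10.0, 3, 1.0))   # ~ 2 and ~ 0
\end{verbatim}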
\begin{theorem}\label{Thm:overshoot sphere} Let $\{f(t): t\in \mathbb{S}^N\}$, $N\ge 2$, be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$ and $({\bf C}2')$. Then for each $t_0\in \mathbb{S}^N$ and each fixed $u>0$, there exists $\alpha>0$ such that as $v\to \infty$, \begin{equation}\label{Eq:overshoot sphere} \begin{split} \bar{F}_{t_0}(u,v)=\frac{\sum_{j=0}^N (C')^{j/2} \mathcal{L}_j (\mathbb{S}^N) \rho_j(u+v)}{\sum_{j=0}^N (C')^{j/2} \mathcal{L}_j (\mathbb{S}^N) \rho_j(v)}(1+o(e^{-\alpha v^2})), \end{split} \end{equation} where $C'$ is defined in (\ref{Def:C' and C''}), $\rho_j(u)$ and $\mathcal{L}_j (\mathbb{S}^N)$ are as in Lemma \ref{Lem:MEC sphere}. \end{theorem}
\begin{remark} Note that \eqref{Eq:overshoot sphere} depends on the covariance function only through its first derivative $C'$. In comparison with Corollary \ref{Cor:overshoot 1D} for the Euclidean case, only the highest-order term of the expected Euler characteristic expansion appears there, because the boundary of $T$ is not taken into account. On the sphere, all terms in the expansion are needed, since the sphere has no boundary. \end{remark}
\begin{proof}\ By Theorem \ref{Thm:Palm distr manifolds}, \begin{equation*}
\bar{F}_{t_0}(u,v)= \frac{\int_{u+v}^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}{\int_v^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t_0)|f(t_0)=x, \nabla f(t_0)=0\}dx}(1+o(e^{-\alpha v^2})). \end{equation*} Since $f$ is isotropic, integrating the numerator and denominator above over $\mathbb{S}^N$, we obtain \begin{equation*} \begin{split}
\bar{F}_{t_0}(u,v)&= \frac{\int_{\mathbb{S}^N}\int_{u+v}^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t)|f(t)=x, \nabla f(t)=0\}dx dt}{\int_{\mathbb{S}^N} \int_v^\infty \phi(x) {\mathbb E}\{{\rm det} \nabla^2 f(t)|f(t)=x, \nabla f(t)=0\}dxdt}(1+o(e^{-\alpha v^2}))\\ &=\frac{{\mathbb E}\{\chi(A_{u+v}(f,\mathbb{S}^N))\}}{{\mathbb E}\{\chi(A_v(f,\mathbb{S}^N))\}}(1+o(e^{-\alpha v^2})), \end{split} \end{equation*} where the last line comes from applying the Kac-Rice Metatheorem to the Euler characteristic of the excursion set, see Adler and Taylor (2007, pp. 315-316). The result then follows from Lemma \ref{Lem:MEC sphere}. \end{proof}
\section{Proofs and Auxiliary Results}\label{Section:proofs of main results} \subsection{Proofs for Section \ref{Section:general Euclidean}} For $u>0$, let $\mu(t_0, \varepsilon)$, $\mu_N(t_0, \varepsilon)$, $\mu_N^u(t_0, \varepsilon)$ and $\mu_N^{u-}(t_0, \varepsilon)$ be the number of critical points, the number of local maxima, the number of local maxima above $u$ and the number of local maxima below $u$ in $U_{t_0}(\varepsilon)$ respectively. More precisely, \begin{equation}\label{Def:various mu's} \begin{split} \mu(t_0, \varepsilon) &=\#\{t\in U_{t_0}(\varepsilon): \nabla f(t)=0\},\\ \mu_N(t_0, \varepsilon) &=\#\{t\in U_{t_0}(\varepsilon): \nabla f(t)=0, {\rm index}(\nabla^2 f(t))=N\},\\ \mu_N^u(t_0, \varepsilon) &=\#\{t\in U_{t_0}(\varepsilon): f(t)> u, \nabla f(t)=0, {\rm index}(\nabla^2 f(t))=N\},\\ \mu_N^{u-}(t_0, \varepsilon) &=\#\{t\in U_{t_0}(\varepsilon): f(t)\le u, \nabla f(t)=0, {\rm index}(\nabla^2 f(t))=N\}, \end{split} \end{equation} where ${\rm index}(\nabla^2 f(t))$ is the number of negative eigenvalues of $\nabla^2 f(t)$.
In order to prove Theorem \ref{Thm:Palm distr}, we need the following lemma, which shows that the factorial moment of the number of critical points over the cube of side length $\varepsilon$ decays faster than its expectation as $\varepsilon$ tends to 0. Our proof is based on arguments similar to those in the proof of Lemma 3 in Piterbarg (1996). \begin{lemma}\label{Lem:Piterbarg} Let $\{f(t): t\in T\}$ be a Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each fixed $t_0\in \overset{\circ}{T}$, as $\varepsilon\to 0$, \begin{equation*} {\mathbb E} \{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\}= o(\varepsilon^N). \end{equation*} \end{lemma} \begin{proof}\ By the Kac-Rice formula for factorial moments [cf. Theorem 11.5.1 in Adler and Taylor (2007)], \begin{equation}\label{Eq:factorial moment} \begin{split} {\mathbb E}\{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\}=\int_{U_{t_0}(\varepsilon)}\int_{U_{t_0}(\varepsilon)} E_1(t,s) p_{\nabla f(t), \nabla f(s)}(0,0) dtds, \end{split} \end{equation} where $$
E_1(t,s) = {\mathbb E}\{|{\rm det}\nabla^2 f(t)||{\rm det}\nabla^2 f(s)|| \nabla f(t)=\nabla f(s)=0\}. $$ By Taylor's expansion, \begin{equation}\label{Eq:Taylor expansion}
\nabla f(s)= \nabla f(t) + \nabla^2 f(t)(s-t)^T + \|s-t\|^{1+\eta}\mathbf{Z}_{t,s}, \end{equation} where $\mathbf{Z}_{t,s}=(Z_{t,s}^1, \ldots, Z_{t,s}^N)^T$ is a Gaussian vector field, with properties to be specified. In particular, by condition ({\bf C}1), for $\varepsilon$ small enough, \begin{equation*}
\sup_{t,s\in U_{t_0}(\varepsilon), \, t\ne s}{\mathbb E}\|\mathbf{Z}_{t,s}\|^2 \leq C_1, \end{equation*} where $C_1$ is some positive constant. Therefore, we can write $$
E_1(t,s) = {\mathbb E}\{|{\rm det}\nabla^2 f(t)||{\rm det}\nabla^2 f(s)|| \nabla f(t)=0, \nabla^2 f(t)(s-t)^T =- \|s-t\|^{1+\eta}\mathbf{Z}_{t,s}\}. $$ Note that the determinant of the matrix $\nabla^2 f(t)$ is equal to the determinant of the matrix \begin{equation*} \begin{split} \left( \begin{array}{cccc} 1 & -(s_1-t_1) & \cdots & -(s_N-t_N) \\ 0 & & & \\ \vdots & & \nabla^2 f(t) &\\ 0 & & & \end{array} \right). \end{split} \end{equation*}
For any $i=2,\ldots, N+1$, multiply the $i$th column of this matrix by $(s_{i-1}-t_{i-1})/\|s-t\|^2$, take the sum of all such columns and add the result to the first column, obtaining the matrix \begin{equation*} \begin{split} \left( \begin{array}{cccc} 0 & -(s_1-t_1) & \cdots & -(s_N-t_N) \\
-\|s-t\|^{-1+\eta}Z_{t,s}^1 & & & \\ \vdots & & \nabla^2 f(t) &\\
-\|s-t\|^{-1+\eta}Z_{t,s}^N & & & \end{array} \right), \end{split} \end{equation*}
whose determinant is still equal to the determinant of $\nabla^2 f(t)$. Let $r=\max_{1\leq i\leq N}|s_i-t_i|$, \begin{equation*} \begin{split} A_{t,s}=\left( \begin{array}{cccc} 0 & -(s_1-t_1)/r & \cdots & -(s_N-t_N)/r \\ Z_{t,s}^1 & & & \\ \vdots & & \nabla^2 f(t) &\\ Z_{t,s}^N & & & \end{array} \right). \end{split} \end{equation*} Using properties of a determinant, it follows that \begin{equation*}
|{\rm det} \nabla^2 f(t)|= r\|s-t\|^{-1+\eta}|{\rm det}A_{t,s}|\leq \|s-t\|^{\eta}|{\rm det}A_{t,s}|. \end{equation*}
Let $e_{t,s}=(s-t)^T/\|s-t\|$, then we obtain \begin{equation}\label{Eq:E1 and E2}
E_1(t,s)\leq \|s-t\|^{\eta}E_2(t,s), \end{equation} where \begin{equation*} \begin{split}
E_2(t,s) &= {\mathbb E}\{|{\rm det}A_{t,s}||{\rm det}\nabla^2 f(s)|| \nabla f(t)=0, \nabla^2 f(t)(s-t)^T =- \|s-t\|^{1+\eta}\mathbf{Z}_{t,s}\}\\
&= {\mathbb E}\{|{\rm det}A_{t,s}||{\rm det}\nabla^2 f(s)|| \nabla f(t)=0, \nabla^2 f(t)e_{t,s} + \|s-t\|^\eta\mathbf{Z}_{t,s} =0\}. \end{split} \end{equation*} By ({\bf C}1) and ({\bf C}2), there exists $C_2>0$ such that \begin{equation*} \sup_{t,s\in U_{t_0}(\varepsilon), \, t\ne s} E_2(t,s) \le C_2. \end{equation*} By (\ref{Eq:factorial moment}) and (\ref{Eq:E1 and E2}), \begin{equation*} \begin{split}
{\mathbb E}\{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\}\le C_2\int_{U_{t_0}(\varepsilon)}\int_{U_{t_0}(\varepsilon)} \|s-t\|^{\eta} p_{\nabla f(t), \nabla f(s)}(0,0) dtds. \end{split} \end{equation*}
It is obvious that \begin{equation*} p_{\nabla f(t), \nabla f(s)}(0,0) \le \frac{1}{(2\pi)^N \sqrt{{\rm det Cov}(\nabla f(t), \nabla f(s))}}. \end{equation*}
Applying Taylor's expansion (\ref{Eq:Taylor expansion}), we obtain that as $\|s-t\| \to 0$, \begin{equation*} \begin{split} &{\rm det Cov}(\nabla f(t), \nabla f(s)) \\
&\quad = {\rm det Cov}(\nabla f(t), \nabla f(t) + \nabla^2 f(t)(s-t)^T + \|s-t\|^{1+\eta}\mathbf{Z}_{t,s})\\
&\quad = {\rm det Cov}(\nabla f(t), \nabla^2 f(t)(s-t)^T + \|s-t\|^{1+\eta}\mathbf{Z}_{t,s})\\
&\quad = \|s-t\|^{2N} {\rm det Cov}(\nabla f(t), \nabla^2 f(t)e_{t,s} + \|s-t\|^\eta\mathbf{Z}_{t,s})\\
&\quad = \|s-t\|^{2N} {\rm det Cov}(\nabla f(t), \nabla^2 f(t)e_{t,s})(1+o(1)), \end{split} \end{equation*} where the last determinant is bounded away from zero uniformly in $t$ and $s$ due to the regularity condition ({\bf C}2). Therefore, there exists $C_3>0$ such that \begin{equation*} \begin{split}
{\mathbb E} \{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\} \leq C_3\int_{U_{t_0}(\varepsilon)}\int_{U_{t_0}(\varepsilon)}\frac{1}{\|t-s\|^{N-\eta}}dtds, \end{split} \end{equation*} where $C_3$ and $\eta$ are some positive constants. Recall the elementary inequality $$ \frac{x_1+\cdots +x_N}{N} \geq (x_1\cdots x_N)^{1/N}, \quad \forall x_1, \ldots, x_N>0. $$ It follows that \begin{equation*} \begin{split}
{\mathbb E} \{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\} &\leq C_3N^{\eta-N}\int_{U_{t_0}(\varepsilon)}\int_{U_{t_0}(\varepsilon)}\prod_{i=1}^N |t_i-s_i|^{\frac{\eta}{N}-1}dtds\\ &= C_3N^{\eta-N} \bigg(\int_{-\varepsilon/2}^{\varepsilon/2} \int_{-\varepsilon/2}^{\varepsilon/2} |x-y|^{\frac{\eta}{N}-1}dxdy\bigg)^N \\ &= C_3N^\eta\bigg(\frac{2N}{\eta(\eta+N)} \bigg)^N \varepsilon^{N+\eta} = o(\varepsilon^N). \end{split} \end{equation*} \end{proof}
\begin{proof}{\bf of Theorem \ref{Thm:Palm distr}}\ By the definition in (\ref{Eq:Palm Ut0}), \begin{equation}\label{Eq:conditional prob form of Palm} \begin{split} F_{t_0}(u) =\lim_{\varepsilon\to 0} \frac{ {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1\}}{{\mathbb P}\{ \mu_N(t_0, \varepsilon)\geq 1\}}. \end{split} \end{equation} Let $p_i={\mathbb P}\{\mu_N(t_0, \varepsilon)=i\}$, then ${\mathbb P}\{\mu_N(t_0, \varepsilon) \geq 1 \} = \sum_{i=1}^\infty p_i$ and ${\mathbb E} \{\mu_N(t_0, \varepsilon) \} = \sum_{i=1}^\infty ip_i$, it follows that \begin{equation*} \begin{split} {\mathbb E} \{\mu_N(t_0, \varepsilon) \} - {\mathbb P}\{\mu_N(t_0, \varepsilon) \geq 1 \} &= \sum_{i=2}^\infty (i-1)p_i \\ &\leq \sum_{i=2}^\infty \frac{i(i-1)}{2}p_i = \frac{1}{2}{\mathbb E} \{\mu_N(t_0, \varepsilon)(\mu_N(t_0, \varepsilon) -1)\}. \end{split} \end{equation*} Therefore, by Lemma \ref{Lem:Piterbarg}, as $\varepsilon\to 0$, \begin{equation}\label{Eq:estimate prob of local max} \begin{split} {\mathbb P}\{\mu_N(t_0, \varepsilon) \geq 1 \} = {\mathbb E} \{\mu_N(t_0, \varepsilon) \} + o(\varepsilon^N). \end{split} \end{equation} Similarly, \begin{equation}\label{Eq:estimate prob of local max above u} \begin{split} {\mathbb P}\{\mu_N^u(t_0, \varepsilon) \geq 1 \} = {\mathbb E} \{\mu_N^u(t_0, \varepsilon) \} + o(\varepsilon^N). \end{split} \end{equation} Next we show that \begin{equation}\label{Eq:difference in numerator} \begin{split}
|{\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1\}-{\mathbb P}\{\mu_N^u(t_0, \varepsilon) \geq 1 \}| = o(\varepsilon^N). \end{split} \end{equation} Roughly speaking, the probability that there exists a local maximum and the field exceeds $u$ at $t_0$ is approximately the same as the probability that there is at least one local maximum exceeding $u$. This is because, in the limit, the local maximum occurs at $t_0$ and is greater than $u$. We give the rigorous proof below.
Note that for any events $A$, $B$, $C$ such that $C\subset B$, \begin{equation*}
|{\mathbb P}(AB)-{\mathbb P}(C)| \leq {\mathbb P}(ABC^c) + {\mathbb P}(A^cC). \end{equation*} By this inequality, to prove (\ref{Eq:difference in numerator}), it suffices to show \begin{equation*} {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1, \mu_N^u(t_0, \varepsilon) = 0\} + {\mathbb P}\{f(t_0)\leq u, \mu_N^u(t_0, \varepsilon) \geq 1 \} = o(\varepsilon^N), \end{equation*} where the first probability above is the probability that the field exceeds $u$ at $t_0$ but all local maxima are below $u$, while the second one is the probability that the field does not exceed $u$ at $t_0$ but all local maxima exceed $u$.
Recalling the definition of $\mu_N^{u-}(t_0, \varepsilon)$ in (\ref{Def:various mu's}), we have \begin{equation}\label{Eq:the first small prob} \begin{split} {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1, \mu_N^u(t_0, \varepsilon) = 0\} &\leq {\mathbb P}\{f(t_0)>u, \mu_N^{u-}(t_0, \varepsilon)\geq 1\}\\ &= {\mathbb E} \{\mu_N^{u-}(t_0, \varepsilon)\mathbbm{1}_{\{f(t_0)>u\}} \} + o(\varepsilon^N), \end{split} \end{equation} where the second line follows from an argument similar to that used for (\ref{Eq:estimate prob of local max}). By the Kac-Rice metatheorem, \begin{equation}\label{Eq:conditional expectation of mu} \begin{split}
&\quad {\mathbb E} \{\mu_N^{u-}(t_0, \varepsilon)\mathbbm{1}_{\{f(t_0)>u\}} \} =\int_u^\infty {\mathbb E}\{\mu_N^{u-}(t_0, \varepsilon)|f(t_0)=x\}p_{f(t_0)}(x) dx\\
&= \int_u^\infty p_{f(t_0)}(x) dx\int_{U_{t_0}(\varepsilon)} p_{\nabla f(t)}(0|f(t_0)=x) \\
&\quad \times {\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}\mathbbm{1}_{\{f(t)\leq u\}} | \nabla f(t)=0, f(t_0)=x\}dt. \end{split} \end{equation} By $({\bf C}1)$ and $({\bf C}2)$, for small $\varepsilon>0$, \begin{equation*} \begin{split}
\sup_{t\in U_{t_0}(\varepsilon)}p_{\nabla f(t)}(0|f(t_0)=x)\leq \sup_{t\in U_{t_0}(\varepsilon)} \frac{1}{(2\pi)^{N/2} ({\rm det Cov}(\nabla f(t)|f(t_0)))^{1/2}}\leq C \end{split} \end{equation*}
for some positive constant $C$. On the other hand, by continuity, conditioning on $f(t_0)=x>u$, $\sup_{t\in U_{t_0}(\varepsilon)}\mathbbm{1}_{\{f(t)\leq u\}}$ tends to 0 a.s. as $\varepsilon \to 0$. Therefore, for each $x>u$, by the dominated convergence theorem (we may choose $\sup_{t\in U_{t_0}(\varepsilon_0)} |{\rm det} \nabla^2 f(t)|$ as the dominating function for some $\varepsilon_0>0$), as $\varepsilon \to 0$, \begin{equation*} \begin{split}
\sup_{t\in U_{t_0}(\varepsilon)}{\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}\mathbbm{1}_{\{f(t)\leq u\}} | \nabla f(t)=0, f(t_0)=x\} \to 0. \end{split} \end{equation*} Plugging these facts into (\ref{Eq:conditional expectation of mu}) and applying the dominated convergence theorem, we obtain that as $\varepsilon\to 0$, \begin{equation*} \begin{split} &\frac{1}{\varepsilon^N}{\mathbb E} \{\mu_N^{u-}(t_0, \varepsilon)\mathbbm{1}_{\{f(t_0)>u\}} \}\\
&\quad \le \frac{C}{\varepsilon^N} \int_u^\infty \sup_{t\in U_{t_0}(\varepsilon)}{\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}\mathbbm{1}_{\{f(t)\leq u\}} | \nabla f(t)=0, f(t_0)=x\}\\ &\qquad \qquad \qquad \quad \times p_{f(t_0)}(x) dx\int_{U_{t_0}(\varepsilon)} dt\\ &\quad \to 0, \end{split} \end{equation*} which implies ${\mathbb E} \{\mu_N^{u-}(t_0, \varepsilon)\mathbbm{1}_{\{f(t_0)>u\}} \} = o(\varepsilon^N)$. By (\ref{Eq:the first small prob}), \begin{equation*} {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1, \mu_N^u(t_0, \varepsilon) = 0\} = o(\varepsilon^N). \end{equation*} Similar arguments yield \begin{equation*} {\mathbb P}\{f(t_0)\leq u, \mu_N^u(t_0, \varepsilon) \geq 1 \} = o(\varepsilon^N). \end{equation*} Hence (\ref{Eq:difference in numerator}) holds and therefore, \begin{equation}\label{Eq:limiting form of Palm distr} \begin{split} F_{t_0}(u) &=\lim_{\varepsilon\to 0} \frac{ {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1\}}{{\mathbb P}\{ \mu_N(t_0, \varepsilon)\geq 1\}} =\lim_{\varepsilon\to 0} \frac{ {\mathbb P}\{\mu_N^u(t_0, \varepsilon) \geq 1 \} + o(\varepsilon^N)}{{\mathbb P}\{ \mu_N(t_0, \varepsilon)\geq 1\}}\\ &=\lim_{\varepsilon\to 0} \frac{ {\mathbb E}\{\mu_N^u(t_0, \varepsilon) \} + o(\varepsilon^N)}{{\mathbb E}\{ \mu_N(t_0, \varepsilon)\} + o(\varepsilon^N)}, \end{split} \end{equation} where the last equality is due to (\ref{Eq:estimate prob of local max}) and (\ref{Eq:estimate prob of local max above u}). By the Kac-Rice metatheorem and Lebesgue's continuity theorem, \begin{equation*} \begin{split} &\lim_{\varepsilon\to 0}\frac{1}{\varepsilon^N}{\mathbb E}\{\mu_N^u(t_0, \varepsilon)\} \\ &\quad =\lim_{\varepsilon\to 0}
\frac{1}{\varepsilon^N}\int_{U_{t_0}(\varepsilon)} {\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{f(t)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}|\nabla f(t)=0\}p_{\nabla f(t)}(0)dt\\
&\quad = {\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}p_{\nabla f(t_0)}(0), \end{split} \end{equation*} and similarly, \begin{equation*} \begin{split}
\lim_{\varepsilon\to 0}\frac{1}{\varepsilon^N}{\mathbb E}\{\mu_N(t_0, \varepsilon)\} ={\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}} | \nabla f(t_0)=0\}p_{\nabla f(t_0)}(0). \end{split} \end{equation*} Plugging these into (\ref{Eq:limiting form of Palm distr}) yields (\ref{Eq:Palm distr Euclidean}). \end{proof}
\begin{proof}{\bf of Theorem \ref{Thm:Palm distr high level}}\ By Theorem \ref{Thm:overshoot distr}, \begin{equation}\label{Eq:Palm distri u and v} \begin{split}
\bar{F}_{t_0}(u,v)= \frac{\int_{u+v}^\infty \phi(x){\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}| f(t_0)=x, \nabla f(t_0)=0\} dx}{\int_v^\infty \phi(x){\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}| f(t_0)=x, \nabla f(t_0)=0\} dx}. \end{split} \end{equation} We shall estimate the conditional expectations above. Since $f$ has unit variance, taking derivatives gives \begin{equation*} {\mathbb E}\{ f(t_0)\nabla^2 f(t_0) \} = -{\rm Cov}(\nabla f(t_0)) = -\Lambda(t_0). \end{equation*} Since $\Lambda(t_0)$ is positive definite, there exists a unique positive definite matrix $Q_{t_0}$ such that $Q_{t_0}\Lambda(t_0)Q_{t_0}= I_N$ ($Q_{t_0}$ is the inverse square root of $\Lambda(t_0)$), where $I_N$ is the $N\times N$ unit matrix. Hence \begin{equation*} {\mathbb E}\{f(t_0)(Q_{t_0} \nabla^2 f(t_0) Q_{t_0})\} = -Q_{t_0}\Lambda(t_0)Q_{t_0} = -I_N. \end{equation*} By the conditional formula for Gaussian random variables, \begin{equation*}
{\mathbb E}\{Q_{t_0} \nabla^2 f(t_0) Q_{t_0} | f(t_0)=x, \nabla f(t_0)=0 \} = -xI_N. \end{equation*} Making the change of variable $$W(t_0) = Q_{t_0}\nabla^2 f(t_0)Q_{t_0} + xI_N,$$
where $W(t_0) = (W_{ij}(t_0))_{1\leq i, j\leq N}$. Then $(W(t_0)|f(t_0)=x, \nabla f(t_0)=0)$ is a Gaussian matrix whose mean is 0 and covariance is the same as that of $(Q_{t_0} \nabla^2 f(t_0) Q_{t_0} | f(t_0)=x, \nabla f(t_0)=0)$. Denote the density of Gaussian vector $((W_{ij}(t_0))_{1\leq i\leq j\leq N}|f(t_0)=x, \nabla f(t_0)=0)$ by $h_{t_0}(w)$, $w=(w_{ij})_{1\leq i\leq j\leq N}\in {\mathbb R}^{N(N+1)/2}$, then \begin{equation}\label{Eq:expectation of det} \begin{split}
{\mathbb E} & \{\text{det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0}) \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))= N\}} | f(t_0)=x, \nabla f(t_0)=0 \}\\
& = {\mathbb E} \{\text{det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0}) \mathbbm{1}_{\{{\rm index}(Q_{t_0}\nabla^2 f(t_0)Q_{t_0})= N\}} | f(t_0)=x, \nabla f(t_0)=0 \} \\ & = \int_{w:\, {\rm index}((w_{ij})-xI_N) =N} \text{det} \Big((w_{ij})-xI_N \Big) h_{t_0}(w) \, dw, \end{split} \end{equation} where $(w_{ij})$ is the abbreviation of matrix $(w_{ij})_{1\leq i, j\leq k}$. Note that there exists a constant $c>0$ such that
$${\rm index}((w_{ij})-xI_N) =N, \quad \forall \|(w_{ij})\|:= \bigg(\sum_{i,j=1}^N w_{ij}^2\bigg)^{1/2} <\frac{x}{c}.$$ Thus we can write (\ref{Eq:expectation of det}) as \begin{equation}\label{Eq:expectation of det 2} \begin{split} &\int_{{\mathbb R}^{N(N+1)/2}} \text{det} \Big((w_{ij})-xI_N \Big) h_{t_0}(w) dw - \int_{w:\, {\rm index}((w_{ij})-xI_N) <N} \text{det} \Big((w_{ij})-xI_N \Big) h_{t_0}(w) \, dw\\
& \quad = {\mathbb E}\{\text{det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0}) | f(t_0)=x, \nabla f(t_0)=0 \} + Z(t,x), \end{split} \end{equation} where $Z(t,x)$ is the second integral in the first line of (\ref{Eq:expectation of det 2}) and it satisfies \begin{equation}\label{Eq:estimate Z(t,x)}
|Z(t,x)| \leq \int_{\|(w_{ij})\|\geq\frac{x}{c}} \bigg|\text{det} \Big((w_{ij})-xI_N \Big)\bigg| h_{t_0}(w) dw. \end{equation}
By the non-degeneracy condition ({\bf C}2), there exists a constant $\alpha'>0$ such that as $\|(w_{ij})\| \to \infty$, $h_{t_0}(w) = o(e^{-\alpha' \|(w_{ij})\|^2})$. On the other hand, the determinant inside the integral in (\ref{Eq:estimate Z(t,x)}) is a polynomial in $w_{ij}$ and $x$, and it does not affect the exponential decay; hence as $x\to \infty$, $|Z(t,x)| = o(e^{-\alpha x^2})$ for some constant $\alpha>0$. Combining this with (\ref{Eq:expectation of det}) and (\ref{Eq:expectation of det 2}), and noting that $$ {\rm det} \nabla^2 f(t_0) = {\rm det} (Q_{t_0}^{-1}Q_{t_0}\nabla^2 f(t_0)Q_{t_0}Q_{t_0}^{-1})={\rm det}(\Lambda(t_0)){\rm det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0}), $$ we obtain that, as $x\to \infty$, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}| f(t_0)=x, \nabla f(t_0)=0\}\\
& = (-1)^N{\rm det}(\Lambda(t_0)){\mathbb E}\{{\rm det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0})\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}| f(t_0)=x, \nabla f(t_0)=0\}\\
& = (-1)^N{\rm det}(\Lambda(t_0)){\mathbb E}\{{\rm det} (Q_{t_0}\nabla^2 f(t_0)Q_{t_0})| f(t_0)=x, \nabla f(t_0)=0\} + o(e^{-\alpha x^2})\\
& = (-1)^N{\mathbb E}\{{\rm det} \nabla^2 f(t_0)| f(t_0)=x, \nabla f(t_0)=0\} + o(e^{-\alpha x^2}) \end{split} \end{equation*} Plugging this into (\ref{Eq:Palm distri u and v}) yields (\ref{Eq:Palm distr high level}). \end{proof}
\begin{lemma}\label{Lem:conditional expectation for N} Under the assumptions in Theorem \ref{Thm:Palm distr high level}, as $x\to \infty$, \begin{equation}\label{Eq:det hessian conditional} \begin{split}
{\mathbb E}\{{\rm det} \nabla^2 f(t) | f(t)=x, \nabla f(t)=0 \}=(-1)^N {\rm det}(\Lambda(t)) x^N (1+O(x^{-2})). \end{split} \end{equation} \end{lemma} \begin{proof}\ Let $Q_t$ be the $N\times N$ positive definite matrix such that $Q_t\Lambda(t)Q_t = I_N$. Then we can write $\nabla^2 f(t)= Q^{-1}_tQ_t\nabla^2 f(t)Q_tQ^{-1}_t$ and therefore, \begin{equation}\label{Eq:det hessian conditional 1} \begin{split}
{\mathbb E}&\{{\rm det} \nabla^2 f(t) | f(t)=x, \nabla f(t)=0 \}\\
&={\rm det}(\Lambda(t)) {\mathbb E}\{{\rm det} (Q_t\nabla^2 f(t)Q_t) | f(t)=x, \nabla f(t)=0 \}. \end{split} \end{equation} Since $f(t)$ and $\nabla f(t)$ are independent, $$
{\mathbb E}\{Q_t \nabla^2 f(t) Q_t | f(t)=x, \nabla f(t)=0 \} = -xI_N. $$ It follows that \begin{equation}\label{Eq:det hessian conditional 2} \begin{split}
{\mathbb E}\{{\rm det}(Q_t \nabla^2 f(t) Q_t) | f(t)=x, \nabla f(t)=0 \}= {\mathbb E}\{{\rm det} (\widetilde{\Delta}(t) - xI_N) \}, \end{split} \end{equation} where $\widetilde{\Delta}(t)=(\widetilde{\Delta}_{ij}(t))_{1\le i,j\le N}$ is an $N\times N$ Gaussian random matrix such that ${\mathbb E}\{\widetilde{\Delta}(t)\} =0$ and its covariance matrix is independent of $x$. By the Laplace expansion of the determinant, \begin{equation*} \begin{split} {\rm det} (\widetilde{\Delta}(t) - xI_N) = (-1)^N[x^N - S_1(\widetilde{\Delta}(t))x^{N-1} + S_2(\widetilde{\Delta}(t))x^{N-2} + \cdots + (-1)^N S_N(\widetilde{\Delta}(t))], \end{split} \end{equation*} where $S_i(\widetilde{\Delta}(t))$ is the sum of the $\binom {N}{i}$ principal minors of order $i$ in $\widetilde{\Delta}(t)$. Taking the expectation above and noting that ${\mathbb E}\{S_1(\widetilde{\Delta}(t))\}=0$ since ${\mathbb E}\{\widetilde{\Delta}(t)\} =0$, we obtain that as $x\to \infty$, \begin{equation*} \begin{split}
\begin{lemma}\label{Lem:cov of isotropic Euclidean}{\rm [Aza\"is and Wschebor (2008), Lemma 2].} Let $\{f(t): t\in T\}$ be a centered, unit-variance, isotropic Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each $t\in T$ and $i$, $j$, $k$, $l\in\{1,\ldots, N\}$, \begin{equation}\label{Eq:cov of derivatives} \begin{split} &{\mathbb E}\{f_i(t)f(t)\}={\mathbb E}\{f_i(t)f_{jk}(t)\}=0, \quad {\mathbb E}\{f_i(t)f_j(t)\}=-{\mathbb E}\{f_{ij}(t)f(t)\}=-2\rho'\delta_{ij},\\ &{\mathbb E}\{f_{ij}(t)f_{kl}(t)\}=4\rho''(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}), \end{split} \end{equation} where $\delta_{ij}$ is the Kronecker delta and $\rho'$ and $\rho''$ are defined in \eqref{Eq:kappa}. \end{lemma}
\begin{lemma}\label{Lem:GOE for det Hessian} Under the assumptions in Lemma \ref{Lem:cov of isotropic Euclidean}, the distribution of $\nabla^2f(t)$ is the same as that of $\sqrt{8\rho''}M_N + 2\sqrt{\rho''}\xi I_N$, where $M_N$ is a GOE random matrix, $\xi$ is a standard Gaussian variable independent of $M_N$ and $I_N$ is the $N\times N$ identity matrix. Assume further that $({\bf C}3)$ holds, then the conditional distribution of $(\nabla^2f(t)|f(t)=x)$ is the same as the distribution of $\sqrt{8\rho''}M_N + [2\rho'x +2\sqrt{\rho''-\rho'^2}\xi ]I_N$. \end{lemma}
\begin{proof} \ The first result is a direct consequence of Lemma \ref{Lem:cov of isotropic Euclidean}. For the second one, applying (\ref{Eq:cov of derivatives}) and the well-known conditional formula for Gaussian variables, we see that $(\nabla^2f(t)|f(t)=x)$ can be written as $\Delta + 2\rho'xI_N$, where $\Delta=(\Delta_{ij})_{1\leq i,j\leq N}$ is a symmetric $N\times N$ matrix with centered Gaussian entries such that \begin{equation*} {\mathbb E}\{\Delta_{ij}\Delta_{kl}\}=4\rho''(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + 4(\rho''-\rho'^2)\delta_{ij}\delta_{kl}. \end{equation*} Therefore, $\Delta$ has the same distribution as the random matrix $\sqrt{8\rho''}M_N + 2\sqrt{\rho''-\rho'^2}\xi I_N$, completing the proof. \end{proof}
Lemma \ref{Lem:GOE computation} below is a revised version of Lemma 3.2.3 in Auffinger (2011). The proof is omitted here since it is similar to that of the reference above. \begin{lemma}\label{Lem:GOE computation} Let $M_N$ be an $N\times N$ GOE matrix and $X$ be an independent Gaussian random variable with mean $m$ and variance $\sigma^2$. Then, \begin{equation}\label{Eq:computing the det with index} \begin{split}
&{\mathbb E}\big\{|{\rm det}(M_N-XI_N)|\mathbbm{1}_{\{{\rm index}(M_N-XI_N)=N\}}\big\}\\ &\quad =\frac{\Gamma\big(\frac{N+1}{2}\big)}{\sqrt{\pi}\sigma}{\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{\lambda_{N+1}^2}{2} - \frac{(\lambda_{N+1}-m )^2}{2\sigma^2} \bigg] \bigg\}, \end{split} \end{equation} where ${\mathbb E}_{GOE}^{N+1}$ is the expectation under the probability distribution $Q_{N+1}(d\lambda)$ as in (\ref{Eq:GOE density}) with $N$ replaced by $N+1$. \end{lemma}
\begin{lemma}\label{Lem:expectation of local max} Let $\{f(t): t\in T\}$ be a centered, unit-variance, isotropic Gaussian random field satisfying $({\bf C}1)$ and $({\bf C}2)$. Then for each $t\in T$, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}| \nabla f(t)=0\}\\ &=\Big(\frac{2}{\pi}\Big)^{1/2}\Gamma\Big(\frac{N+1}{2}\Big)(8\rho'')^{N/2}{\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[-\frac{\lambda_{N+1}^2}{2} \bigg] \bigg\}. \end{split} \end{equation*} \end{lemma} \begin{proof}\ Since $\nabla^2 f(t)$ and $\nabla f(t)$ are independent for each fixed $t$, by Lemma \ref{Lem:GOE for det Hessian}, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}| \nabla f(t)=0\}\\
&= {\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}\}\\
&= {\mathbb E}\{|{\rm det} (\sqrt{8\rho''}M_N + 2\sqrt{\rho''}\xi I_N)|\mathbbm{1}_{\{{\rm index}(\sqrt{8\rho''}M_N + 2\sqrt{\rho''}\xi I_N)=N\}}\}\\
&= (8\rho'')^{N/2} {\mathbb E}\{|{\rm det}(M_N-XI_N)|\mathbbm{1}_{\{{\rm index}(M_N-XI_N)=N\}}\}, \end{split} \end{equation*} where $X$ is an independent centered Gaussian variable with variance $1/2$. Applying Lemma \ref{Lem:GOE computation} with $m=0$ and $\sigma=1/\sqrt{2}$, we obtain the desired result. \end{proof}
\begin{lemma}\label{Lem:expectation of local max above u} Let $\{f(t): t\in T\}$ be a centered, unit-variance, isotropic Gaussian random field satisfying $({\bf C}1)$, $({\bf C}2)$ and $({\bf C}3)$. Then for each $t\in T$ and $x\in {\mathbb R}$, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)| \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}|f(t)=x, \nabla f(t)=0\}\\ &=\left\{
\begin{array}{l l}
\big(\frac{2}{\pi}\big)^{1/2} \Gamma\big(\frac{N+1}{2}\big)(8\rho'')^{N/2} \big(\frac{\rho''}{\rho''-\rho'^2}\big)^{1/2}\\
\quad \times {\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{\lambda_{N+1}^2}{2} - \frac{\rho''\big(\lambda_{N+1}+\frac{\rho'x}{\sqrt{2\rho''}} \big)^2}{\rho''-\rho'^2} \bigg]\bigg\} & \quad \text{if $\rho''-\rho'^2> 0$},\\
(8\rho'')^{N/2}{\mathbb E}_{GOE}^{N}\big\{ \big(\prod_{i=1}^N|\lambda_i-\frac{x}{\sqrt{2}}|\big) \mathbbm{1}_{\{\lambda_N<\frac{x}{\sqrt{2}}\}} \big\} & \quad \text{if $\rho''-\rho'^2= 0$}.
\end{array} \right. \end{split} \end{equation*} \end{lemma} \begin{proof}\ Since $\nabla f(t)$ is independent of both $f(t)$ and $\nabla^2 f(t)$ for each fixed $t$, by Lemma \ref{Lem:GOE for det Hessian}, \begin{equation}\label{Eq:Conditional expectation by GOE} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)| \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}|f(t)=x, \nabla f(t)=0\}\\
&={\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}| f(t)=x\}\\
&= {\mathbb E}\{|{\rm det} (\sqrt{8\rho''}M_N + [2\rho'x +2\sqrt{\rho''-\rho'^2}\xi ]I_N)|\\ &\qquad \times \mathbbm{1}_{\{{\rm index}(\sqrt{8\rho''}M_N + [2\rho'x +2\sqrt{\rho''-\rho'^2}\xi ]I_N)=N\}}\}. \end{split} \end{equation} When $\rho''-\rho'^2> 0$, then (\ref{Eq:Conditional expectation by GOE}) can be written as \begin{equation*} \begin{split}
(8\rho'')^{N/2} {\mathbb E}\{|{\rm det}(M_N-XI_N)|\mathbbm{1}_{\{{\rm index}(M_N-XI_N)=N\}}\}, \end{split} \end{equation*} where $X$ is an independent Gaussian variable with mean $m=-\frac{\rho'x}{\sqrt{2\rho''}}$ and variance $\sigma^2= \frac{\rho''-\rho'^2}{2\rho''}$. Applying Lemma \ref{Lem:GOE computation} yields the formula for the case of $\rho''-\rho'^2> 0$.
When $\rho''-\rho'^2= 0$, i.e. $\rho'=-\sqrt{\rho''}$, then (\ref{Eq:Conditional expectation by GOE}) becomes \begin{equation*} \begin{split}
&(8\rho'')^{N/2} {\mathbb E}\{|{\rm det}(M_N-\frac{x}{\sqrt{2}}I_N)|\mathbbm{1}_{\{{\rm index}(M_N-\frac{x}{\sqrt{2}}I_N)=N\}}\}\\
&\quad =(8\rho'')^{N/2}{\mathbb E}_{GOE}^{N}\bigg\{ \bigg(\prod_{i=1}^N|\lambda_i-\frac{x}{\sqrt{2}}|\bigg) \mathbbm{1}_{\{\lambda_N<\frac{x}{\sqrt{2}}\}} \bigg\}. \end{split} \end{equation*} We finish the proof. \end{proof}
The following result can be derived from elementary calculations by applying the GOE density \eqref{Eq:GOE density}, the details are omitted here. \begin{proposition}\label{Prop:GOE expectation for N=2} Let $N=2$. Then for positive constants $a$ and $b$, \begin{equation*} \begin{split} &{\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\Big[\frac{1}{2}\lambda_{N+1}^2 -a(\lambda_{N+1}-b)^2 \Big] \bigg\} \\ &\quad=\frac{1}{\sqrt{2}\pi}\bigg\{(\frac{1}{a} + 2b^2-1)\frac{\pi}{\sqrt{a}}\Phi\Big(\frac{b\sqrt{2a}}{\sqrt{a+1}} \Big) + \frac{b\sqrt{a+1}\sqrt{\pi}}{a}e^{-\frac{ab^2}{a+1}} \\ &\qquad + \frac{2\pi}{\sqrt{2a+1}}e^{-\frac{ab^2}{2a+1}}\Phi\Big(\frac{\sqrt{2}ab}{\sqrt{(2a+1)(a+1)}} \Big) \bigg\}. \end{split} \end{equation*} \end{proposition}
\subsection{Proofs for Section \ref{section:general manifolds}} Define $\mu(t_0, \varepsilon)$, $\mu_N(t_0, \varepsilon)$, $\mu_N^u(t_0, \varepsilon)$ and $\mu_N^{u-}(t_0, \varepsilon)$ as in (\ref{Def:various mu's}) with $U_{t_0}(\varepsilon)$ replaced by $B_{t_0}(\varepsilon)$ respectively. The following lemma, which will be used for proving Theorem \ref{Thm:Palm distr manifolds}, is an analogue of Lemma \ref{Lem:Piterbarg}. \begin{lemma}\label{Lem:Piterbarg lemma on manifold} Let $(M,g)$ be an oriented $N$-dimensional $C^3$ Riemannian manifold with a $C^1$ Riemannian metric $g$. Let $f$ be a Gaussian random field on $M$ such that $({\bf C}1')$ and $({\bf C}2')$ are fulfilled. Then for any $t_0 \in \overset{\circ}{M}$, as $\varepsilon \to 0$, \begin{equation*} \begin{split} {\mathbb E} \{\mu(t_0, \varepsilon)(\mu(t_0, \varepsilon) -1)\} = o(\varepsilon^N). \end{split} \end{equation*} \end{lemma} \begin{proof}\ Let $(U_\alpha, \varphi_\alpha)_{\alpha\in I}$ be an atlas on $M$ and let $\varepsilon$ be small enough such that $\varphi_\alpha(B_{t_0}(\varepsilon)) \subset \varphi_\alpha(U_\alpha)$ for some $\alpha\in I$. Set $$ f^\alpha= f\circ \varphi_\alpha^{-1}: \varphi_\alpha(U_\alpha) \subset {\mathbb R}^N \rightarrow {\mathbb R}. $$ Then it follows immediately from the diffeomorphism of $\varphi_\alpha$ and the definition of $\mu$ that \begin{equation*} \mu(t_0, \varepsilon)=\mu(f, U_\alpha; t_0, \varepsilon) \equiv \mu(f^\alpha, \varphi_\alpha(U_\alpha); \varphi_\alpha(t_0), \varepsilon). \end{equation*} Note that $({\bf C}1')$ and $({\bf C}2')$ imply that $f^\alpha$ satisfies $({\bf C}1)$ and $({\bf C}2)$. Applying Lemma \ref{Lem:Piterbarg} gives \begin{equation*} {\mathbb E} \{\mu(f^\alpha, \varphi_\alpha(U_\alpha); \varphi_\alpha(t_0), \varepsilon)[\mu(f^\alpha, \varphi_\alpha(U_\alpha); \varphi_\alpha(t_0), \varepsilon)-1]\} = o({\rm Vol}(\varphi_\alpha(B_{t_0}(\varepsilon))))=o(\varepsilon^N). \end{equation*} This verifies the desired result. \end{proof}
\begin{proof}{\bf of Theorem \ref{Thm:Palm distr manifolds}}\ Following the proof in Theorem \ref{Thm:Palm distr}, together with Lemma \ref{Lem:Piterbarg lemma on manifold} and the argument by charts in its proof, we obtain \begin{equation}\label{Eq:limiting form of Palm distr 2} \begin{split} F_{t_0}(u) &=\lim_{\varepsilon\to 0} \frac{ {\mathbb P}\{f(t_0)>u, \mu_N(t_0, \varepsilon)\geq 1\}}{{\mathbb P}\{ \mu_N(t_0, \varepsilon)\geq 1\}} =\lim_{\varepsilon\to 0} \frac{ {\mathbb P}\{\mu_N^u(t_0, \varepsilon) \geq 1 \} + o(\varepsilon^N)}{{\mathbb P}\{ \mu_N(t_0, \varepsilon)\geq 1\}}\\ &=\lim_{\varepsilon\to 0} \frac{ {\mathbb E}\{\mu_N^u(t_0, \varepsilon) \} + o(\varepsilon^N)}{{\mathbb E}\{ \mu_N(t_0, \varepsilon)\} + o(\varepsilon^N)}. \end{split} \end{equation} By the Kac-Rice metatheorem for random fields on manifolds [cf. Theorem 12.1.1 in Adler and Taylor (2007)] and Lebesgue's continuity theorem, \begin{equation*} \begin{split} &\lim_{\varepsilon\to 0}\frac{{\mathbb E}\{\mu_N^u(t_0, \varepsilon)\}}{{\rm Vol}(B_{t_0}(\varepsilon))} \\ &\quad=\lim_{\varepsilon\to 0}
\frac{1}{{\rm Vol}(B_{t_0}(\varepsilon))}\int_{B_{t_0}(\varepsilon)} {\mathbb E}\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{f(t)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}|\nabla f(t)=0\}\\ &\qquad \qquad\qquad \qquad\qquad\qquad \times p_{\nabla f(t)}(0){\rm Vol}_g\\
&\quad = {\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{f(t_0)> u\}} \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}p_{\nabla f(t_0)}(0), \end{split} \end{equation*} where ${\rm Vol}_g$ is the volume element on $M$ induced by the Riemannian metric $g$. Similarly, \begin{equation*} \begin{split}
\lim_{\varepsilon\to 0}\frac{{\mathbb E}\{\mu_N(t_0, \varepsilon)\} }{{\rm Vol}(B_{t_0}(\varepsilon))}= {\mathbb E}\{|{\rm det} \nabla^2 f(t_0)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t_0))=N\}}|\nabla f(t_0)=0\}p_{\nabla f(t_0)}(0). \end{split} \end{equation*} Plugging these facts into (\ref{Eq:limiting form of Palm distr 2}) yields the first line of \eqref{Eq:Palm distr manifolds 1}. The second line of \eqref{Eq:Palm distr manifolds 1} follows similarly.
Applying Theorem \ref{Thm:Palm distr high level} and Corollary \ref{Cor:Palm distr high level o(1)}, together with the argument by charts, we obtain \eqref{Eq:Palm distr manifolds 3}. \end{proof}
Lemma \ref{Lem:joint distribution sphere} below is on the properties of the covariance of $(f(t), \nabla f(t), \nabla^2 f(t))$, where the gradient $\nabla f(t)$ and Hessian $\nabla^2 f(t)$ are defined as in (\ref{Eq:gradient and hessian on manifolds}) under some orthonormal frame $\{E_i\}_{1\le i\le N}$ on $\mathbb{S}^N$. Since it can be proved similarly to Lemma 3.2.2 or Lemma 4.4.2 in Auffinger (2011), the detailed proof is omitted here. \begin{lemma}\label{Lem:joint distribution sphere} Let $f$ be a centered, unit-variance, isotropic Gaussian field on $\mathbb{S}^N$, $N\ge 2$, satisfying $({\bf C}1'')$ and $({\bf C}2')$. Then \begin{equation}\label{Eq:cov of derivatives sphere} \begin{split} &{\mathbb E}\{f_i(t)f(t)\}={\mathbb E}\{f_i(t)f_{jk}(t)\}=0, \quad {\mathbb E}\{f_i(t)f_j(t)\}=-{\mathbb E}\{f_{ij}(t)f(t)\}=C'\delta_{ij},\\ &{\mathbb E}\{f_{ij}(t)f_{kl}(t)\}=C''(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + (C''+C')\delta_{ij}\delta_{kl}, \end{split} \end{equation} where $C'$ and $C''$ are defined in \eqref{Def:C' and C''}. \end{lemma}
\begin{lemma}\label{Lem:GOE for det Hessian sphere}
Under the assumptions in Lemma \ref{Lem:joint distribution sphere}, the distribution of $\nabla^2f(t)$ is the same as that of $\sqrt{2C''}M_N + \sqrt{C''+C'}\xi I_N$, where $M_N$ is a GOE matrix and $\xi$ is a standard Gaussian variable independent of $M_N$. Assume further that $({\bf C}3')$ holds, then the conditional distribution of $(\nabla^2f(t)|f(t)=x)$ is the same as the distribution of $\sqrt{2C''}M_N + [\sqrt{C''+C'-C'^2}\xi - C'x]I_N$. \end{lemma} \begin{proof}\
The first result is an immediate consequence of Lemma \ref{Lem:joint distribution sphere}. For the second one, applying (\ref{Eq:cov of derivatives sphere}) and the well-known conditional formula for Gaussian random variables, we see that $(\nabla^2f(t)|f(t)=x)$ can be written as $\Delta - C'xI_N$, where $\Delta=(\Delta_{ij})_{1\leq i,j\leq N}$ is a symmetric $N\times N$ matrix with centered Gaussian entries such that \begin{equation*} {\mathbb E}\{\Delta_{ij}\Delta_{kl}\}=C''(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + (C''+C'-C'^2)\delta_{ij}\delta_{kl}. \end{equation*} Therefore, $\Delta$ has the same distribution as the random matrix $\sqrt{2C''}M_N + \sqrt{C''+C'-C'^2}\xi I_N$, completing the proof. \end{proof}
By similar arguments for proving Lemmas \ref{Lem:expectation of local max} and \ref{Lem:expectation of local max above u}, and applying Lemma \ref{Lem:GOE for det Hessian sphere} instead of Lemma \ref{Lem:GOE for det Hessian}, we obtain the following two lemmas. \begin{lemma}\label{Lem:expectation of local max sphere} Let $\{f(t): t\in \mathbb{S}^N\}$ be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$ and $({\bf C}2')$. Then for each $t\in \mathbb{S}^N$, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)|\mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}| \nabla f(t)=0\}\\ &=\Big(\frac{2C''}{\pi(C''+C')}\Big)^{1/2} \Gamma\Big(\frac{N+1}{2}\Big) (2C'')^{N/2}{\mathbb E}_{GOE}^{N+1}\bigg\{ \exp\bigg[\frac{1}{2}\lambda_{N+1}^2 -\frac{C''}{C''+C'}\lambda_{N+1}^2 \bigg] \bigg\}. \end{split} \end{equation*} \end{lemma}
\begin{lemma}\label{Lem:expectation of local max above u sphere} Let $\{f(t): t\in \mathbb{S}^N\}$ be a centered, unit-variance, isotropic Gaussian field satisfying $({\bf C}1'')$, $({\bf C}2')$ and $({\bf C}3')$. Then for each $t\in \mathbb{S}^N$ and $x\in {\mathbb R}$, \begin{equation*} \begin{split}
{\mathbb E}&\{|{\rm det} \nabla^2 f(t)| \mathbbm{1}_{\{{\rm index}(\nabla^2 f(t))=N\}}|f(t)=x, \nabla f(t)=0\}\\ &=\left\{
\begin{array}{l l}
\big(\frac{2C''}{\pi(C''+C'-C'^2)}\big)^{1/2} \Gamma\big(\frac{N+1}{2}\big) (2C'')^{N/2} \\
\quad \times {\mathbb E}_{GOE}^{N+1}\Big\{ \exp\Big[\frac{\lambda_{N+1}^2}{2} - \frac{C''\big(\lambda_{N+1}-\frac{C'x}{\sqrt{2C''}} \big)^2}{C''+C'-C'^2} \Big]\Big\} & \quad \text{if $C''+C'-C'^2> 0$},\\
(2C'')^{N/2}{\mathbb E}_{GOE}^{N}\big\{ \big(\prod_{i=1}^N|\lambda_i-\frac{C'x}{\sqrt{2C''}}|\big) \mathbbm{1}_{\{\lambda_N<\frac{C'x}{\sqrt{2C''}}\}} \big\} & \quad \text{if $C''+C'-C'^2= 0$}.
\end{array} \right. \end{split} \end{equation*} \end{lemma}
\par
\noindent {\bf Acknowledgments.} The authors thank Robert Adler of the Technion - Israel Institute of Technology for useful discussions and the anonymous referees for their insightful comments which have led to several improvements of this manuscript.
\begin{small}
\end{small}
\end{document}
EURASIP Journal on Audio, Speech, and Music Processing
A multichannel learning-based approach for sound source separation in reverberant environments
You-Siang Chen, Zi-Jie Lin & Mingsian R. Bai
EURASIP Journal on Audio, Speech, and Music Processing, volume 2021, Article number: 38 (2021)
In this paper, a multichannel learning-based network is proposed for sound source separation in a reverberant field. The network can be divided into two parts according to the training strategies. In the first stage, time-dilated convolutional blocks are trained to estimate the array weights for beamforming the multichannel microphone signals. Next, the output of the network is processed by a weight-and-sum operation that is reformulated to handle real-valued data in the frequency domain. In the second stage, a U-net model is concatenated to the beamforming network to serve as a non-linear mapping filter for joint separation and dereverberation. The scale-invariant mean square error (SI-MSE), which is a frequency-domain modification of the scale-invariant signal-to-noise ratio (SI-SNR), is used as the objective function for training. Furthermore, the combined network is trained with speech segments filtered by a great variety of room impulse responses. Simulations are conducted for comprehensive multisource scenarios of various subtending angles of sources and reverberation times. The proposed network is compared with several baseline approaches in terms of objective evaluation metrics. The results demonstrate the excellent performance of the proposed network in dereverberation and separation, as compared to the baseline methods.
As an important problem in speech enhancement, source separation seeks to separate independent source signals from mixture signals, based on the spatial cue, the temporal-spectral cue, or statistical characteristics of sources. For semi-blind source separation, the free-field wave propagation model is assumed to facilitate a two-stage procedure of source localization and separation by using an array. Beamforming (BF) [1], time difference of arrival (TDOA) [2], and multiple signal classification (MUSIC) [3] are generally used source localization methods. In the separation stage, BF methods such as minimum power distortionless response (MPDR) can be employed to extract source signals, based on the direction of arrivals estimated in the localization stage [4, 5]. In addition to BF methods, Tikhonov regularization (TIKR) [6] which treats the separation problem as a linear inverse problem can also be used.
On the other hand, blind source separation (BSS) approaches do not rely on a wave propagation model and exploits mainly the time-frequency (T-F) or statistical characteristics of mixture signals. Independent component analysis (ICA) is a well-known BSS algorithm that separates the signals into statistically independent components [7,8,9,10,11]. ICA was further extended to deal with convolutive processes such as acoustic propagation, e.g., triple-N ICA for convolutive mixtures (TRINICON) [12]. An alternative separation algorithm, independent vector analysis (IVA) [13], cleverly circumvents the permutation issue in ICA by modeling the statistical interdependency between frequency components.
In this paper, we shall explore the possibility of addressing source separation problems using a learning-based approach, namely, deep neural networks (DNNs). Wang et al. approached source separation by using DNNs in which spectrograms were used as the input features [14]. Promising results were obtained in light of various network structures, including convolutional neural network (CNN) [15], recurrent neural network (RNN) [16], and the deep clustering (DC) method [17], etc. Furthermore, utterance-level permutation invariant training (uPIT) was introduced to resolve the label permutation problem [18]. Recently, the fully convolutional time-domain audio separation network (Conv-TasNet) was proposed [19] to separate source signals in the time domain in a computationally efficient way.
Reverberation is detrimental to speech quality, which leads to degradation in speech intelligibility. Multichannel inverse filtering (MINT) was developed to achieve nearly perfect dereverberation [20]. Multi-channel linear prediction (MCLP) [21] based on a time-domain linear prediction model in the T-F domain was reported effective. As a refined version of MCLP, the weighted prediction error (WPE) algorithm was developed in the short-time Fourier transform (STFT) domain via a long-term linear prediction [22]. A multi-channel generalization can be found in [23,24,25]. DNN approaches have also become promising techniques for dereverberation. Mapping-based approaches [26] attempt to enhance directly the reverberated signals, whereas masking-based approaches [27] attempt to learn a "mask" for anechoic signals. In addition, combined systems of a DNN and the WPE unit were also suggested [28, 29].
Source separation in a reverberant field is particularly challenging. This problem was tackled by cascading a WPE unit and a MPDR beamformer [30, 31]. Several systems have been proposed in light of the joint optimization of the preceding two units [32, 33]. In a very recent work, the weighted power minimization distortionless response (WPD) [34] beamformer was developed by integrating optimally the WPE and MPDR units into a single convolutional beamformer. DNN-based approaches have also been reported recently. An end-to-end learning model was trained to establish a mapping from the reverberant mixture input to anechoic separated speech outputs [35]. Cascade systems [36, 37] were also investigated. Multichannel networks [38, 39] were proposed to exploit the spatial cue of microphone signals. In addition, integrated DNN and conventional beamformers are suggested in recent years [40,41,42].
Most approaches employ a cascaded structure in which a DNN is trained for the prior information required by the subsequent beamforming algorithm, e.g., a post-enhancement mask for the beamforming output, masking-based spatial cue estimation, and estimation of the spatial covariance matrix, etc. In practice, DNNs could have some limitations in obtaining the required information for array beamforming when the magnitude of the target signals is held fixed in the training stage. Under this circumstance, there is no guarantee that fixed loss functions such as mean-square-error (MSE) or signal-to-noise ratio (SNR) will lead to an optimal estimate [43]. The proposed method seeks to achieve a synergetic integration of arrays and DNN to reformulate and implement the real-valued weight-and-sum operation in a multichannel DNN through learning-based training for optimal weights. In addition, a new scale-independent MSE loss is derived for optimal estimation in the frequency domain. The proposed network is shown to be resilient to various reverberation conditions and subtending angles, as compared to the cascaded DNN-array network.
Known for its efficacy on the separation task, Conv-TasNet [19] uses the time-domain learnable analysis and synthesis transformation and time-dilated convolutional blocks as the separation module. Moreover, U-net [44], which consists of multiple convolutional layers in an encoder-decoder structure, was recently applied and proved effective on the dereverberation task [45, 46]. In this paper, we build upon Conv-TasNet and U-net to develop a two-stage dereverberation-separation end-to-end system. The proposed network consists of two parts according to the training strategies. In the first part, the network is trained as a beamforming network (BF-net), whereas in the second part, a U-net follows as a non-linear postfilter of the BF-net whose parameters are imported from the first part. The experiments are conducted using the proposed network for the spatialized Voice Cloning Toolkit (VCTK) corpus [47]. The results are evaluated in terms of SI-SNR [43], Perceptual Evaluation of Speech Quality (PESQ) [48], and Short-Time Objective Intelligibility (STOI) [49].
Conventional approaches on separation and dereverberation
Several conventional methods to be used as the baseline approaches are reviewed in this section. The typical processing flow of these methods has a dereverberation unit as the front end, e.g., WPE [50] and a separation unit as the back end, e.g., MPDR [5], TIKR [6], or IVA [13]. The cascaded structure of the DNN method, Beam-TasNet [42], is also considered as the baseline to illustrate the benefit of end-to-end training with SI-SNR.
Dereverberation using the WPE
To account for the prolonged effects of reverberation, a multichannel convolutional signal model [50] for a single-source scenario is generally formulated in the T-F domain as
$$ \mathbf{x}\left(t,f\right)=\sum \limits_{l=0}^{L-1}\mathbf{h}\left(l,f\right)s\left(t-l,f\right), $$
where \( \mathbf{x}(t,f) = [x_1(t,f)\ x_2(t,f)\ \cdots\ x_M(t,f)]^T \) is the microphone signal vector and \( \mathbf{h}(l,f) = [h_1(l,f)\ h_2(l,f)\ \cdots\ h_M(l,f)]^T \), \( l = 0, 1, \ldots, L-1 \), are the convolutional acoustic transfer functions from the source to the array microphones. A delayed autoregressive linear prediction model can be utilized to estimate recursively the late reverberation [23].
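To make the prediction step concrete, the following is a minimal NumPy sketch of the delayed linear prediction underlying WPE for a single frequency bin. The number of filter taps, the prediction delay, the number of iterations, and the simple power re-weighting are illustrative assumptions rather than the exact configuration of [22, 23].

```python
import numpy as np

def wpe_single_band(X, taps=10, delay=3, iters=3, eps=1e-8):
    """Simplified WPE dereverberation for one frequency bin.

    X    : (M, T) complex STFT frames of M microphones at one frequency.
    taps : number of prediction filter taps.
    delay: prediction delay that preserves the early reflections.
    Returns the de-reverberated frames of the same shape.
    """
    M, T = X.shape
    D = X.copy()                                   # current estimate of the desired signal
    for _ in range(iters):
        # time-varying power estimate (averaged over microphones)
        lam = np.mean(np.abs(D) ** 2, axis=0) + eps            # (T,)
        # delayed, stacked observation matrix: (M*taps, T)
        Xbar = np.zeros((M * taps, T), dtype=complex)
        for k in range(taps):
            shift = delay + k
            Xbar[k * M:(k + 1) * M, shift:] = X[:, :T - shift]
        # weighted correlation matrices of the long-term linear prediction
        R = (Xbar / lam) @ Xbar.conj().T                        # (M*taps, M*taps)
        P = (Xbar / lam) @ X.conj().T                           # (M*taps, M)
        G = np.linalg.solve(R + eps * np.eye(M * taps), P)      # prediction filters
        # subtract the predicted late reverberation
        D = X - (G.conj().T @ Xbar)
    return D
```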
Dereverberation and separation systems
Three conventional methods and a DNN approach to be used as the baselines are summarized next.
The baseline method 1: WPE-MPDR approach
The first baseline method is depicted in Fig. 1. The reverberated mixture signals \( \mathbf{x}(t,f) \) are de-reverberated by the WPE unit and then filtered by the MPDR beamformer. After the de-reverberated signals \( \tilde{\mathbf{x}}(t,f) \) are acquired through WPE, the weight vector of MPDR [5], \( \mathbf{w}_{MPDR} \), can be obtained as
$$ {\mathbf{w}}_{MPDR}=\frac{{\mathbf{R}}_{xx}^{-1}\mathbf{a}\left({\theta}_n,f\right)}{{\mathbf{a}}^H\left({\theta}_n,f\right){\mathbf{R}}_{xx}^{-1}\mathbf{a}\left({\theta}_n,f\right)}, $$
Fig. 1 The block diagram of the baseline method 1
where \( \mathbf{a}(\theta_n, f) \in \mathbb{C}^M \) is the steering vector associated with the nth source at the direction \( \theta_n \), and \( \mathbf{R}_{xx} = E\{\tilde{\mathbf{x}}(t,f)\tilde{\mathbf{x}}^H(t,f)\} \) is the spatial covariance matrix, with \( E\{\cdot\} \) being the expectation operator with respect to the time frames, which can be estimated using recursive averaging. In this paper, the steering vector is modeled with the acoustic transfer function of free-field plane-wave propagation. We investigate the scenario of fixed source locations for which the directions of arrival of the source speakers are known.
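As an illustration, a minimal NumPy sketch of Eq. (2) for one frequency bin is given below. The recursive averaging of the spatial covariance matrix is replaced by a simple sample average, and diagonal loading is added for numerical stability; both are our assumptions.

```python
import numpy as np

def mpdr_weights(X_tilde, steering, loading=1e-3):
    """MPDR beamforming weights for one frequency bin, cf. Eq. (2).

    X_tilde  : (M, T) de-reverberated STFT frames (e.g., the WPE output).
    steering : (M,) steering vector a(theta_n, f) toward the target source.
    """
    M, T = X_tilde.shape
    Rxx = (X_tilde @ X_tilde.conj().T) / T                 # sample spatial covariance
    Rxx = Rxx + loading * np.trace(Rxx).real / M * np.eye(M)
    Rinv_a = np.linalg.solve(Rxx, steering)
    w = Rinv_a / (steering.conj() @ Rinv_a)
    return w                                               # separated signal: w^H x(t, f)
```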
The baseline method 2: WPE-TIKR approach
The baseline method 2 is illustrated in Fig. 2. The microphone signals are de-reverberated by using WPE, followed by the source signal extraction using TIKR. With the steering matrix \( \mathbf{A}(f) = [\mathbf{a}(\theta_1, f)\ \cdots\ \mathbf{a}(\theta_N, f)] \) established with the known source locations, the source signals can be extracted by solving a linear inverse problem for the source signal vector \( \mathbf{s}(t,f) \) in terms of TIKR [6]. That is,
$$ \mathbf{s}\left(t,f\right)={\left[{\mathbf{A}}^H(f)\mathbf{A}(f)+{\rho}^2\mathbf{I}\right]}^{-1}{\mathbf{A}}^H(f)\tilde{\mathbf{x}}\left(t,f\right), $$
where ρ is the regularization parameter that trades off the separability and audio quality of the extracted signals and I denotes the identity matrix.
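A minimal sketch of the regularized inversion in Eq. (3) for one frequency bin might look as follows; the value of the regularization parameter is a placeholder.

```python
import numpy as np

def tikr_separation(X_tilde, A, rho=0.1):
    """Tikhonov-regularized source extraction for one frequency bin, cf. Eq. (3).

    X_tilde : (M, T) de-reverberated STFT frames.
    A       : (M, N) steering matrix [a(theta_1, f) ... a(theta_N, f)].
    rho     : regularization parameter trading separability against audio quality.
    """
    M, N = A.shape
    AhA = A.conj().T @ A
    S = np.linalg.solve(AhA + rho ** 2 * np.eye(N), A.conj().T @ X_tilde)
    return S                                               # (N, T) separated sources
```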
The baseline method 3: WPE-IVA approach
The baseline method 3 is illustrated in Fig. 3. The mixture signals are de-reverberated by WPE, followed by the source signal extraction using IVA. The IVA algorithm resolves the permutation ambiguity in ICA by exploiting the interdependence of frequency components of a particular source. A de-mixing matrix \( \mathbf{W} \) can be calculated using the natural gradient method [51]. It follows that the independent source vector \( \hat{\mathbf{s}} \) in the T-F domain can be separated as [13]
$$ \hat{\mathbf{s}}\left(t,f\right)=\mathbf{W}\left(t,f\right)\tilde{\mathbf{x}}\left(t,f\right), $$
To reduce the dimension of the de-reverberated signals when there are more microphones than sources, principal component analysis (PCA) [52] can be used.
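The sketch below illustrates the per-frequency PCA reduction used before IVA when there are more microphones than sources, together with the application of a de-mixing matrix as in Eq. (4). The whitening convention is our assumption, and the natural-gradient update of the de-mixing matrix itself is omitted.

```python
import numpy as np

def pca_reduce(X_tilde, n_src):
    """Project M-channel frames onto the n_src dominant principal components.

    X_tilde : (M, T) de-reverberated STFT frames at one frequency bin.
    Returns the reduced (n_src, T) data and the (n_src, M) projection matrix.
    """
    M, T = X_tilde.shape
    R = (X_tilde @ X_tilde.conj().T) / T
    eigval, eigvec = np.linalg.eigh(R)                 # eigenvalues in ascending order
    E = eigvec[:, -n_src:][:, ::-1]                    # dominant subspace
    d = eigval[-n_src:][::-1]
    P = np.diag(1.0 / np.sqrt(d + 1e-12)) @ E.conj().T # whitening projection
    return P @ X_tilde, P

# After IVA has produced a de-mixing matrix W (n_src x n_src) for this bin,
# the separated sources follow Eq. (4):  s_hat = W @ (P @ X_tilde)
```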
The baseline method 4: Beam-TasNet approach
In Beam-TasNet, the front-end multichannel TasNet (MC-TasNet) [53] is trained with scale-dependent SNR to estimate the spatial covariance matrix for MVDR that serves as a back-end separator. MC-TasNet consists of a parallel encoder with unconstrained learnable kernels. Once the separated signals are obtained using MC-TasNet, the signal and noise spatial covariance matrices associated with some target source can be estimated. Next, an MVDR beamformer can be implemented with weights:
$$ {\mathbf{w}}_{MVDR}=\frac{{\left({\boldsymbol{\Phi}}_f^{N_n}\right)}^{-1}{\boldsymbol{\Phi}}_f^{S_n}}{\mathrm{Tr}\left({\left({\boldsymbol{\Phi}}_f^{N_n}\right)}^{-1}{\boldsymbol{\Phi}}_f^{S_n}\right)}\mathbf{u}, $$
where \( \boldsymbol{\Phi}_f^{S_n} \) and \( \boldsymbol{\Phi}_f^{N_n} \) denote the signal and noise covariance matrices of the nth source signals, \( \mathrm{Tr}(\cdot) \) denotes the trace operation, and \( \mathbf{u} = [1\ 0\ \cdots\ 0]^T \) is an M-dimensional vector whose nonzero element indicates the reference microphone. In this evaluation, the refinement using voice activity detection is not used.
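A minimal sketch of Eq. (5) is shown below, assuming the signal and noise covariance matrices of the target source have already been estimated (e.g., from the MC-TasNet outputs); the diagonal loading and the choice of reference microphone are our assumptions.

```python
import numpy as np

def mvdr_weights(Phi_s, Phi_n, ref_mic=0, loading=1e-6):
    """MVDR weights from signal/noise spatial covariance matrices, cf. Eq. (5).

    Phi_s, Phi_n : (M, M) covariance matrices of the target source and of the
                   remaining interference, e.g., estimated from MC-TasNet outputs.
    """
    M = Phi_s.shape[0]
    Phi_n = Phi_n + loading * np.eye(M)
    numer = np.linalg.solve(Phi_n, Phi_s)              # Phi_n^{-1} Phi_s
    w = numer[:, ref_mic] / np.trace(numer)            # multiply by u = e_{ref_mic}
    return w                                           # beamformed signal: w^H x(t, f)
```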
The proposed multichannel end-to-end NN
In this contribution, an end-to-end multichannel learning-based approach is proposed to separate source signals in reverberant rooms. The network performs joint dereverberation and separation on the basis of Conv-TasNet. Unlike the original Conv-TasNet, which uses a time-domain learnable transformation to generate features, we instead use the STFT and inverse STFT to reduce the computational complexity of our BF-net. In addition, the masks in Conv-TasNet can be reformulated into a learning-based beamformer. Moreover, a U-net is concatenated to the output layer of the BF-net to serve as a postfilter of the beamformer.
Neural network-based beamforming
In array signal processing, an array aims to recover the source signals via the optimal beamforming weights \( \mathbf{w} \in \mathbb{C}^M \):
$$ {\tilde{s}}_n\left(t,f\right)={\mathbf{w}}^H\mathbf{x}\left(t,f\right). $$
The learning approach of T-F masks can be applied to the training of the beamforming weights. By converting the complex representation to the real-valued representation that is amenable to NN platforms, Eq. (6) can be rewritten as
$$ \left[\begin{array}{cc}\operatorname{Re}{\left\{\mathbf{x}\right\}}^T& \operatorname{Im}{\left\{\mathbf{x}\right\}}^T\\ {}\operatorname{Im}{\left\{\mathbf{x}\right\}}^T& -\operatorname{Re}{\left\{\mathbf{x}\right\}}^T\end{array}\right]\left[\begin{array}{c}\operatorname{Re}\left\{\mathbf{w}\right\}\\ {}\operatorname{Im}\left\{\mathbf{w}\right\}\end{array}\right]=\left[\begin{array}{c}\operatorname{Re}\left\{{\tilde{s}}_n\right\}\\ {}\operatorname{Im}\left\{{\tilde{s}}_n\right\}\end{array}\right], $$
where Re{} and Im{} denote the real part and imaginary part operations. The goal of the NN training is to obtain the beamforming weights such that the masked signal well approximates the target signal
$$ {\tilde{\mathbf{S}}}_n=\sum \limits_{m=1}^M conj\left({\mathbf{W}}_m\right)\circ {\mathbf{X}}_m, $$
where \( \tilde{\mathbf{S}}_n, \mathbf{W}_m, \mathbf{X}_m \in \mathbb{C}^{F\times T} \) denote the STFT of the nth target signal, the mth set of beamforming weights, and the mth microphone signal, respectively. The symbol "○" represents element-wise multiplication, conj(·) is the conjugate operation applied element-wise to the matrix \( \mathbf{W}_m \), and {F, T} denote the numbers of frequency bins and time frames.
$$ {\displaystyle \begin{array}{l}{\tilde{\mathbf{S}}}_n^r=\sum \limits_{m=1}^M{\mathbf{W}}_m^r\circ {\mathbf{X}}_m^r+\sum \limits_{m=1}^M{\mathbf{W}}_m^i\circ {\mathbf{X}}_m^i,\\ {}{\tilde{\mathbf{S}}}_n^i=\sum \limits_{m=1}^M{\mathbf{W}}_m^r\circ {\mathbf{X}}_m^i-\sum \limits_{m=1}^M{\mathbf{W}}_m^i\circ {\mathbf{X}}_m^r,\end{array}} $$
where the superscripts {r, i} indicate the real and imaginary part.
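The real-valued weight-and-sum of Eq. (9) reduces to a few tensor operations; a minimal PyTorch sketch is given below, assuming the network outputs the weights as a real tensor whose second axis stacks the real and imaginary planes.

```python
import torch

def weight_and_sum(W, X):
    """Real-valued multichannel weight-and-sum beamforming, cf. Eq. (9).

    W : (M, 2, F, T) estimated weights, channel 0 = real part, channel 1 = imaginary part.
    X : (M, 2, F, T) multichannel STFT of the microphone signals.
    Returns the beamformed spectrogram of shape (2, F, T).
    """
    Wr, Wi = W[:, 0], W[:, 1]
    Xr, Xi = X[:, 0], X[:, 1]
    Sr = (Wr * Xr + Wi * Xi).sum(dim=0)   # real part of the beamformer output
    Si = (Wr * Xi - Wi * Xr).sum(dim=0)   # imaginary part of the beamformer output
    return torch.stack([Sr, Si], dim=0)
```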
Dereverberation via spectral mapping
The reverberated speech signal is pre-processed by the NN-based beamforming to give the nth enhanced signal s̃n (t, f ). As indicated in the literature [54], the spectral mapping approach is in general more effective than the T-F masking approach for dereverberation problems. Therefore, an additional DNN is employed as a postfilter to learn the non-linear spectral mapping function ℋ(·). The speech signals can be de-reverberated by using this mapping function
$$ {\hat{s}}_n\left(t,f\right)=\mathscr{H}\left({\tilde{s}}_n\left(t,f\right)\right). $$
The mapping network ℋ is based on a U-net model.
Multichannel network structure
The proposed network depicted in Fig. 4 is composed of two parts according to the training strategy. In the first stage, the BF-net learns to separate the independent reverberated source signals from the mixture signals received at the microphones. In the second stage, the BF-net in conjunction with the U-net postfilter attempts to learn the spectral mapping between the reverberated signal and the anechoic signal of the independent sources. To initialize the training, the parameters of the BF-net trained in the first stage are transferred to the network in the second stage. In both stages, uPIT [18] is used to avoid permutation ambiguity. The network architectures are detailed next.
Fig. 4 The structure of the proposed network based on two training stages
The first stage: the weight-and-sum beamforming network
The aim of this network is to generate N sets of optimal beamforming weights \( {\left\{{\mathrm{W}}_m^r,{\mathrm{W}}_m^i\right\}}_{m=1}^M \in \mathbb{R}^{F\times T} \) for the weight-and-sum operation in Eq. (9). STFT is utilized to produce the input acoustic features. Inter-channel time, phase, and level differences (ITD, IPD, and ILD) [38], which are commonly used spatial cues, can be estimated from the STFT data. In this contribution, we adopt ILD, cosine IPD, and sine IPD defined as
$$ {\displaystyle \begin{array}{l}\mathrm{ILD}=10\ \log\ \frac{\mid {x}_m\left(t,f\right)\mid }{\mid {x}_1\left(t,f\right)\mid },\\ {}\cos\ \mathrm{IPD}=\cos\ \left[\angle {x}_m\left(t,f\right)-\angle {x}_1\left(t,f\right)\right],\\ {}\mathrm{and}\ \sin\ \mathrm{IPD}=\sin\ \left[\angle {x}_m\left(t,f\right)-\angle {x}_1\left(t,f\right)\right],\end{array}} $$
where the first microphone is used as the reference sensor and \( x_m(t,f) \), m = 2, …, M, is the STFT of the mth microphone signal. In addition, spectral features such as the log power spectral density (LPSD) and the cosine and sine phases of the first microphone are combined with the spatial features. That is, we concatenate the spatial features, \( {\left\{{\mathrm{X}}_{ILD},{\mathrm{X}}_{\cos IPD},{\mathrm{X}}_{\sin IPD}\right\}}_{m=2}^M \in \mathbb{R}^{F\times T} \), and the spectral features of the first microphone, \( \{\mathrm{X}_{LPSD}, \mathrm{X}_{\cos\angle x_1}, \mathrm{X}_{\sin\angle x_1}\} \in \mathbb{R}^{F\times T} \), to form the complete feature, \( \boldsymbol{\Lambda} \in \mathbb{R}^{3MF\times T} \), as the input to the BF-net.
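A minimal PyTorch sketch of this feature extraction is given below; the ordering of the concatenated feature maps and the small constants added for numerical stability are our assumptions.

```python
import torch

def bf_net_features(X, eps=1e-8):
    """Spatial + spectral input features for the BF-net, cf. Eq. (11).

    X : (M, F, T) complex multichannel STFT; microphone 0 is the reference sensor.
    Returns a real feature tensor of shape (3*M*F, T).
    """
    ref = X[0]
    mag, phase = X.abs(), X.angle()
    # inter-channel cues of microphones 2..M relative to the reference microphone
    ild = 10.0 * torch.log10((mag[1:] + eps) / (mag[0:1] + eps))   # (M-1, F, T)
    ipd = phase[1:] - phase[0:1]
    cos_ipd, sin_ipd = ipd.cos(), ipd.sin()
    # spectral cues of the reference microphone
    lpsd = torch.log(ref.abs() ** 2 + eps)                         # (F, T)
    cos_ref, sin_ref = ref.angle().cos(), ref.angle().sin()
    spatial = torch.cat([ild, cos_ipd, sin_ipd], dim=0)            # (3(M-1), F, T)
    spectral = torch.stack([lpsd, cos_ref, sin_ref], dim=0)        # (3, F, T)
    return torch.cat([spatial.flatten(0, 1), spectral.flatten(0, 1)], dim=0)
```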
The BF-net leverages the main architecture of Conv-TasNet [19], which consists of multiple time-dilated convolutional blocks, as illustrated in Fig. 5. The dilation factors of the time-dilated blocks increase in powers of two, from 1 up to \(2^{D-1}\). The input data is zero padded to keep the output dimension for each convolutional block. This stack of increasingly dilated blocks is repeated R times. The array weights are estimated through the 1 × 1 pointwise convolutional layer (1×1-Conv) with no activation function. The network is modified from Conv-TasNet by retaining only the residual path of the time-dilated CNN blocks. That is, the output of every convolutional block sums with its input to become the input of the next block. The detailed design of the convolutional block is shown on the right-hand side of Fig. 5. Before the data is passed to the convolutional block, the input size is adjusted to B by using a bottleneck layer that is essentially a 1 × 1-Conv layer. In the convolutional block, the feature is adjusted to a larger size H > B, also through a 1 × 1-Conv layer. A depthwise separable convolution [55] follows, in which a one-dimensional depthwise CNN with kernel size P convolves with the corresponding input vectors. Next, with the 1 × 1-Conv, the output size returns to B in order to merge with the input data to the next layer of the convolutional block. The parametric rectified linear unit (PReLU) is used as the activation function [56], with the aid of global layer normalization [19].
Fig. 5 The detailed structure of the beamforming network
Curriculum learning [57] is employed in the training stage. The training starts with the reverberant utterances as the training targets, and the targets are switched to the anechoic utterances when the convergence condition of the loss function is met. Finally, the N sets of separated signals, \( \tilde{\mathbf{S}} \in \mathbb{R}^{N\times 2\times F\times T} \), are obtained as described in Fig. 4. The hyperparameters of the non-causal time-dilated convolutional blocks employed in the BF-net are summarized in Table 1. Adam [58] is used as the optimizer with a learning rate of \(10^{-3}\).
Table 1 Hyper-parameters used in the first stage of the BF-net
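For illustration, a minimal PyTorch sketch of one such time-dilated block is given below; GroupNorm(1, ·) is used here as a stand-in for global layer normalization, and the default sizes are placeholders rather than the values in Table 1.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """One time-dilated convolutional block of the BF-net (Conv-TasNet style)."""

    def __init__(self, B=128, H=256, P=3, dilation=1):
        super().__init__()
        self.expand = nn.Conv1d(B, H, kernel_size=1)            # 1x1-Conv to size H
        self.prelu1 = nn.PReLU()
        self.norm1 = nn.GroupNorm(1, H)                          # global-style layer norm
        self.dconv = nn.Conv1d(H, H, kernel_size=P, groups=H,    # depthwise convolution
                               dilation=dilation,
                               padding=(P - 1) * dilation // 2)  # keep the time dimension
        self.prelu2 = nn.PReLU()
        self.norm2 = nn.GroupNorm(1, H)
        self.project = nn.Conv1d(H, B, kernel_size=1)            # pointwise conv back to B

    def forward(self, x):                                        # x: (batch, B, T)
        y = self.norm1(self.prelu1(self.expand(x)))
        y = self.norm2(self.prelu2(self.dconv(y)))
        return x + self.project(y)                               # residual path only

# The BF-net stacks D blocks with dilations 1, 2, ..., 2**(D-1), repeated R times.
```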
The second stage: separation and dereverberation network
As illustrated in Fig. 4, the BF-net in conjunction with a U-net postfilter is employed in the second stage of joint network training. The U-net postfilter is intended for dereverberation. The parameters trained in the first stage are transferred to the BF-net in the second stage. The outcome of the training is the direct mapping between the N sets of the de-reverberated signals, \( \hat{\mathbf{S}}_{drv} \in \mathbb{R}^{N\times 2\times F\times T} \), and the anechoic speech signals, \( \mathbf{S} \in \mathbb{R}^{N\times 2\times F\times T} \). Before the estimated output of the BF-net, \( \tilde{\mathbf{S}} \in \mathbb{R}^{N\times 2\times F\times T} \), is passed to the U-net, the signals in the STFT domain are pre-processed to obtain the spectral cues, including LPSD and its corresponding sine and cosine phases, \( {\left\{{\tilde{\mathbf{S}}}_{LPSD}^n,{\tilde{\mathbf{S}}}_{\cos \angle x}^n,{\tilde{\mathbf{S}}}_{\sin \angle x}^n\right\}}_1^N\in {\mathrm{\mathbb{R}}}^{F\times T} \). This feature set serves as the input to the U-net model with an appropriate input channel number. For example, if the output of the first stage is N separated sources, the pre-processing channel number will be 3N. Hence, the feature size passed to the U-net is \( \tilde{\boldsymbol{\Lambda}} \in \mathbb{R}^{3N\times F\times T} \).
The U-net model for a two-source problem is depicted in Fig. 6. The encoder structure consists of two 3 × 3 two-dimensional CNN layers, where the output is zero-padded to keep the size of the data, followed by a rectified linear unit (ReLU) and a 2 × 2 max-pooling layer with a stride size equal to 2. In a down-sampling step, the number of input channels is doubled and the output features serve as the shared information for the decoder. The decoder up-samples the data through the 2 × 2 transpose convolutional network along with halved feature maps of the input channels, where each is followed by the concatenation of the corresponding maps from the encoder and repeated 3 × 3 CNN layers with ReLU activation. To accelerate the training process, we also perform the depthwise separable convolution [55] in the consecutive CNN layers. The output layer produces the nth real and imaginary parts of the enhanced signal in the STFT domain, \( {\hat{\mathbf{S}}}_{drv,n}=\left\{{\hat{\mathbf{S}}}_n^r,{\hat{\mathbf{S}}}_n^i\right\} \in \mathbb{R}^{F\times T} \), through a 1 × 1 CNN layer.
Fig. 6 Example of the U-net for a two-source problem
The estimated signals can be recovered in the time domain with the ISTFT process, where the overlap-and-add operation is applied. The network parameters are summarized in Fig. 6, with the channel numbers indicated and the kernel size of the associated layer labeled at the bottom. During training, Adam [58] is used as the optimizer with a learning rate of \(10^{-4}\).
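A minimal one-level PyTorch sketch of this encoder-decoder structure is given below; the channel counts, the single down/up level, the per-source output head, and the even spectrogram dimensions are simplifying assumptions, whereas the paper's model uses more levels as shown in Fig. 6.

```python
import torch
import torch.nn as nn

def sep_conv(c_in, c_out):
    """3x3 depthwise separable convolution (zero-padded) followed by ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),
        nn.Conv2d(c_in, c_out, 1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-net postfilter sketch; in_ch = 3N (N = 2 sources here)."""

    def __init__(self, in_ch=6, base=32, out_ch=2):
        super().__init__()
        self.enc = nn.Sequential(sep_conv(in_ch, base), sep_conv(base, base))
        self.pool = nn.MaxPool2d(2)                        # 2x2 max-pooling, stride 2
        self.mid = nn.Sequential(sep_conv(base, 2 * base), sep_conv(2 * base, 2 * base))
        self.up = nn.ConvTranspose2d(2 * base, base, kernel_size=2, stride=2)
        self.dec = nn.Sequential(sep_conv(2 * base, base), sep_conv(base, base))
        self.out = nn.Conv2d(base, out_ch, kernel_size=1)  # real/imag of one source

    def forward(self, feats):                              # feats: (batch, 3N, F, T), F and T even
        skip = self.enc(feats)                             # shared with the decoder
        bottom = self.mid(self.pool(skip))
        up = self.up(bottom)
        return self.out(self.dec(torch.cat([up, skip], dim=1)))
```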
The objective function
The time-domain SI-SNR [43] is widely used as the objective function in separation tasks [19, 59]. However, if the system is designed in the frequency domain, direct minimization of the mean square error (MSE) is usually adopted as the objective function, which is not directly related to the separation criterion. Furthermore, because the target signals are usually T-F spectrograms with a fixed magnitude, the estimated output is basically limited to a certain level. Therefore, the performance of the network will be intrinsically restricted by the definition of the MSE loss function. In order to improve the flexibility of the network output when training in the frequency domain, the scale-invariant MSE (SI-MSE) is formulated by introducing a scaling factor γ:
$$ \mathcal{L}={\left\Vert {\hat{\mathbf{S}}}_n-\gamma {\mathbf{S}}_n\right\Vert}_F^2, $$
where Ŝn and Sn are the nth estimated signal and the target signal in the STFT domain. By minimizing the objective function with respect to γ, the optimal scaling value γ can be obtained as
$$ \gamma =\frac{\sum_{t,f}{\hat{S}}_n^r\left(t,f\right){S}_n^r\left(t,f\right)+{\hat{S}}_n^i\left(t,f\right){S}_n^i\left(t,f\right)}{\sum_{t,f}{S}_n^r{\left(t,f\right)}^2+{S}_n^i{\left(t,f\right)}^2}, $$
where \( \left\{{\hat{S}}_n^r\left(t,f\right),{\hat{S}}_n^i\left(t,f\right)\right\} \) denote the real and imaginary parts of the nth estimated signal \( \hat{\mathbf{S}}_n \) in Eq. (12), and similarly for the target signal \( \mathbf{S}_n \). Therefore, the MSE loss can be rewritten in the form of SI-SNR as
$$ SI\hbox{-} SNR\left({\hat{\mathbf{S}}}_n,\gamma {\mathbf{S}}_n\right):= 10{\log}_{10}\frac{{\left\Vert \gamma {\mathbf{S}}_n\right\Vert}_F^2}{{\left\Vert {\hat{\mathbf{S}}}_n-\gamma {\mathbf{S}}_n\right\Vert}_F^2}, $$
which can be optimized in the frequency domain with a scalable network output. We adopt this objective function in both training stages and, meanwhile, uPIT [18] is also employed to prevent the network outputs from suffering the permutation ambiguity error. When the SI-SNR on the validation set no longer improves for 10 consecutive epochs, the convergence criterion is said to be met and the training stage is stopped.
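A minimal PyTorch sketch of the frequency-domain SI-SNR in Eqs. (12)-(14) is given below; the small constant added for numerical stability is our assumption, and the permutation search of uPIT is omitted.

```python
import torch

def si_snr_freq(est, target, eps=1e-8):
    """Scale-invariant SI-SNR in the STFT domain, cf. Eqs. (12)-(14).

    est, target : (..., 2, F, T) real/imaginary spectrograms.
    Returns SI-SNR in dB (to be maximized; negate it to use as a loss).
    """
    dot = (est * target).sum(dim=(-3, -2, -1))             # numerator of Eq. (13)
    energy = (target ** 2).sum(dim=(-3, -2, -1)) + eps     # denominator of Eq. (13)
    gamma = dot / energy                                   # optimal scaling factor
    scaled = gamma[..., None, None, None] * target
    err = ((est - scaled) ** 2).sum(dim=(-3, -2, -1)) + eps
    return 10.0 * torch.log10((scaled ** 2).sum(dim=(-3, -2, -1)) / err + eps)
```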
Results and discussions
Dataset generation
Two array geometries fitted with different numbers of microphones are examined, namely uniform circular arrays (UCAs) and uniform linear arrays (ULAs). As illustrated in Fig. 7, UCAs of 4.4 cm radius fitted with 2, 3, 4, and 6 microphones are shown in the upper row, and ULAs of 15 cm fitted with 2, 4, and 6 microphones are shown in the lower row.
Two array geometries fitted with different numbers of microphones examined in the work
The dataset is generated through Monte Carlo simulation. Two independent speakers are randomly positioned in rooms of five different sizes. The microphone array is also randomly placed in the same room at half of the room height. The sources are kept at least 0.5 m away from the walls. The two sources are kept at least 1 m apart, while the distance between each source and the array center is at least 0.7 m. Azimuth angles ranging from 0° to 360° and elevation angles ranging from 0° to 70° are examined. The dataset is remixed from the VCTK corpus [47], where the speech recordings are down-sampled to 16 kHz for our use. Speech segments of 92 speakers are randomly selected for training and validation, whereas 15 unseen speakers are selected for testing. The image source method (ISM) [60] is employed to generate room impulse responses (RIRs) with reverberation times (T60) ranging from 200 ms to 900 ms. The anechoic signal received at the reference microphone is adopted as the training target. Mixture signals are generated by mixing four-second RIR-filtered utterance segments of two randomly selected speakers. Speech mixtures with signal-to-interference ratios ranging from −5 dB to 5 dB are used in training and testing. The simulation settings are summarized in Table 2, and the resulting data sizes are 30,000 and 3000 mixtures for the training and testing sets, respectively. An additional 5000 mixtures for validation are created in the same manner as the training set in order to determine the convergence of the network. To further improve the performance of the network, we also use the dynamic mixing (DM) approach [61] to augment the dataset. The training set is switched to online data generation, where two randomly selected speech segments are convolved with the pre-generated RIRs and mixed together during the training phase.
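A schematic of the mixing step is sketched below (NumPy/SciPy); the RIRs are assumed to have already been generated with an ISM implementation, the dry utterances are assumed to be at least four seconds long, and all function names are illustrative.

import numpy as np
from scipy.signal import fftconvolve

FS = 16000
SEG_LEN = 4 * FS                                   # four-second segments

def mix_two_speakers(s1, s2, rir1, rir2, sir_db, ref_mic=0):
    # convolve two dry utterances with multichannel RIRs and mix them at a given SIR;
    # s1, s2: (samples,) dry speech, rir1, rir2: (mics, rir_len) room impulse responses
    x1 = np.stack([fftconvolve(s1, h)[:SEG_LEN] for h in rir1])
    x2 = np.stack([fftconvolve(s2, h)[:SEG_LEN] for h in rir2])
    # scale the second source so that the SIR at the reference microphone equals sir_db
    p1 = np.mean(x1[ref_mic] ** 2)
    p2 = np.mean(x2[ref_mic] ** 2) + 1e-12
    x2 *= np.sqrt(p1 / (p2 * 10.0 ** (sir_db / 10.0)))
    return x1 + x2, x1, x2                          # mixture and the two reverberant images

# sir_db is drawn uniformly from [-5, 5] dB, and the dry utterances come from
# randomly selected VCTK speakers, as summarized in Table 2.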
Table 2 Data settings of the training and testing set
Evaluation of the proposed network
The separation performance of the proposed network is assessed on the testing set in Table 2. The processed data are evaluated and averaged in terms of the improvement in time-domain SI-SNR [43] (∆SI-SNR), the improvement in PESQ [48] (∆PESQ), and the improvement in STOI [49] (∆STOI) with respect to the unprocessed signal received at the first microphone. In this section, the evaluation is based on the six-element UCA. The models evaluated are BF-net (the first stage), BF-net with LSTM, BF-net with U-net, and BF-net with U-net and DM. The BF-net (the first stage) refers to the half-trained network where only the first training stage is performed. BF-net with LSTM is an alternative network where four layers of deep long short-term memory (LSTM) with 1024 neurons are adopted as the nonlinear postfilter. The BF-net with U-net is the complete model of the proposed network. Moreover, the performance can be further improved by utilizing the DM approach. Two sources with subtending angles within 0°–15°, 15°–45°, 45°–90°, and 90°–180° are investigated. The results summarized in Table 3 suggest that the separation performance can be improved by the nonlinear postfilter network and by adopting DM during training. It can be seen from the ∆SI-SNR results that the subtending angle of the two sources has little effect on the performance. However, the ΔPESQ score varies significantly with the subtending angle: ΔPESQ increases for subtending angles up to 90° and slightly decreases for subtending angles larger than 90°. In addition, room responses with different reverberation times, T60 = 0.16 s, 0.36 s, 0.61 s, and 0.9 s, are also investigated. In Table 4, ∆SI-SNR appears to be nearly independent of the reverberation time. The proposed network can be expected to perform better at low T60 than at high T60 because the unprocessed signal is less severely corrupted, and ∆PESQ follows this trend. The average scores of the performance indices, including ∆STOI, indicate that the six-channel BF-net with U-net and DM is the best model.
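For reference, a sketch of how these improvements could be computed is given below; it assumes the third-party pesq and pystoi packages and a simple time-domain SI-SNR helper, and is meant only to make the evaluation protocol concrete.

import numpy as np
from pesq import pesq        # ITU-T P.862 implementation
from pystoi import stoi

FS = 16000

def si_snr_time(est, ref, eps=1e-8):
    # time-domain scale-invariant SNR in dB
    est = est - np.mean(est)
    ref = ref - np.mean(ref)
    proj = np.dot(est, ref) / (np.dot(ref, ref) + eps) * ref
    noise = est - proj
    return 10.0 * np.log10(np.sum(proj ** 2) / (np.sum(noise ** 2) + eps))

def improvements(est, mix_ref_mic, clean):
    # Delta SI-SNR / PESQ / STOI of the estimate relative to the unprocessed first-microphone signal
    d_sisnr = si_snr_time(est, clean) - si_snr_time(mix_ref_mic, clean)
    d_pesq = pesq(FS, clean, est, 'wb') - pesq(FS, clean, mix_ref_mic, 'wb')
    d_stoi = stoi(clean, est, FS) - stoi(clean, mix_ref_mic, FS)
    return d_sisnr, d_pesq, d_stoi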
Table 3 Performance improvement of the proposed network evaluated with the six-channel UCA for different subtending angles
Table 4 Performance improvement of the proposed network evaluated with the six-channel UCA for different reverberation time
Comparison with the baseline approaches
In this section, we compare our best model with the traditional BF, BSS, and DNN approaches introduced in Section 2, where WPE with MPDR and WPE with TIKR are the BF approaches, WPE with IVA is the BSS approach, and Beam-TasNet is the DNN method. The test cases are identical to those discussed in Section 4.2. The separation performance is summarized in Tables 5 and 6. The results indicate that the proposed network outperforms the baseline methods in all three performance metrics. Specifically, ΔSI-SNR in Table 5 reveals that the performance of the BF approaches is highly dependent on the subtending angle. For closely spaced sources with subtending angles within 0°–15°, WPE + TIKR performs poorly. In contrast, the BSS and the proposed learning-based approaches are more robust than the BF approaches for separating closely spaced sources. Furthermore, the ΔSI-SNR and ΔPESQ of the BSS approach and the proposed DNN-based approach exhibit little variation across subtending angles and reverberation times. Although Beam-TasNet performs well in terms of ΔSI-SNR, its enhancement is not satisfactory in terms of ΔPESQ and ΔSTOI, in particular when the subtending angle is small or the reverberation time is large. Because the estimation of the spatial covariance matrix for the MVDR beamformer relies heavily on MC-TasNet, the estimation error has a significant impact on the performance of MVDR, especially in adverse acoustic conditions.
Table 5 Comparison of the separation approaches based on the six-channel UCA for different subtending angles
Table 6 Comparison of the separation approaches based on the six-channel UCA for different reverberation time
Genericity to different array geometry
To further assess the applicability of the proposed pipeline to different array geometries, two kinds of array geometries fitted with different numbers of microphones are examined. Tables 7 and 8 summarize the performance improvement for the UCAs and ULAs when applied in rooms with different reverberation times. The results in both tables indicate that the proposed network performs well for various numbers of microphones. Furthermore, the performance of the proposed network increases with the number of microphones for both UCAs and ULAs. The results also show that the ULA can perform better than the UCA when only two microphones are adopted, owing to its larger aperture. In summary, the proposed network is applicable to different array geometries provided the dataset is properly generated for the corresponding geometry. Nevertheless, a network trained on a UCA cannot be directly utilized on a ULA, and re-training is required.
Table 7 Performance improvement for UCAs with different number of microphones when applied in rooms with different reverberation times
Table 8 Performance improvement for ULAs with different number of microphones when applied in rooms with different reverberation times
In this paper, we have proposed a multichannel learning-based DNN and demonstrated its efficacy in source separation in reverberant environments. The end-to-end system relies on joint training of a BF-net and a U-net. In light of the two-stage training strategy and the DM approach, the proposed six-channel network proves effective in dereverberation and separation. The proposed network has demonstrated superior performance in terms of SI-SNR, PESQ, and STOI, as compared with several baseline methods. The proposed network remains effective even for closely spaced sources and highly reverberant scenarios. In addition, its applicability to different array geometries is validated, provided the dataset is properly generated for the corresponding geometry. However, a network trained on a UCA cannot be utilized directly on a ULA, and vice versa.
Despite the excellent performance of the DNN-based approach, it is worth noting some of its limitations. It is a "black box" approach in which physical insights play little role. Large amounts of data are required for training the network, which can be difficult, if not impossible, to obtain in practical applications. Generalization may be limited if the dataset is not sufficiently comprehensive. These limitations of DNNs turn out to be the strengths of the BF and BSS approaches. Network integration to create synergy among these techniques is on the future research agenda.
The demonstration of the processed audio samples can be found at: https://Siang-Chen.github.io/
SI-MSE:
Scale invariant mean square error
SI-SNR:
Scale invariant signal-to-noise ratio
BSS:
Blind source separation
BF:
Beamforming
BF-net:
Beamforming network
MPDR:
Minimum power distortionless response
TIKR:
Tikhonov regularization
T-F:
Time-frequency
IVA:
Independent vector analysis
DNN:
Deep neural network
CNN:
Convolutional neural network
uPIT:
Utterance-level permutation invariant training
Conv-TasNet:
Fully convolutional time-domain audio separation network
WPE:
Weighted prediction error
STFT:
Short-time Fourier transform
PESQ:
Perceptual evaluation of speech quality
STOI:
Short-time objective intelligibility
1× 1-Conv:
1 × 1 pointwise convolutional layer
IPD:
Inter-channel phase differences
ILD:
Inter-channel level differences
LPSD:
Log power spectral density
UCA:
Uniform circular array
ULA:
Uniform linear array
DM:
Dynamic mixing
I. McCowan, Microphone arrays: a tutorial (Queensland University, Australia, 2001), p. 1
F. Gustafsson, F. Gunnarsson, in 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP'03). Positioning using time-difference of arrival measurements, vol 6 (2003), pp. VI–553
Z. Khan, M.M. Kamal, N. Hamzah, K. Othman, N. Khan, in 2008 IEEE International RF and Microwave Conference. Analysis of performance for multiple signal classification (MUSIC) in estimating direction of arrival (2008), pp. 524–529
K. Nakadai, K. Nakamura, in Wiley Encyclopedia of Electrical and Electronics Engineering. Sound source localization and separation, (New York: John Wiley & Sons, 2015), pp. 1–18
S.A. Vorobyov, Principles of minimum variance robust adaptive beamforming design. Signal Process. 93, 3264 (2013)
M. Fuhry, L. Reichel, A new Tikhonov regularization method. Numerical Algorithms 59, 433 (2012)
S. Amari, S.C. Douglas, A. Cichocki, H.H. Yang, in First IEEE Signal Processing Workshop on Signal Processing Advances in Wireless Communications. Multichannel blind deconvolution and equalization using the natural gradient (1997), pp. 101–104
M. Kawamoto, K. Matsuoka, N. Ohnishi, A method of blind separation for convolved non-stationary signals. Neurocomputing 22, 157 (1998)
T. Takatani, T. Nishikawa, H. Saruwatari, K. Shikano, in Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. High-fidelity blind separation for convolutive mixture of acoustic signals using SIMO-model-based independent component analysis, vol 2 (2003), pp. 77–80
D.W. Schobben, P. Sommen, A frequency domain blind signal separation method based on decorrelation. IEEE Trans. Signal Process. 50, 1855 (2002)
S. Makino, H. Sawada, S. Araki, in Blind Speech Separation. Frequency-domain blind source separation (Dordrecht: Springer, 2007), pp. 47–78
H. Buchner, R. Aichner, W. Kellermann, A generalization of blind source separation algorithms for convolutive mixtures based on second-order statistics. IEEE Trans. Speech Audio Process. 13, 120 (2004)
T. Kim, I. Lee, T.-W. Lee, in 2006 Fortieth Asilomar Conference on Signals, Systems and Computers. Independent vector analysis: definition and algorithms (2006), pp. 1393–1396
Y. Wang, D. Wang, Towards scaling up classification- based speech separation. IEEE Trans. Audio Speech Lang. Process. 21, 1381 (2013)
S. Mobin, B. Cheung, B. Olshausen, Generalization challenges for neural architectures in audio source separation, arXiv preprint arXiv:1803.08629 (2018)
P.-S. Huang, M. Kim, M. Hasegawa-Johnson, P. Smaragdis, Joint optimization of masks and deep re- current neural networks for monaural source separation. IEEE/ACM Trans. Audio Speech Lang. Process. 23, 2136 (2015)
J.R. Hershey, Z. Chen, J. Le Roux, S. Watanabe, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Deep clustering: discriminative embeddings for segmentation and separation (2016), pp. 31–35
M. Kolbæk, D. Yu, Z.-H. Tan, J. Jensen, Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks. IEEE/ACM Trans. Audio Speech Lang. Process. 25, 1901 (2017)
Y. Luo, N. Mesgarani, Conv-TasNet: Surpassing ideal time–frequency magnitude masking for speech separation. IEEE/ACM Trans. Audio Speech Lang. Process. 27, 1256 (2019)
K. Furuya, S. Sakauchi, A. Kataoka, in 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings. Speech dereverberation by combining MINT-based blind deconvolution and modified spectral subtraction, vol 1 (2006), p. I–I
T. Nakatani, B.-H. Juang, T. Yoshioka, K. Kinoshita, M. Miyoshi, in 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. Importance of energy and spectral features in gaussian source model for speech dereverberation (New Paltz: IEEE, 2007), pp. 299–302
T. Nakatani, T. Yoshioka, K. Kinoshita, M. Miyoshi, B.-H. Juang, in 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. Blind speech dereverberation with multi- channel linear prediction based on short time fourier transform representation (2008), pp. 85–88
T. Nakatani, T. Yoshioka, K. Kinoshita, M. Miyoshi, B.-H. Juang, Speech dereverberation based on variance-normalized delayed linear prediction. IEEE Trans. Audio Speech Lang. Process. 18, 1717 (2010)
T. Yoshioka, T. Nakatani, M. Miyoshi, H.G. Okuno, Blind separation and dereverberation of speech mixtures by joint optimization. IEEE Trans. Audio Speech Lang. Process. 19, 69 (2010)
A. Jukić, N. Mohammadiha, T. van Waterschoot, T. Gerkmann, S. Doclo, in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Multi-channel linear prediction-based speech dereverberation with low-rank power spectrogram approximation (2015), pp. 96–100
F. Weninger, S. Watanabe, Y. Tachioka, B. Schuller, in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Deep recurrent de-noising auto-encoder and blind dereverberation for reverberated speech recognition (2014), pp. 4623–4627
D.S. Williamson, D. Wang, in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). Speech dereverberation and denoising using complex ratio masks (2017), pp. 5590–5594
J. Heymann, L. Drude, R. Haeb-Umbach, K. Kinoshita, T. Nakatani, in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Joint optimization of neural network- based WPE dereverberation and acoustic model for robust online ASR (2019), pp. 6655–6659
K. Kinoshita, M. Delcroix, H. Kwon, T. Mori, T. Nakatani, in Interspeech. Neural network-based spectrum estimation for online wpe dereverberation (2017), pp. 384–388
M. Delcroix, T. Yoshioka, A. Ogawa, Y. Kubo, M. Fujimoto, N. Ito, K. Kinoshita, M. Espi, S. Araki, T. Hori, et al., Strategies for distant speech recognition in reverberant environments. EURASIP J. Adv. Signal Process. 2015, 1 (2015)
W. Yang, G. Huang, W. Zhang, J. Chen, J. Benesty, in 2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC). Dereverberation with differential microphone arrays and the weighted-prediction-error method (2018), pp. 376–380
M. Togami, in 2015 23rd European Signal Processing Conference (EUSIPCO). Multichannel online speech dereverberation under noisy environments (2015), pp. 1078–1082
L. Drude, C. Boeddeker, J. Heymann, R. Haeb-Umbach, K. Kinoshita, M. Delcroix, T. Nakatani, in Interspeech. Integrating neural network based beamforming and weighted prediction error dereverberation (2018), pp. 3043–3047
T. Nakatani, K. Kinoshita, A unified convolutional beamformer for simultaneous denoising and dereverberation. IEEE Signal Process. Lett. 26, 903 (2019)
G. Wichern, J. Antognini, M. Flynn, L.R. Zhu, E. McQuinn, D. Crow, E. Manilow, J.L. Roux, Wham!: Extending speech separation to noisy environments, arXiv preprint arXiv:1907.01160 (2019)
C. Ma, D. Li, X. Jia, Two-stage model and optimal si-snr for monaural multi-speaker speech separation in noisy environment, arXiv preprint arXiv:2004.06332 (2020)
T. Yoshioka, Z. Chen, C. Liu, X. Xiao, H. Erdogan, D. Dimitriadis, in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Low-latency speaker-independent continuous speech separation (2019), pp. 6980–6984
Z.-Q. Wang, D. Wang, in Interspeech. Integrating spectral and spatial features for multi-channel speaker separation (2018), pp. 2718–2722
J. Wu, Z. Chen, J. Li, T. Yoshioka, Z. Tan, E. Lin, Y. Luo, L. Xie, An end-to-end architecture of online multi-channel speech separation, arXiv preprint arXiv:2009.03141 (2020)
T. Nakatani, R. Takahashi, T. Ochiai, K. Kinoshita, R. Ikeshita, M. Delcroix, S. Araki, in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). DNN-supported mask-based convolutional beamforming for simultaneous denoising, dereverberation, and source separation (2020), pp. 6399–6403
Y. Fu, J. Wu, Y. Hu, M. Xing, L. Xie, in 2021 IEEE Spoken Language Technology Workshop (SLT). DESNET: A multi-channel network for simultaneous speech dereverberation, enhancement and separation (2021), pp. 857–864
T. Ochiai, M. Delcroix, R. Ikeshita, K. Kinoshita, T. Nakatani, S. Araki, in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Beam-Tasnet: Time-domain audio separation network meets frequency-domain beam- former (2020), pp. 6384–6388
J. Le Roux, S. Wisdom, H. Erdogan, J.R. Hershey, in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). SDR–half-baked or well done? (2019), pp. 626–630
O. Ronneberger, P. Fischer, T. Brox, in International Conference on Medical image computing and computer-assisted intervention. U-net: Convolutional networks for biomedical image segmentation (Cham: Springer, 2015), pp. 234–241
O. Ernst, S.E. Chazan, S. Gannot, J. Goldberger, in 2018 26th European Signal Processing Conference (EUSIPCO). Speech dereverberation using fully convolutional networks (2018), pp. 390–394
V. Kothapally, W. Xia, S. Ghorbani, J.H. Hansen, W. Xue, J. Huang, Skipconvnet: Skip convolutional neural network for speech dereverberation using optimally smoothed spectral mapping, arXiv preprint arXiv:2007.09131 (2020)
J. Yamagishi, C. Veaux, K. MacDonald, et al., CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit (version 0.92) (2019).
A.W. Rix, J.G. Beerends, M.P. Hollier, A.P. Hekstra, in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No. 01CH37221). Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs, vol 2 (2001), pp. 749–752
C.H. Taal, R.C. Hendriks, R. Heusdens, J. Jensen, in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. A short-time objective intelligibility measure for time-frequency weighted noisy speech (2010), pp. 4214–4217
K. Kinoshita, M. Delcroix, T. Nakatani, M. Miyoshi, Suppression of late reverberation effect on speech signal using long-term multiple-step linear prediction. IEEE Trans. Audio Speech Lang. Process. 17, 534 (2009)
S.-I. Amari, A. Cichocki, H.H. Yang, et al., in Advances in neural information processing systems. A new learning algorithm for blind signal separation (1996), pp. 757–763
S. Wold, K. Esbensen, P. Geladi, Principal component analysis. Chemom. Intell. Lab. Syst. 2, 37 (1987)
R. Gu, J. Wu, S. Zhang, L. Chen, Y. Xu, M. Yu, D. Su, Y. Zou, D. Yu, End-to-end multi-channel speech separation, arXiv preprint arXiv:1905.06286 (2019)
Y. Zhao, Z.-Q. Wang, D. Wang, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). A two-stage algorithm for noisy and reverberant speech enhancement (2017), pp. 5580–5584
F. Chollet, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Xception: Deep learning with depthwise separable convolutions (2017)
K. He, X. Zhang, S. Ren, J. Sun, in Proceedings of the IEEE international conference on computer vision. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), pp. 1026–1034
Y. Bengio, J. Louradour, R. Collobert, J. Weston, in Proceedings of the 26th annual international conference on machine learning. Curriculum learning (2009), pp. 41–48
D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014)
F. Bahmaninezhad, J. Wu, R. Gu, S.-X. Zhang, Y. Xu, M. Yu, D. Yu, A comprehensive study of speech separation: spectrogram vs waveform separation, arXiv preprint arXiv:1905.07497 (2019)
J.B. Allen, D.A. Berkley, Image method for efficiently simulating small-room acoustics. J. Acoust. Soc. Am. 65, 943 (1979)
N. Zeghidour, D. Grangier, Wavesplit: End-to-end speech separation by speaker clustering, arXiv preprint arXiv:2002.08933 (2020)
Thanks to Dr. Mingsian Bai for his three-month visit to the LMS, FAU, Erlangen-Nuremberg, which made this research work possible.
The work was supported by the Add-on Grant for International Cooperation (MAGIC) of the Ministry of Science and Technology (MOST) in Taiwan, under the project number 107-2221-E-007-039-MY3.
Department of Power Mechanical Engineering/Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
You-Siang Chen, Zi-Jie Lin & Mingsian R. Bai
You-Siang Chen
Zi-Jie Lin
Mingsian R. Bai
Model development: Y.S. Chen, Z.J. Lin, M. R. Bai. Design of the dataset and test cases: Y.S. Chen and Z.J. Lin. Experimental testing: Y.S. Chen and Z.J. Lin. Writing paper: Y.S. Chen. All the authors reviewed and approved the final manuscript.
Correspondence to Mingsian R. Bai.
Chen, YS., Lin, ZJ. & Bai, M.R. A multichannel learning-based approach for sound source separation in reverberant environments. J AUDIO SPEECH MUSIC PROC. 2021, 38 (2021). https://doi.org/10.1186/s13636-021-00227-2
Source separation and dereverberation
Multichannel learning-based network
Time-dilated convolution network
U-net
Data-Based Spatial Audio Processing | CommonCrawl |
\begin{document}
\author{} \title{} \maketitle
\begin{center} \thispagestyle{empty} \pagestyle{myheadings} \markboth{\bf Yilmaz Simsek }{\bf New Generating Functions of the Stirling numbers, Frobenius-Euler and Related polynomials}
\textbf{{\Large Generating functions for generalized \textbf{Stirling type numbers, }Array type polynomials,\textbf{\ }Eulerian type polynomials and their applications}}
\textbf{Yilmaz Simsek}\\[0pt]
Department of Mathematics, Faculty of Science University of Akdeniz TR-07058 Antalya, Turkey [email protected]\\[0pt]
\textbf{{\large {Abstract}}}
\end{center}
\begin{quotation} The first aim of this paper is to construct new generating functions for the generalized $\lambda $-Stirling type numbers of the second kind, generalized array type polynomials and generalized Eulerian type polynomials and numbers, attached to a Dirichlet character. We derive various functional equations and differential equations using these generating functions. The second aim is to provide a novel approach to deriving identities, including multiplication formulas and recurrence relations, for these numbers and polynomials by means of these functional equations and differential equations. Furthermore, by applying the $p$-adic Volkenborn integral and the Laplace transform, we derive some new identities for the generalized $\lambda $-Stirling type numbers of the second kind, the generalized array type polynomials and the generalized Eulerian type polynomials. We also give many applications related to this class of polynomials and numbers. \end{quotation}
\noindent \textbf{2010 Mathematics Subject Classification.} 12D10, 11B68, 11S40, 11S80, 26C05, 26C10, 30B40, 30C15.
\noindent \textbf{Key Words.} Bernoulli polynomials; Euler polynomials; Apostol Bernoulli polynomials; generalized Frobenius Euler polynomials; Normalized Polynomials; Array polynomials; Stirling numbers of the second kind; $p$-adic Volkenborn integral; generating function; functional equation; Laplace transform.
\section{Introduction, Definitions and Preliminaries}
Throughout this paper, we use the following standard notations:
$\mathbb{N}=\{1,2,3,$\ldots $\}$, $\mathbb{N}_{0}=\{0,1,2,3,$\ldots $\}= \mathbb{N}\cup \{0\}$ and $\mathbb{Z}^{-}=\{-1,-2,-3,$\ldots $\}$. Here, $ \mathbb{Z}$ denotes the set of integers, $\mathbb{R}$ denotes the set of real numbers and $\mathbb{C}$ denotes the set of complex numbers. We assume that $\ln (z)$ denotes the principal branch of the multi-valued function $ \ln (z)$ with the imaginary part $\Im \left( \ln (z)\right) $ constrained by \begin{equation*} -\pi <\Im \left( \ln (z)\right) \leq \pi . \end{equation*} Furthermore, \begin{equation*} 0^{n}=\left\{ \begin{array}{cc} 1 & n=0 \\ & \\ 0 & n\in \mathbb{N}, \end{array} \right. \end{equation*} \begin{equation*} \left( \begin{array}{c} x \\ v \end{array} \right) =\frac{x(x-1)\cdots (x-v+1)}{v!} \end{equation*} and \begin{equation*} \left\{ z\right\} _{0}=1\text{ and }\left\{ z\right\} _{j}=\dprod\limits_{d=0}^{j-1}(z-d), \end{equation*} where $j\in \mathbb{N}$ and $z\in \mathbb{C}$ cf. (\cite{Comtet}, \cite {LuoSrivatava2010}).
The generating functions have various applications in many branches of Mathematics and Mathematical Physics. These functions are specified by algebraic or differential relations, globally referred to as \textit{functional equations}. The functional equations arise in well-defined combinatorial contexts and they lead systematically to well-defined classes of functions (see, for details, \cite{Flajolet}). Although, in the literature, one can find extensive investigations related to the generating functions for the Bernoulli, Euler and Genocchi numbers and polynomials and also their generalizations, the $\lambda $-Stirling numbers of the second kind, the array polynomials and the Eulerian polynomials, related to nonnegative real parameters, have not been studied yet. Therefore, Section 2, Section 3 and Section 4 of this paper deal with new classes of generating functions which are related to the generalized $\lambda $-Stirling type numbers of the second kind, the generalized array type polynomials and the generalized Eulerian type polynomials, respectively. By using these generating functions, we derive many functional equations and differential equations. By using these equations, we investigate and introduce fundamental properties and many new identities for the generalized $\lambda $-Stirling type numbers of the second kind, the generalized array type polynomials and the generalized Eulerian type polynomials and numbers. We also derive multiplication formulas and recurrence relations for these numbers and polynomials.
The remainder of this study is organized as follows:
In section 5, we derive new identities related to the generalized Bernoulli polynomials, the generalized Eulerian type polynomials, generalized $\lambda $-Stirling type numbers and the generalized array polynomials.
In section 6, we give relations between generalized Bernoulli polynomials and generalized array polynomials.
In section 7, we give an application of the Laplace transform to the generating functions for the generalized Bernoulli polynomials and the generalized array type polynomials.
In section 8, by using the bosonic and the fermionic $p$-adic integral on $ \mathbb{Z}_{p}$, we find some new identities related to the Bernoulli polynomials, the generalized Eulerian type polynomials and Stirling numbers.
\section{Generating Function for generalized $\protect\lambda $-Stirling type numbers of the second kind}
The Stirling numbers are used in combinatorics, in number theory, in discrete probability distributions for finding higher order moments, etc. The Stirling number of the second kind, denoted by $S(n,k)$, is the number of ways to partition a set of $n$ objects into $k$ groups. These numbers occur in combinatorics and in the theory of partitions.
In this section, we construct a new generating function, related to nonnegative real parameters, for the generalized $\lambda $-Stirling type numbers of the second kind. We derive some elementary properties including recurrence relations of these numbers. The following definition provides a natural generalization and unification of the $\lambda $-Stirling numbers of the second kind:
\begin{definition} Let $a$,$~b\in \mathbb{R}^{+}$ ($a\neq b$), $\lambda \in \mathbb{C}$ and $ v\in \mathbb{N}_{0}$. The generalized $\lambda $-Stirling type numbers of the second kind $\mathcal{S}(n,v;a,b;\lambda )$\ are defined by means of the following generating function: \begin{equation} f_{S,v}(t;a,b;\lambda )=\frac{\left( \lambda b^{t}-a^{t}\right) ^{v}}{v!} =\sum_{n=0}^{\infty }\mathcal{S}(n,v;a,b;\lambda )\frac{t^{n}}{n!}. \label{s1} \end{equation} \end{definition}
\begin{remark} By setting $a=1$ and $b=e$ in (\ref{s1}), we have the $\lambda $-Stirling numbers of the second kind \begin{equation*} \mathcal{S}(n,v;1,e;\lambda )=S(n,v;\lambda ) \end{equation*} which are defined by means of the following generating function: \begin{equation*} \frac{\left( \lambda e^{t}-1\right) ^{v}}{v!}=\sum_{n=0}^{\infty }S(n,v;\lambda )\frac{t^{n}}{n!}, \end{equation*} cf. (\cite{LuoSrivatava2010}, \cite{Srivastava2011}). Substituting $\lambda =1$ into above equation, we have the Stirling numbers of the second kind \begin{equation*} S(n,v;1)=S(n,v), \end{equation*} cf. (\cite{Comtet}, \cite{LuoSrivatava2010}, \cite{Srivastava2011}). These numbers have the following well known properties: \begin{equation*} S(n,0)=\delta _{n,0}, \end{equation*} \begin{equation*} S(n,1)=S(n,n)=1 \end{equation*} and \begin{equation*} S(n,n-1)=\left( \begin{array}{c} n \\ 2 \end{array} \right) , \end{equation*} where $\delta _{n,0}$ denotes the Kronecker symbol (see \cite{Comtet}, \cite {LuoSrivatava2010}, \cite{Srivastava2011}). \end{remark}
By using (\ref{s1}), we obtain the following theorem:
\begin{theorem} \label{Theorem STnumber} \begin{equation} \mathcal{S}(n,v;a,b;\lambda )=\frac{1}{v!}\sum_{j=0}^{v}(-1)^{j}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{v-j}\left( j\ln a+(v-j)\ln b\right) ^{n} \label{as1} \end{equation} and \begin{equation} \mathcal{S}(n,v;a,b;\lambda )=\frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{j}\left( j\ln b+(v-j)\ln a\right) ^{n}. \label{as1a} \end{equation} \end{theorem}
\begin{proof} By using (\ref{s1}) and the binomial theorem, we can easily arrive at the desired results. \end{proof}
By using the formula (\ref{as1}), we can compute some values of the numbers $ \mathcal{S}(n,v;a,b;\lambda )$ as follows: \begin{equation*} \mathcal{S}(0,0;a,b;\lambda )=1, \end{equation*} \begin{equation*} \mathcal{S}(1,0;a,b;\lambda )=0, \end{equation*} \begin{equation*} \mathcal{S}(1,1;a,b;\lambda )=\ln \left( \frac{b^{\lambda }}{a}\right) , \end{equation*} \begin{equation*} \mathcal{S}(2,0;a,b;\lambda )=0, \end{equation*} \begin{equation*} \mathcal{S}(2,1;a,b;\lambda )=\lambda \left( \ln b\right) ^{2}-\left( \ln a\right) ^{2}, \end{equation*} \begin{equation*} \mathcal{S}(2,2;a,b;\lambda )=\frac{\lambda ^{2}}{2}\left( \ln b^{2}\right) ^{2}-\lambda \left( \ln \left( ab\right) \right) ^{2}+\frac{1}{2}\left( \ln a^{2}\right) ^{2}, \end{equation*}
\begin{equation*} \mathcal{S}(3,0;a,b;\lambda )=0, \end{equation*} \begin{equation*} \mathcal{S}(3,1;a,b;\lambda )=\lambda \left( \ln b\right) ^{3}-\left( \ln a\right) ^{3}, \end{equation*} \begin{equation*} \mathcal{S}(0,v;a,b;\lambda )=\frac{\left( \lambda -1\right) ^{v}}{v!}, \end{equation*} \begin{equation*} \mathcal{S}(n,0;a,b;\lambda )=\delta _{n,0} \end{equation*} and \begin{equation*} \mathcal{S}(n,1;a,b;\lambda )=\lambda \left( \ln b\right) ^{n}-\left( \ln a\right) ^{n}. \end{equation*}
\begin{remark} By setting $a=1$ and $b=e$ in the assertions (\ref{as1}) of Theorem \ref {Theorem STnumber}, we have the following result: \begin{equation*} S(n,v;\lambda )=\frac{1}{v!}\sum_{j=0}^{v}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{v-j}(-1)^{j}\left( v-j\right) ^{n}. \end{equation*} The above relation has been studied by Srivastava \cite{Srivastava2011} and Luo \cite{LuoSrivatava2010}. By setting $\lambda =1$ in the above equation, we have the following result: \begin{equation*} S(n,v)=\frac{1}{v!}\sum_{j=0}^{v}\left( \begin{array}{c} v \\ j \end{array} \right) (-1)^{j}\left( v-j\right) ^{n} \end{equation*} cf. (\cite{AgohDilcher}, \cite{cagic}, \cite{Carlitz}, \cite{Carlitz1953G}, \cite{Comtet}, \cite{T. Kim}, \cite{LuoSrivatava2010}, \cite{SimsekSpringer} , \cite{YsimsekStirling}, \cite{Srivastava2011}, \cite{SrivastawaGargeSC}). \end{remark}
By differentiating both sides of equation (\ref{s1}) with respect to the variable $t$, we obtain the following\textit{\ }differential equations: \begin{equation*} \frac{\partial }{\partial t}f_{S,v}(t;a,b;\lambda )=\left( \lambda (\ln b)b^{t}-(\ln a)a^{t}\right) f_{S,v-1}(t;a,b;\lambda ) \end{equation*} or \begin{equation} \frac{\partial }{\partial t}f_{S,v}(t;a,b;\lambda )=v\ln (b)f_{S,v}(t;a,b;\lambda )+\ln \left( \frac{b}{a}\right) a^{t}f_{S,v-1}(t;a,b;\lambda ). \label{s1a} \end{equation}
By using equations (\ref{s1}) and (\ref{s1a}), we obtain recurrence relations for the generalized $\lambda $-Stirling type numbers of the second kind by the following theorem:
\begin{theorem} \label{TE2} Let $n,v\in \mathbb{N}$. \begin{equation} \mathcal{S}(n,v;a,b;\lambda )=\sum_{j=0}^{n-1}\left( \begin{array}{c} n-1 \\ j \end{array} \right) \mathcal{S}(j,v-1;a,b;\lambda )\left( \lambda \left( \ln (b)\right) ^{n-j}-\left( \ln (a)\right) ^{n-j}\right) . \label{s4} \end{equation} or \begin{eqnarray*} \mathcal{S}(n,v;a,b;\lambda ) &=&v\ln (b)\mathcal{S}(n-1,v;a,b;\lambda ) \\ &&+\ln \left( \frac{b}{a}\right) \sum_{j=0}^{n-1}\left( \begin{array}{c} n-1 \\ j \end{array} \right) \mathcal{S}(j,v-1;a,b;\lambda )\left( \ln (a)\right) ^{n-1-j}. \end{eqnarray*} \end{theorem}
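As a quick illustration of (\ref{s4}) (the computation below is not needed in the sequel), let us take $n=2$ and $v=1$. Using the values $\mathcal{S}(0,0;a,b;\lambda )=1$ and $\mathcal{S}(1,0;a,b;\lambda )=0$ computed above, we find
\begin{equation*}
\mathcal{S}(2,1;a,b;\lambda )=\sum_{j=0}^{1}\left( 
\begin{array}{c}
1 \\ 
j
\end{array}
\right) \mathcal{S}(j,0;a,b;\lambda )\left( \lambda \left( \ln (b)\right)
^{2-j}-\left( \ln (a)\right) ^{2-j}\right) =\lambda \left( \ln b\right)
^{2}-\left( \ln a\right) ^{2},
\end{equation*}
which agrees with the value of $\mathcal{S}(2,1;a,b;\lambda )$ obtained from (\ref{as1}).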
\begin{remark} By setting $a=1$ and $b=e$, Theorem \ref{TE2} yields the corresponding results which are proven by Luo and Srivastava \cite[Theorem 11] {LuoSrivatava2010}. Substituting $a=\lambda =1$ and $b=e$ into Theorem \ref {TE2}, we obtain the following known results: \begin{equation*} S(n,v)=\sum_{j=0}^{n-1}\left( \begin{array}{c} n-1 \\ j \end{array} \right) S(j,v-1), \end{equation*} and \begin{equation*} S(n,v)=vS(n-1,v)+S(n-1,v-1), \end{equation*} cf. (\cite{AgohDilcher}, \cite{Carlitz1976}, \cite{Comtet}, \cite {LuoSrivatava2010}, \cite{SimsekSpringer}, \cite{YsimsekStirling}). \end{remark}
The generalized $\lambda $-Stirling type numbers of the second kind can also be defined by equation (\ref{s5}):
\begin{theorem} \label{T3}Let $k\in \mathbb{N}_{0}$ and $\lambda \in \mathbb{C}$. \begin{equation} \lambda ^{x}\left( \ln b^{x}\right) ^{m}=\sum_{l=0}^{m}\sum_{j=0}^{\infty }\left( \begin{array}{c} m \\ l \end{array} \right) \left( \begin{array}{c} x \\ j \end{array} \right) j!\mathcal{S}(l,j;a,b;\lambda )\left( \ln \left( a^{(x-j)}\right) \right) ^{m-l}. \label{s5} \end{equation} \end{theorem}
\begin{proof} By using (\ref{s1}), we get \begin{equation*} \left( \lambda b^{t}\right) ^{x}=\sum_{j=0}^{\infty }\left( \begin{array}{c} x \\ j \end{array} \right) j!\sum_{m=0}^{\infty }\mathcal{S}(m,j;a,b;\lambda )\frac{t^{m}}{m!} \sum_{n=0}^{\infty }(\ln a^{x-j})^{n}\frac{t^{n}}{n!}. \end{equation*} From the above equation, we obtain \begin{equation*} \lambda ^{x}\sum_{m=0}^{\infty }\left( \ln b\right) ^{m}\frac{t^{m}}{m!} =\sum_{m=0}^{\infty }\sum_{j=0}^{\infty }\left( \begin{array}{c} x \\ j \end{array} \right) j!\mathcal{S}(m,j;a,b;\lambda )\frac{t^{m}}{m!}\sum_{n=0}^{\infty }(\ln a^{x-j})^{n}\frac{t^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \lambda ^{x}\sum_{m=0}^{\infty }\left( \ln b\right) ^{m}\frac{t^{m}}{m!} =\sum_{m=0}^{\infty }\left( \sum_{l=0}^{m}\sum_{j=0}^{\infty }\left( \begin{array}{c} m \\ l \end{array} \right) \left( \begin{array}{c} x \\ j \end{array} \right) j!\mathcal{S}(l,j;a,b;\lambda )\left( \ln a^{(x-j)}\right) ^{m-l}\right) \frac{t^{m}}{m!}. \end{equation*} Comparing the coefficients of $\frac{t^{m}}{m!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} For $a=1$ and $b=e$, the formula (\ref{s5}) can easily be shown to be reduced to the following result which is given by Luo and Srivastava \cite[ Theorem 9]{LuoSrivatava2010}: \begin{equation*} \lambda ^{x}x^{n}=\sum_{l=0}^{\infty }\left( \begin{array}{c} x \\ l \end{array} \right) l!S(n,l;\lambda ), \end{equation*} where $n\in \mathbb{N}_{0}$ and $\lambda \in \mathbb{C}$. For $\lambda =1$, the above formula is reduced to \begin{equation*} x^{n}=\sum_{v=0}^{n}\left( \begin{array}{c} x \\ v \end{array} \right) v!S(n,v) \end{equation*} cf. (\cite{AgohDilcher}, \cite{Carlitz1976}, \cite{Comtet}, \cite{T. Kim}, \cite{LuoSrivatava2010}). \end{remark}
\section{Generalized array type polynomials}
By using the same motivation with the $\lambda $-Stirling type numbers of the second kind, we also construct a novel generating function, related to nonnegative real parameters, of the \textit{generalized array type polynomials}. We derive some elementary properties including recurrence relations of these polynomials. The following definition provides a natural generalization and unification of the array polynomials:
\begin{definition} Let $a$, $b\in \mathbb{R}^{+}$ ($a\neq b$), $x\in \mathbb{R}$, $\lambda \in \mathbb{C}$ and $v\in \mathbb{N}_{0}$. The generalized array type polynomials $\mathcal{S}_{v}^{n}(x;a,b;\lambda )$\ can be defined by \begin{equation} \mathcal{S}_{v}^{n}(x;a,b;\lambda )=\frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j} \left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{j}\left( \ln \left( a^{v-j}b^{x+j}\right) \right) ^{n}. \label{as2} \end{equation} \end{definition}
By using the formula (\ref{as2}), we can compute some values of the polynomials $\mathcal{S}_{v}^{n}(x;a,b;\lambda )$ as follows: \begin{equation*} \mathcal{S}_{0}^{n}(x;a,b;\lambda )=\left( \ln \left( b^{x}\right) \right) ^{n}, \end{equation*} \begin{equation*} \mathcal{S}_{v}^{0}(x;a,b;\lambda )=\frac{\left( \lambda -1\right) ^{v}}{v!} \end{equation*} and \begin{equation*} \mathcal{S}_{1}^{1}(x;a,b;\lambda )=-\ln (ab^{x})+\lambda \ln (b^{x+1}). \end{equation*}
\begin{remark} The polynomials $\mathcal{S}_{v}^{n}(x;a,b;\lambda )$ may be also called generalized $\lambda $-array type polynomials. By substituting $x=0$ into ( \ref{as2}), we arrive at (\ref{as1a}): \begin{equation*} \mathcal{S}_{v}^{n}(0;a,b;\lambda )=\mathcal{S}(n,v;a,b;\lambda ). \end{equation*} Setting $a=\lambda =1$ and $b=e$ in (\ref{as2}), we have \begin{equation*} S_{v}^{n}(x)=\frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j}\left( \begin{array}{c} v \\ j \end{array} \right) \left( x+j\right) ^{n}, \end{equation*} a result due to Chang and Ha \cite[Eq-(3.1)]{Chan}, Simsek \cite {SimsekSpringer}. It is easy to see that \begin{equation*} S_{0}^{0}(x)=S_{n}^{n}(x)=1, \end{equation*} \begin{equation*} S_{0}^{n}(x)=x^{n} \end{equation*} and for $v>n$, \begin{equation*} S_{v}^{n}(x)=0 \end{equation*} cf. \cite[Eq-(3.1)]{Chan}. \end{remark}
Generating functions for the polynomial $\mathcal{S}_{v}^{n}(x;a,b,c;\lambda )$ can be defined as follows:
\begin{definition} Let $a$, $b\in \mathbb{R}^{+}$ ($a\neq b$), $\lambda \in \mathbb{C}$ and $ v\in \mathbb{N}_{0}$. The generalized array type polynomials $\mathcal{S} _{v}^{n}(x;a,b;\lambda )$\ are defined by means of the following generating function: \begin{equation} g_{v}(x,t;a,b;\lambda )=\sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b; \lambda )\frac{t^{n}}{n!}. \label{ab1} \end{equation} \end{definition}
\begin{theorem} Let $a$, $b\in \mathbb{R}^{+}$, ($a\neq b$), $\lambda \in \mathbb{C}$ and $ v\in \mathbb{N}_{0}$. \begin{equation} g_{v}(x,t;a,b;\lambda )=\frac{1}{v!}\left( \lambda b^{t}-a^{t}\right) ^{v}b^{xt}. \label{ab0} \end{equation} \end{theorem}
\begin{proof} By substituting (\ref{as2}) into the right hand side of (\ref{ab1}), we obtain \begin{equation*} \sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\left( \frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{j}\left( \ln \left( a^{v-j}b^{x+j}\right) \right) ^{n}\right) \frac{t^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!}= \frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{j}\sum_{n=0}^{\infty }\left( \ln \left( a^{v-j}b^{x+j}\right) \right) ^{n}\frac{t^{n}}{n!}. \end{equation*} The right hand side of the above equation is the Taylor series for $e^{(\ln \left( a^{v-j}b^{x+j}\right) )t}$, thus we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!} =\left( \frac{1}{v!}\sum_{j=0}^{v}(-1)^{v-j}\left( \begin{array}{c} v \\ j \end{array} \right) \lambda ^{j}a^{\left( v-j\right) t}b^{jt}\right) b^{xt}. \end{equation*}
By using (\ref{s1}) and binomial theorem in the above equation, we arrive at the desired result. \end{proof}
\begin{remark} If we set $\lambda =1$ in (\ref{ab0}), we arrive at a new special case of the array polynomials given by \begin{equation*} f_{S,v}(t;a,b)b^{tx}=\sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b)\frac{ t^{n}}{n!}. \end{equation*} In the special case when \begin{equation*} a=\lambda =1\text{ and }b=e, \end{equation*} the generalized array polynomials $\mathcal{S}_{v}^{n}(x;a,b;\lambda )$ defined by (\ref{ab0}) would lead us at once to the classical array polynomials $S_{v}^{n}(x)$, which are defined by means of the following generating function: \begin{equation*} \frac{\left( e^{t}-1\right) ^{v}}{v!}e^{tx}=\sum_{n=0}^{\infty }S_{v}^{n}(x) \frac{t^{n}}{n!}, \end{equation*} which is the generating function for the array polynomials $ S_{v}^{n}(x)$ studied by Chang and Ha \cite{Chan}; see also cf. (\cite{cagic} , \cite{SimsekSpringer}). \end{remark}
The polynomials $\mathcal{S}_{v}^{n}(x;a,b;\lambda )$\ defined by (\ref{ab0} ) have many interesting properties which we give in this section.
We set \begin{equation} g_{v}(x,t;a,b;\lambda )=b^{xt}f_{S,v}(t;a,b;\lambda ). \label{1Sse} \end{equation}
\begin{theorem} The following formula holds true: \begin{equation} \mathcal{S}_{v}^{n}(x;a,b;\lambda )=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{S}(j,v;a,b;\lambda )\left( \ln b^{x}\right) ^{n-j}. \label{1Ssc} \end{equation} \end{theorem}
\begin{proof} By using (\ref{1Sse}), we obtain \begin{equation*} \sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\mathcal{S}(n,v;a,b;\lambda )\frac{t^{n}}{n!} \sum_{n=0}^{\infty }\left( \ln b^{x}\right) ^{n}\frac{t^{n}}{n!}. \end{equation*}
From the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{S}_{v}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{S}(j,v;a,b)\left( \ln b^{x}\right) ^{n-j}\right) \frac{t^{n} }{n!}. \end{equation*}
Comparing the coefficients of $t^{n}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} In the special case when $a=\lambda =1$ and $b=e$, equation (\ref{1Ssc}) is reduced to \begin{equation*} S_{v}^{n}(x)=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) x^{n-j}S(j,v) \end{equation*} cf. \cite[Theorem 2]{SimsekSpringer}. \end{remark}
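As an illustrative check only, taking $n=v=1$ in (\ref{1Ssc}) and using the values $\mathcal{S}(0,1;a,b;\lambda )=\lambda -1$ and $\mathcal{S}(1,1;a,b;\lambda )=\ln \left( \frac{b^{\lambda }}{a}\right) $ computed earlier, we get
\begin{equation*}
\mathcal{S}_{1}^{1}(x;a,b;\lambda )=\left( \lambda -1\right) \ln \left(
b^{x}\right) +\ln \left( \frac{b^{\lambda }}{a}\right) =-\ln \left(
ab^{x}\right) +\lambda \ln \left( b^{x+1}\right) ,
\end{equation*}
which coincides with the value obtained directly from (\ref{as2}).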
By differentiating $j$ times both sides of (\ref{ab0}) with respect to the variable $x$, we obtain the following differential equation: \begin{equation*} \frac{\partial ^{j}}{\partial x^{j}}g_{v}(x,t;a,b;\lambda )=t^{j}\left( \ln b\right) ^{j}g_{v}(x,t;a,b;\lambda ). \end{equation*}
From this equation, we arrive at the higher order derivatives of the array type polynomials by the following theorem:
\begin{theorem} \label{TEo2} Let $n$, $j\in \mathbb{N}$ with $j\leq n$. Then we have \begin{equation*} \frac{\partial ^{j}}{\partial x^{j}}\mathcal{S}_{v}^{n}(x;a,b;\lambda )=\left\{ n\right\} _{j}\left( \ln (b)\right) ^{j}\mathcal{S} _{v}^{n-j}(x;a,b;\lambda ). \end{equation*} \end{theorem}
\begin{remark} By setting $a=\lambda =j=1$ and $b=e$ in Theorem \ref{TEo2}, we have \begin{equation*} \frac{d}{dx}S_{v}^{n}(x)=nS_{v}^{n-1}(x) \end{equation*} cf. \cite{SimsekSpringer}. \end{remark}
From (\ref{ab0}), we get the following functional equation: \begin{equation} g_{v_{1}}(x_{1},t;a,b;\lambda )g_{v_{2}}(x_{2},t;a,b;\lambda )=\left( \begin{array}{c} v_{1}+v_{2} \\ v_{1} \end{array} \right) g_{v_{1}+v_{2}}(x_{1}+x_{2},t;a,b;\lambda ). \label{as3} \end{equation} From this functional equation, we obtain the following identity:
\begin{theorem} \begin{equation*} \left( \begin{array}{c} v_{1}+v_{2} \\ v_{1} \end{array} \right) \mathcal{S}_{v_{1}+v_{2}}^{n}(x_{1}+x_{2};a,b;\lambda )=\dsum\limits_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{S}_{v_{1}}^{j}(x_{1};a,b;\lambda )\mathcal{S} _{v_{2}}^{n-j}(x_{2};a,b;\lambda ). \end{equation*} \end{theorem}
\begin{proof} Combining (\ref{ab1}) and (\ref{as3}), we get \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathcal{S}_{v_{1}}^{n}(x_{1};a,b;\lambda )\frac{t^{n} }{n!}\sum_{n=0}^{\infty }\mathcal{S}_{v_{2}}^{n}(x_{2};a,b;\lambda )\frac{ t^{n}}{n!} \\ &=&\left( \begin{array}{c} v_{1}+v_{2} \\ v_{1} \end{array} \right) \sum_{n=0}^{\infty }\mathcal{S}_{v_{1}+v_{2}}^{n}(x_{1}+x_{2};a,b; \lambda )\frac{t^{n}}{n!}. \end{eqnarray*} Therefore \begin{eqnarray*} &&\sum_{n=0}^{\infty }\left( \dsum\limits_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{S}_{v_{1}}^{j}(x_{1};a,b;\lambda )\mathcal{S} _{v_{2}}^{n-j}(x_{2};a,b;\lambda )\right) \frac{t^{n}}{n!} \\ &=&\left( \begin{array}{c} v_{1}+v_{2} \\ v_{1} \end{array} \right) \sum_{n=0}^{\infty }\mathcal{S}_{v_{1}+v_{2}}^{n}(x_{1}+x_{2};a,b; \lambda )\frac{t^{n}}{n!}\text{.} \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\section{Generalized Eulerian type numbers and polynomials}
In this section, we provide generating functions, related to nonnegative real parameters, for the generalized Eulerian type polynomials and numbers, that is, the so called \textit{generalized Apostol type Frobenius Euler polynomials} \textit{and numbers}. We derive fundamental properties, recurrence relations and many new identities for these polynomials and numbers based on the generating functions, functional equations and differential equations.
These polynomials and numbers have many applications in many branches of Mathematics.
The following definition gives us a natural generalization of the Eulerian polynomials:
\begin{definition} Let $a,$ $b\in \mathbb{R}^{+}$ $(a\neq b),$ $x\in \mathbb{R},$ $\lambda \in \mathbb{C}$ and $u\in \mathbb{C\diagdown }\left\{ 1\right\} $. The\ generalized Eulerian type polynomials $\mathcal{H}_{n}(x;u;a,b,c;\lambda )$ are defined by means of the following generating function: \begin{equation} F_{\lambda }(t,x;u,a,b,c)=\frac{\left( a^{t}-u\right) c^{xt}}{\lambda b^{t}-u }=\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \label{4ge1} \end{equation} \end{definition}
By substituting $x=0$ into (\ref{4ge1}), we obtain \begin{equation*} \mathcal{H}_{n}(0;u;a,b,c;\lambda )=\mathcal{H}_{n}(u;a,b,c;\lambda ), \end{equation*} where $\mathcal{H}_{n}(u;a,b,c;\lambda )$ denotes \textit{generalized Eulerian type numbers}.
\begin{remark} Substituting $a=1$ into (\ref{4ge1}), we have \begin{equation*} \frac{\left( 1-u\right) c^{xt}}{\lambda b^{t}-u}=\sum_{n=0}^{\infty } \mathcal{H}_{n}(x;u;1,b,c;\lambda )\frac{t^{n}}{n!} \end{equation*} a result due to Kurt and Simsek \cite{burakSimsek}. In their special case when $\lambda =1$ and $b=c=e$, the \textit{generalized }Eulerian type polynomials $\mathcal{H}_{n}(x;u;1,b,c;\lambda )$\ are reduced to the Eulerian polynomials or Frobenius Euler polynomials which are defined by means of the following generating function: \begin{equation} \frac{\left( 1-u\right) e^{xt}}{e^{t}-u}=\sum_{n=0}^{\infty }H_{n}(x;u)\frac{ t^{n}}{n!}, \label{mt2} \end{equation} with, of course, $H_{n}(0;u)=H_{n}(u)$ denotes the so-called Eulerian numbers cf. (\cite{Carlitz}, \cite{Carlitz1952}, \cite{Carlitz1953G}, \cite {Carlitz1976}, \cite{KimmskimlcjangJIA}, \cite{YsimsekKim}, \cite {KimSimskJKM}, \cite{SimsekBKMS}, \cite{SimsekJNT}, \cite{srivas18}, \cite {Tsumura}). Substituting $u=-1$, into (\ref{mt2}), we have \begin{equation*} H_{n}(x;-1)=E_{n}(x) \end{equation*} where $E_{n}(x)$\ denotes Euler polynomials which are defined by means of the following generating function: \begin{equation} \frac{2e^{xt}}{e^{t}+1}=\sum_{n=0}^{\infty }E_{n}(x)\frac{t^{n}}{n!} \label{1Ssf} \end{equation} where $\left\vert t\right\vert <\pi $\ cf. \cite{AgohDilcher}-\cite{walum}. \end{remark}
The following elementary properties of the generalized Eulerian type polynomials and numbers are derived from their generating functions in (\ref {4ge1}).
\begin{theorem} (\textit{Recurrence relation} for the generalized Eulerian type numbers): For $n=0$, we have \begin{equation*} \mathcal{H}_{0}(u;a,b;\lambda )=\left\{ \begin{array}{c} \frac{1-u}{\lambda -u}\text{ if }a=1, \\ \\ \frac{u}{\lambda -u}\text{ if }a\neq 1. \end{array} \right. \end{equation*} For $n>0$, following the usual convention of symbolically replacing $\left( \mathcal{H}(u;a,b;\lambda )\right) ^{n}$ by $\mathcal{H}_{n}(u;a,b;\lambda )$ , we have \begin{equation*} \lambda \left( \ln b+\mathcal{H}(u;a,b;\lambda )\right) ^{n}-u\mathcal{H} _{n}(u;a,b;\lambda )=\left( \ln a\right) ^{n}. \end{equation*} \end{theorem}
\begin{proof} By using (\ref{4ge1}), we obtain \begin{equation*} \sum_{n=0}^{\infty }\left( \ln a\right) ^{n}\frac{t^{n}}{n!} -u=\sum_{n=0}^{\infty }\left( \lambda \left( \ln b+\mathcal{H}(u;a,b;\lambda )\right) ^{n}-u\mathcal{H}_{n}(u;a,b;\lambda )\right) \frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
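For example (purely as an illustration), let $a=1$, so that $\ln a=0$ and $\mathcal{H}_{0}(u;1,b;\lambda )=\frac{1-u}{\lambda -u}$. The case $n=1$ of the above recurrence relation then reads
\begin{equation*}
\lambda \left( \ln (b)\mathcal{H}_{0}(u;1,b;\lambda )+\mathcal{H}_{1}(u;1,b;\lambda )\right) -u\mathcal{H}_{1}(u;1,b;\lambda )=0,
\end{equation*}
so that
\begin{equation*}
\mathcal{H}_{1}(u;1,b;\lambda )=-\frac{\lambda \left( 1-u\right) \ln b}{\left( \lambda -u\right) ^{2}}.
\end{equation*}
For $\lambda =1$ and $b=e$, this reduces to the classical value $H_{1}(u)=\frac{1}{u-1}$.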
By differentiating both sides of equation (\ref{4ge1}) $j$ times with respect to the variable $x$, we obtain the following higher order differential equation: \begin{equation} \frac{\partial ^{j}}{\partial x^{j}}F_{\lambda }(t,x;u,a,b,c)=\left( \ln \left( c^{t}\right) \right) ^{j}F_{\lambda }(t,x;u,a,b,c). \label{F1} \end{equation} From this equation, we arrive at the higher order derivatives of the generalized Eulerian type polynomials by the following theorem:
\begin{theorem} \label{TEo3} Let $n$, $j\in \mathbb{N}$ with $j\leq n$. Then we have \begin{equation*} \frac{\partial ^{j}}{\partial x^{j}}\mathcal{H}_{n}(x;u;a,b,c;\lambda )=\left\{ n\right\} _{j}\left( \ln \left( c\right) \right) ^{j}\mathcal{H} _{n-j}(x;u;a,b,c;\lambda ). \end{equation*} \end{theorem}
\begin{proof} Combining (\ref{4ge1}) and (\ref{F1}), we have \begin{equation*} \sum_{n=0}^{\infty }\frac{\partial ^{j}}{\partial x^{j}}\mathcal{H} _{n}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}=\left( \ln c\right) ^{j}\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,c;\lambda )\frac{t^{n+j}}{n!} . \end{equation*} From the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\frac{\partial ^{j}}{\partial x^{j}}\mathcal{H} _{n}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}=\left( \ln c\right) ^{j}\sum_{n=0}^{\infty }\left\{ n\right\} _{j}\mathcal{H}_{n-j}(x;u;a,b,c; \lambda )\frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} Setting $j=1$ in Theorem \ref{TEo3}, we have \begin{equation*} \frac{\partial }{\partial x}\mathcal{H}_{n}(x;u;a,b,c;\lambda )=n\mathcal{H} _{n-1}(x;u;a,b,c;\lambda )\ln \left( c\right) . \end{equation*} In their special case when \begin{equation*} a=\lambda =1\text{ and }b=c=e, \end{equation*} Theorem \ref{TEo3}\ is reduced to the following well known result: \begin{equation*} \frac{\partial ^{j}}{\partial x^{j}}H_{n}(x;u)=\frac{n!}{(n-j)!}H_{n-j}(x;u) \end{equation*} cf. \cite[Eq-(3.5)]{Carlitz}. Substituting $j=1$ into the above equation, we have \begin{equation*} \frac{\partial }{\partial x}H_{n}(x;u)=nH_{n-1}(x;u) \end{equation*} cf. (\cite[Eq-(3.5)]{Carlitz}, \cite{burakSimsek}). \end{remark}
\begin{theorem} \label{t8} The following explicit representation formula holds true: \begin{eqnarray*} &&\left( x\ln c+\ln a\right) ^{n}-ux^{n}\left( \ln c\right) ^{n} \\ &=&\lambda \left( x\ln c+\ln b+\mathcal{H}(u;a,b;\lambda )\right) ^{n}-u\left( x\ln c+\mathcal{H}(u;a,b;\lambda )\right) ^{n}. \end{eqnarray*} \end{theorem}
\begin{proof} By using (\ref{4ge1}) and the \textit{umbral calculus convention}, we obtain \begin{equation*} \frac{a^{t}-u}{\lambda b^{t}-u}=e^{H\left( u;a,b;\lambda \right) t}. \end{equation*} From the above equation, we get \begin{eqnarray*} &&\sum_{n=0}^{\infty }\left( \left( \ln a+x\ln c\right) ^{n}-u\left( x\ln c\right) ^{n}\right) \frac{t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\left( \lambda \left( \mathcal{H}\left( u;a,b;\lambda \right) +\ln b+x\ln c\right) ^{n}-u\left( \mathcal{H}\left( u;a,b;\lambda \right) +x\ln c\right) ^{n}\right) \frac{t^{n}}{n!}. \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By substituting $a=\lambda =1$ and $b=c=e$ into Theorem \ref{t8}, we have \begin{equation} \left( 1-u\right) x^{n}=H_{n}(x+1;u)-uH_{n}(x;u) \label{at8} \end{equation} cf. (\cite[Eq-(3.3)]{Carlitz}, \cite{Tsumura}). By setting $u=-1$ in the above equation, we have \begin{equation*} 2x^{n}=E_{n}(x+1)+E_{n}(x) \end{equation*} a result due to Shiratani \cite{K. Shiratani}. By using (\ref{at8}), Carlitz \cite{Carlitz} studied the \textbf{Mirimanoff polynomial} $f_{n}(0,m)$ which is defined by \begin{eqnarray*} f_{n}(x,m) &=&\dsum\limits_{j=0}^{m-1}(x+j)^{n}u^{m-j-1} \\ &=&\frac{H_{n}(x+m;u)-u^{m}H_{n}(x;u)}{1-u}. \end{eqnarray*} By applying Theorem \ref{t8}, one may generalize the Mirimanoff polynomial. \end{remark}
\begin{theorem} The following explicit representation formula holds true: \begin{equation} \mathcal{H}_{n}(x;u;a,b,c;\lambda )=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( x\ln c\right) ^{n-j}\mathcal{H}_{j}(u;a,b,c;\lambda ). \label{Te} \end{equation} \end{theorem}
\begin{proof} By using (\ref{4ge1}), we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(u;a,b,c;\lambda )\frac{t^{n}}{n!} \sum_{n=0}^{\infty }\left( x\ln c\right) ^{n}\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} From the above equation, we obtain \begin{equation*} \sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( x\ln c\right) ^{n-j}\mathcal{H}_{j}(u;a,b,c;\lambda )\right) \frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,c;\lambda ) \frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} Substituting $a=\lambda =1$ and $b=c=e$ into (\ref{Te}), we have \begin{equation*} H_{n}(x;u)=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) x^{n-j}H_{j}(u) \end{equation*} cf. (\cite{Carlitz}, \cite{Carlitz1952}, \cite{Carlitz1953G}, \cite {Carlitz1976}, \cite{KimmskimlcjangJIA}, \cite{YsimsekKim}, \cite {KimSimskJKM}, \cite{burakSimsek}, \cite{SimsekBKMS}, \cite{SimsekJNT}, \cite {srivas18}, \cite{Tsumura}). \end{remark}
\begin{remark} From (\ref{Te}), we easily get \begin{equation*} \mathcal{H}_{n}(x;u;a,b,c;\lambda )=\left( \mathcal{H}(u;a,b,c;\lambda )+x\ln c\right) ^{n}, \end{equation*} where after expansion of the right member, $\mathcal{H}^{n}(u;a,b,c;\lambda ) $ is replaced by $\mathcal{H}_{n}(u;a,b,c;\lambda )$, we use this convention frequently throughout of this paper. \end{remark}
\begin{theorem} \begin{equation} \mathcal{H}_{n}(x+y;u;a,b,c;\lambda )=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( y\ln c\right) ^{n-j}\mathcal{H}_{j}(x;u;a,b,c;\lambda ). \label{1Ssa} \end{equation} \end{theorem}
\begin{proof} By using (\ref{4ge1}), we have \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x+y;u;a,b,c;\lambda )\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\left( y\ln c\right) ^{n}\frac{t^{n}}{n!} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x+y;u;a,b,c;\lambda )\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( y\ln c\right) ^{n-j}\mathcal{H}_{j}(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} In the special case when $a=\lambda =1$ and $b=c=e$, equation (\ref{1Ssa})\ is reduced to the following result: \begin{equation*} H_{n}(x+y;u)=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) y^{n-j}H_{j}(x;u) \end{equation*} cf. \cite[Eq-(3.6)]{Carlitz}. Substituting $u=-1$ into the above equation, we get the following well-known result: \begin{equation} E_{n}(x+y)=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) y^{n-j}E_{j}(x). \label{4Eq} \end{equation} \end{remark}
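As a simple check of (\ref{4Eq}), take $n=1$: since $E_{0}(x)=1$ and $E_{1}(x)=x-\frac{1}{2}$, we have
\begin{equation*}
\sum_{j=0}^{1}\binom{1}{j}y^{1-j}E_{j}(x)=yE_{0}(x)+E_{1}(x)=x+y-\frac{1}{2}=E_{1}(x+y).
\end{equation*}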
By using (\ref{4ge1}), we derive the following functional equation: \begin{equation} F_{\lambda ^{2}}(t,x;u^{2},a^{2},b^{2},c)c^{yt}=F_{\lambda }(t,x;u,a,b,c)F_{\lambda }(t,y;-u,a,b,c). \label{4EqY} \end{equation}
\begin{theorem} \begin{equation} \mathcal{H}_{n}(x+y;u^{2};a^{2},b^{2},c;\lambda ^{2})=\left( \mathcal{H}(x;u;a,b,c;\lambda )+\mathcal{H}(y;-u;a,b,c;\lambda )\right) ^{n}. \label{4Eqy1} \end{equation} \end{theorem}
\begin{proof} Combining (\ref{4EqY}) and (\ref{1Ssa}), we easily arrive at the desired result. \end{proof}
\begin{remark} In the special case when $a=\lambda =1$ and $b=c=e$, equation (\ref{4Eqy1})\ is reduced to the following result: \begin{equation*} 2^{n}H_{n}\left( \frac{x+y}{2};u^{2}\right) =\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) H_{j}(x;u)H_{n-j}(y;-u) \end{equation*} cf. \cite[Eq-(3.17)]{Carlitz}. \end{remark}
\begin{theorem} \label{TeoE} \begin{equation*} (-1)^{n}\mathcal{H}_{n}(1-x;u^{-1};a,b,c;\lambda ^{-1})=\lambda \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( \ln \left( \frac{b}{a}\right) \right) ^{n-j}\mathcal{H} _{j}(x-1,u;a,b,c;\lambda ). \end{equation*} \end{theorem}
\begin{proof} By using (\ref{4ge1}), we obtain \begin{equation*} \frac{\left( a^{-t}-u^{-1}\right) c^{-\left( 1-x\right) t}}{\lambda ^{-1}b^{-t}-u^{-1}}=\lambda \left( \frac{b}{a}\right) ^{t}\sum_{n=0}^{\infty }\mathcal{H}_{n}(x-1;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} From the above equation, we get \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathcal{H}_{n}(1-x;u^{-1};a,b,c;\lambda ^{-1})\frac{ (-1)^{n}t^{n}}{n!} \\ &=&\lambda \left( \sum_{n=0}^{\infty }\mathcal{H}_{n}(x-1;u;a,b,c;\lambda ) \frac{t^{n}}{n!}\right) \left( \sum_{n=0}^{\infty }\left( \ln \left( \frac{b }{a}\right) \right) ^{n}\frac{t^{n}}{n!}\right) . \end{eqnarray*} Therefore \begin{eqnarray*} &&\sum_{n=0}^{\infty }(-1)^{n}\mathcal{H}_{n}(1-x;u^{-1};a,b,c;\lambda ^{-1}) \frac{t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\left( \lambda \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( \ln \left( \frac{b}{a}\right) \right) ^{n-j}\mathcal{H} _{j}(x-1,u;a,b,c;\lambda )\right) \frac{t^{n}}{n!}. \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} In the special case when $a=\lambda =1$ and $b=c=e$, Theorem \ref{TeoE}\ is reduced to the following result: \begin{equation*} (-1)^{n}H_{n}(1-x;u^{-1})=H_{n}(x-1;u) \end{equation*} cf. \cite[Eq-(3.7)]{Carlitz}. Substituting $u=-1$ into the above equation, we get the following well-known result: \begin{equation*} (-1)^{n}E_{n}(1-x)=E_{n}(x) \end{equation*} cf. (\cite[Eq-(3.7)]{Carlitz}, \cite{http}, \cite{Parashar}, \cite{K. Shiratani}, \cite{Srivastava2011}). \end{remark}
\begin{theorem} \label{TeoE-1} \begin{equation*} \mathcal{H}_{n}\left( \frac{x+y}{2};u^{2};a,b,c;\lambda ^{2}\right) =\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \frac{\mathcal{H}_{j}(x;u;a,b,c;\lambda )\mathcal{H} _{n-j}(y;-u;a,b,c;\lambda )}{2^{n}}. \end{equation*} \end{theorem}
\begin{proof} By using (\ref{4ge1}), we get \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathcal{H}_{n}\left( \frac{x+y}{2};u^{2};a,b,c;\lambda ^{2}\right) \frac{2^{n}t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{H}_{j}(x;u;a,b,c;\lambda )\mathcal{H}_{n-j}(y;-u;a,b,c;\lambda )\right) \frac{t^{n}}{n!}. \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} When $a=\lambda =1$ and $b=c=e$, Theorem \ref{TeoE-1}\ is reduced to the following result: \begin{equation*} H_{n}\left( \frac{x+y}{2};u^{2}\right) =2^{-n}\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) H_{j}(x;u)H_{n-j}(y;-u), \end{equation*} cf. \cite[Eq-(3.17)]{Carlitz}. \end{remark}
\subsection{Multiplication formulas for normalized polynomials}
In this section, using generating functions, we derive \textit{ multiplication formulas} in terms of the normalized polynomials which are related to the generalized Eulerian type polynomials, the Bernoulli and the Euler polynomials.
\begin{theorem} \label{T11}(Multiplication formula) Let $y\in \mathbb{N}$. Then we have \begin{eqnarray} &&\mathcal{H}_{n}(yx;u;a,b,b;\lambda ) \label{mMF} \\ &=&y^{n}\dsum\limits_{k=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ k \end{array} \right) \frac{\lambda ^{j}\left( \ln a\right) ^{n-k}}{u^{j+1-y}-u^{j+1}} \mathcal{H}_{k}\left( x+\frac{j}{y};u^{y};a,b,b;\lambda ^{y}\right) \notag \\ &&\times \left( H_{n-k}\left( \frac{1}{y};u^{y}\right) -uH_{n-k}\left( u^{y}\right) \right) , \notag \end{eqnarray} where $H_{n}\left( x;u\right) $ and $H_{n}\left( u\right) $ denote the Eulerian polynomials and numbers, respectively. \end{theorem}
\begin{proof} Substituting $c=b$ into (\ref{4ge1}), we have \begin{equation} \dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\frac{\left( a^{t}-u\right) b^{xt}}{\lambda b^{t}-u}=\frac{\left( a^{t}-u\right) b^{xt}}{-u\left( 1-\frac{\lambda b^{t}}{u}\right) }. \label{MT1} \end{equation} By using the following finite geometric series \begin{equation*} \dsum\limits_{j=0}^{y-1}\left( \frac{\lambda b^{t}}{u}\right) ^{j}=\frac{1-\left( \frac{\lambda b^{t}}{u}\right) ^{y}}{1-\frac{\lambda b^{t}}{u}}, \end{equation*} on the right-hand side of (\ref{MT1}), we obtain \begin{equation*} \dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\frac{\left( a^{t}-u\right) b^{xt}}{-u\left( 1-\left( \frac{\lambda b^{t}}{u}\right) ^{y}\right) }\dsum\limits_{j=0}^{y-1}\left( \frac{\lambda b^{t}}{u}\right) ^{j}. \end{equation*} From this equation, we get \begin{equation*} \dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\frac{\left( a^{t}-u\right) }{\left( a^{yt}-u^{y}\right) }\dsum\limits_{j=0}^{y-1}\frac{\lambda ^{j}}{u^{j+1-y}}\frac{\left( a^{yt}-u^{y}\right) b^{yt\left( \frac{x+j}{y}\right) }}{\left( \lambda ^{y}b^{yt}-u^{y}\right) }. \end{equation*} Now by making use of the generating functions (\ref{4ge1}) and (\ref{mt2}) on the right-hand side of the above equation, we obtain \begin{eqnarray*} &&\dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!} \\ &=&\frac{1}{1-u^{y}}\dsum\limits_{j=0}^{y-1}\frac{\lambda ^{j}}{u^{j+1-y}}\left( \dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}\left( \frac{x+j}{y};u^{y};a,b,b;\lambda ^{y}\right) \frac{y^{n}t^{n}}{n!}\right) \\ &&\times \left( \dsum\limits_{n=0}^{\infty }\left( H_{n}\left( \frac{1}{y};u^{y}\right) -uH_{n}\left( u^{y}\right) \right) \frac{\left( y\ln a\right) ^{n}t^{n}}{n!}\right) . \end{eqnarray*} Therefore \begin{eqnarray*} &&\dsum\limits_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!} \\ &=&\dsum\limits_{n=0}^{\infty }\dsum\limits_{k=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ k \end{array} \right) \frac{y^{n}\lambda ^{j}\left( \ln a\right) ^{n-k}}{u^{j+1-y}-u^{j+1}}\mathcal{H}_{k}\left( \frac{x+j}{y};u^{y};a,b,b;\lambda ^{y}\right) \\ &&\times \left( H_{n-k}\left( \frac{1}{y};u^{y}\right) -uH_{n-k}\left( u^{y}\right) \right) \frac{t^{n}}{n!}. \end{eqnarray*} By equating the coefficients of $\frac{t^{n}}{n!}$\ on both sides, we get \begin{eqnarray*} &&\mathcal{H}_{n}(x;u;a,b,b;\lambda ) \\ &=&\dsum\limits_{k=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ k \end{array} \right) \frac{y^{n}\lambda ^{j}\left( \ln a\right) ^{n-k}}{u^{j+1-y}-u^{j+1}}\mathcal{H}_{k}\left( \frac{x+j}{y};u^{y};a,b,b;\lambda ^{y}\right) \\ &&\times \left( H_{n-k}\left( \frac{1}{y};u^{y}\right) -uH_{n-k}\left( u^{y}\right) \right) . \end{eqnarray*} Finally, by replacing $x$ by $yx$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By substituting $a=1$ into Theorem \ref{T11}, only the term with $k=n$ survives and we obtain \begin{equation} \mathcal{H}_{n}(yx;u;1,b,b;\lambda )=y^{n}u^{y-1}\frac{1-u}{1-u^{y}}\dsum\limits_{j=0}^{y-1}\frac{\lambda ^{j}}{u^{j}}\mathcal{H}_{n}\left( x+\frac{j}{y};u^{y};1,b,b;\lambda ^{y}\right) . \label{mmF} \end{equation} By substituting $b=e$ and $\lambda =1$ into the above equation, we arrive at the multiplication formula for the Eulerian polynomials \begin{equation} H_{n}(yx;u)=y^{n}u^{y-1}\frac{\left( 1-u\right) }{1-u^{y}}\dsum\limits_{j=0}^{y-1}\frac{1}{u^{j}}H_{n}\left( x+\frac{j}{y};u^{y}\right) , \label{MF-0} \end{equation} cf. (\cite{Carlitz1953G}, \cite[Eq-(3.12)]{Carlitz}). If $u=-1$, then the above equation reduces to the well known multiplication formula for the Euler polynomials: when $y$ is an odd positive integer, we have \begin{equation} E_{n}(yx)=y^{n}\dsum\limits_{j=0}^{y-1}(-1)^{j}E_{n}\left( x+\frac{j}{y}\right) , \label{MF} \end{equation} where $E_{n}(x)$ denotes the Euler polynomials in the usual notation. When $y$ is an even positive integer, we have \begin{equation} E_{n-1}(yx)=-\frac{2y^{n-1}}{n}\dsum\limits_{j=0}^{y-1}(-1)^{j}B_{n}\left( x+\frac{j}{y}\right) , \label{MF2} \end{equation} where $B_{n}(x)$ and $E_{n}(x)$ denote the Bernoulli polynomials and Euler polynomials, respectively, cf. (\cite{Carlitz1952}, \cite{SrivastavaKurtSimsek}). \end{remark}
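For example, taking $y=3$ and $n=1$ in (\ref{MF}) gives $3\left( E_{1}(x)-E_{1}\left( x+\frac{1}{3}\right) +E_{1}\left( x+\frac{2}{3}\right) \right) =3x-\frac{1}{2}=E_{1}(3x)$, while taking $y=2$ and $n=2$ in (\ref{MF2}) gives
\begin{equation*}
-\frac{2\cdot 2}{2}\left( B_{2}(x)-B_{2}\left( x+\tfrac{1}{2}\right) \right) =-2\left( -x+\tfrac{1}{4}\right) =2x-\tfrac{1}{2}=E_{1}(2x),
\end{equation*}
since $E_{1}(x)=x-\frac{1}{2}$ and $B_{2}(x)=x^{2}-x+\frac{1}{6}$.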
To prove the multiplication formula of the generalized Apostol Bernoulli polynomials, we need the following generating function which is defined by Srivastava et al. \cite[pp. 254, Eq. (20)]{SrivastawaGargeSC}:
\begin{definition} \label{DefBER}Let $a,b,c\in \mathbb{R}^{+}$ with $a\neq b,$ $x\in \mathbb{R}$ and $n\in \mathbb{N}_{0}$. Then the generalized Bernoulli polynomials $ \mathfrak{B}_{n}^{(\alpha )}(x;\lambda ;a,b,c)$ of order $\alpha \in \mathbb{ C}$ are defined by means of the following generating functions: \begin{equation} f_{B}(x,a,b,c;\lambda ;\alpha )=\left( \frac{t}{\lambda b^{t}-a^{t}}\right) ^{\alpha }c^{xt}=\sum_{n=0}^{\infty }\mathfrak{B}_{n}^{(\alpha )}(x;\lambda ;a,b,c)\frac{t^{n}}{n!}, \label{1S} \end{equation} where \begin{equation*} \left\vert t\ln (\frac{a}{b})+\ln \lambda \right\vert <2\pi \end{equation*} and \begin{equation*} 1^{\alpha }=1. \end{equation*} \end{definition}
Observe that if we set $\lambda =1$ in (\ref{1S}), we have \begin{equation} \left( \frac{t}{b^{t}-a^{t}}\right) ^{\alpha }c^{xt}=\sum_{n=0}^{\infty }\mathfrak{B}_{n}^{(\alpha )}(x;a,b,c)\frac{t^{n}}{n!}. \label{9} \end{equation} If we set $x=0$ in (\ref{9}), we obtain \begin{equation} \left( \frac{t}{b^{t}-a^{t}}\right) ^{\alpha }=\sum_{n=0}^{\infty }\mathfrak{B}_{n}^{(\alpha )}(a,b)\frac{t^{n}}{n!}, \label{8} \end{equation} with, of course, $\mathfrak{B}_{n}^{(\alpha )}(0;a,b,c)=\mathfrak{B}_{n}^{(\alpha )}(a,b)$, cf. (\cite{luo14}-\cite{lou15}, \cite{KimSimskJKM}, \cite{YsimsekKim}, \cite{kurtSimsek}, \cite{Mali}, \cite{OzdenAML}, \cite{OzdenSrivastava}, \cite{srivas18}, \cite{srivastava11}, \cite{Srivastava2011}, \cite{SrivastawaGargeSC}). If we set $\alpha =1$ in (\ref{8}) and (\ref{9}), we have \begin{equation} \frac{t}{b^{t}-a^{t}}=\sum_{n=0}^{\infty }\mathfrak{B}_{n}(a,b)\frac{t^{n}}{n!} \label{4} \end{equation} and \begin{equation} \left( \frac{t}{b^{t}-a^{t}}\right) c^{xt}=\sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;a,b,c)\frac{t^{n}}{n!}, \label{5} \end{equation} which have been studied by Luo et al. \cite{luo14}-\cite{lou15}. Moreover, by substituting $a=1$, $b=c=e$ and $\alpha =1$ into (\ref{1S}), we arrive at the Apostol-Bernoulli polynomials $\mathcal{B}_{n}(x;\lambda )$, which are defined by means of the following generating function \begin{equation*} \left( \frac{t}{\lambda e^{t}-1}\right) e^{xt}=\sum_{n=0}^{\infty }\mathcal{B}_{n}(x;\lambda )\frac{t^{n}}{n!}. \end{equation*} These polynomials $\mathcal{B}_{n}(x;\lambda )$ have been introduced and investigated by many mathematicians cf. (\cite{apostol}, \cite{KimChiARXIV}, \cite{KimkimJang}, \cite{KimSimskJKM}, \cite{burakSimsek}, \cite{luo13}, \cite{OzdenSrivastava}, \cite{SimsekJMAA}, \cite{srivas18}). By substituting $a=\lambda =1$ and $b=c=e$ into (\ref{4}) and (\ref{5}), we see that $\mathfrak{B}_{n}(a,b)$ and $\mathfrak{B}_{n}(x;a,b,c)$ reduce to the classical Bernoulli numbers and the classical Bernoulli polynomials, respectively, cf. \cite{AgohDilcher}-\cite{walum}.
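For instance, for $\lambda \neq 1$ the first two Apostol-Bernoulli polynomials can be read off directly from this generating function:
\begin{equation*}
\mathcal{B}_{0}(x;\lambda )=0,\qquad \mathcal{B}_{1}(x;\lambda )=\frac{1}{\lambda -1},
\end{equation*}
which already differ from the classical case $\lambda =1$, where $B_{0}(x)=1$ and $B_{1}(x)=x-\frac{1}{2}$.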
\begin{remark} The constraints on $\left\vert t\right\vert $, which we have used in Definition \ref{DefBER} and (\ref{1Ssf}), are meant to ensure that the generating functions in (\ref{9}) and (\ref{1Ssf}) are analytic throughout the prescribed open disks in the complex $t$-plane (centred at the origin $t=0$) in order to have the corresponding convergent Taylor-Maclaurin series expansions (about the origin $t=0$) occurring on their right-hand sides (with a positive radius of convergence) cf. \cite{srivastava11}. \end{remark}
\begin{theorem} \label{T11a}Let $y\in \mathbb{N}$. Then we have \begin{equation*} \mathfrak{B}_{n}(yx;\lambda ;a,b,b)=\dsum\limits_{l=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ l \end{array} \right) \lambda ^{j}y^{l-1}\left( (y-1-j)\ln a\right) ^{n-l}\mathfrak{B} _{l}\left( x+\frac{j}{y};\lambda ^{y};a,b,b\right) . \end{equation*} \end{theorem}
\begin{proof} Substituting $c=b$ and $\alpha =1$ into (\ref{1S}), we get \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;\lambda ;a,b,b)\frac{t^{n}}{n!}=\frac{1}{y}\sum_{j=0}^{y-1}\lambda ^{j}\frac{yt}{\lambda ^{y}b^{yt}-a^{yt}}b^{\left( \frac{x+j}{y}\right) yt}a^{t(y-j-1)}. \end{equation*} Therefore \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;\lambda ;a,b,b)\frac{t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\dsum\limits_{l=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ l \end{array} \right) \lambda ^{j}\left( (y-1-j)\ln a\right) ^{n-l}y^{l-1}\mathfrak{B}_{l}\left( \frac{x+j}{y};\lambda ^{y};a,b,b\right) \frac{t^{n}}{n!}. \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we get \begin{equation*} \mathfrak{B}_{n}(x;\lambda ;a,b,b)=\dsum\limits_{l=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \begin{array}{c} n \\ l \end{array} \right) \lambda ^{j}\left( (y-1-j)\ln a\right) ^{n-l}y^{l-1}\mathfrak{B}_{l}\left( \frac{x+j}{y};\lambda ^{y};a,b,b\right) . \end{equation*} By replacing $x$ by $yx$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} Kurt and Simsek \cite{kurtSimsek} proved a multiplication formula for the generalized Bernoulli polynomials of order $\alpha $. By substituting $a=\lambda =1$ and $b=c=e$ into Theorem \ref{T11a}, we obtain the multiplication formula for the Bernoulli polynomials given by \begin{equation} B_{n}(yx)=y^{n-1}\dsum\limits_{j=0}^{y-1}B_{n}\left( x+\frac{j}{y}\right) , \label{MF1} \end{equation} cf. (\cite{apostol}, \cite{Carlitz1952}, \cite{Carlitz}, \cite{DERE}, \cite{KimSimskJKM}, \cite{lou15}, \cite{luo13}, \cite{luo2003}, \cite{Mali}, \cite{LuoSrivatava2010}, \cite{srivas18}, \cite{SrivastawaGargeSC}). \end{remark}
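As a quick check of (\ref{MF1}), take $n=y=2$: since $B_{2}(x)=x^{2}-x+\frac{1}{6}$ and $B_{2}\left( x+\frac{1}{2}\right) =x^{2}-\frac{1}{12}$, we get
\begin{equation*}
2\left( B_{2}(x)+B_{2}\left( x+\tfrac{1}{2}\right) \right) =4x^{2}-2x+\tfrac{1}{6}=B_{2}(2x).
\end{equation*}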
If $f$ is a \textit{normalized} polynomial such that it satisfies the formula \begin{equation} f_{n}(yx)=y^{n-1}\dsum\limits_{j=0}^{y-1}f_{n}\left( x+\frac{j}{y}\right) , \label{w} \end{equation} then $f_{n}$ is the Bernoulli polynomial of degree $n$, due to (\ref{MF1}) cf. (\cite{Carlitz1952}, \cite{walum}). According to Nielsen \cite{Carlitz1952}, if a normalized polynomial satisfies (\ref{MF1}) for a single value of $y>1$, then it is identical with $B_{n}(x)$. Consequently, if a normalized polynomial satisfies (\ref{mmF}) for a single value of $y>1$, then it is identical with $\mathcal{H}_{n}(x;u;1,b,b;\lambda )$. The formula (\ref{MF2}) is different. Therefore, when $y$ is an even positive integer, Carlitz \cite[Eq-(1.4)]{Carlitz1952} considered the following equation: \begin{equation*} g_{n-1}(yx)=-\frac{2y^{n-1}}{n}\dsum\limits_{j=0}^{y-1}(-1)^{j}f_{n}\left( x+\frac{j}{y}\right) , \end{equation*} where $g_{n-1}(x)$\ and $f_{n}(x)$ denote the normalized polynomials of degree $n-1$ and $n$, respectively. More precisely, as Carlitz has pointed out \cite[p. 184]{Carlitz1952}, if $y$ is a fixed even integer $\geq 2$ and $f_{n}(x)$\ is an arbitrary normalized polynomial of degree $n$, then (\ref{MF2}) determines $g_{n-1}(x)$\ as a normalized polynomial of degree $n-1$. Thus, for a single value of $y$, (\ref{MF2}) does not suffice to determine the normalized polynomials $g_{n-1}(x)$\ and $f_{n}(x)$.
\begin{remark} According to (\ref{w}), the set of normalized polynomials $\left\{ f_{n}(x)\right\} $ is an Appell set, cf. \cite{Carlitz1952}. \end{remark}
We now modify (\ref{4ge1}) as follows: \begin{equation} \frac{\left( a^{t}-\xi \right) c^{xt}}{\lambda b^{t}-\xi } =\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;\xi ;a,b,c;\lambda )\frac{t^{n}}{n!} \label{M1b} \end{equation} where \begin{equation*} \xi ^{r}=1,\text{ }\xi \neq 1. \end{equation*}
The polynomial $\mathcal{H}_{n}(x;\xi ;a,b,c;\lambda )$ is a normalized polynomial of degree $n$ in $x$. The polynomial $\mathcal{H}_{n}(x;\xi ;1,e,e;1)$ may be called the Eulerian polynomial with parameter $\xi $. In particular we note that \begin{equation*} \mathcal{H}_{n}(x;-1;1,e,e;1)=E_{n}(x), \end{equation*} since for $a=\lambda =1$, $b=c=e$, equation (\ref{M1b}) reduces to the generating function for the Euler polynomials.
By means of equation (\ref{mMF}), it is easy to verify the following multiplication formulas:
If $y$ is an odd positive integer, then we have \begin{eqnarray} \mathcal{H}_{n-1}(yx;\xi ;a,b,b;\lambda ) &=&\frac{y^{n-1}}{n}\dsum\limits_{j=0}^{y-1}\left( \frac{\lambda }{\xi }\right) ^{j}\mathfrak{B}_{n}\left( x+\frac{j}{y};b;\lambda ^{y}\right) \label{M1c} \\ &&-\frac{1}{\xi n}\dsum\limits_{k=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \frac{\lambda }{\xi }\right) ^{j}y^{k-1}(\ln a)^{n-k}\mathfrak{B}_{k}\left( x+\frac{j}{y};b;\lambda ^{y}\right) , \notag \end{eqnarray} where \begin{equation*} \mathcal{H}_{n}\left( x+\frac{j}{y};\xi ^{y};1,b,b;\lambda ^{y}\right) =\mathfrak{B}_{n}\left( x+\frac{j}{y};b;\lambda ^{y}\right) . \end{equation*} If $y$ is an even positive integer, then we have \begin{eqnarray} \mathcal{H}_{n}(yx;\xi ;a,b,b;\lambda ) &=&\frac{y^{n}}{2}\dsum\limits_{j=0}^{y-1}\left( \frac{\lambda }{\xi }\right) ^{j}\mathfrak{E}_{n}\left( x+\frac{j}{y};b;\lambda ^{y}\right) \label{M1d} \\ &&-\frac{1}{2\xi }\dsum\limits_{k=0}^{n}\dsum\limits_{j=0}^{y-1}\left( \frac{\lambda }{\xi }\right) ^{j}y^{k}(\ln a)^{n-k}\mathfrak{E}_{k}\left( x+\frac{j}{y};b;\lambda ^{y}\right) , \notag \end{eqnarray} where \begin{equation*} \mathcal{H}_{n}\left( x+\frac{j}{y};\xi ^{y};1,b,b;\lambda ^{y}\right) =\mathfrak{E}_{n}\left( x+\frac{j}{y};b;\lambda ^{y}\right) , \end{equation*} and where $\mathfrak{E}_{n}(x;a,b,c)$ denotes the generalized Euler polynomials, which are defined by means of the following generating function: \begin{equation*} \left( \frac{2}{b^{t}+a^{t}}\right) c^{xt}=\sum_{n=0}^{\infty }\mathfrak{E}_{n}(x;a,b,c)\frac{t^{n}}{n!} \end{equation*} cf. (\cite{luo14}-\cite{lou15}, \cite{Kim Jang}, \cite{kurtSimsek}, \cite{OzdenSrivastava}, \cite{srivas18}, \cite{srivastava11}, \cite{Srivastava2011}, \cite{SrivastawaGargeSC}).
\begin{remark} If we set $a=\lambda =1$ and $b=e$, then (\ref{M1c}) and (\ref{M1d}) reduce to the following multiplication formulas, respectively: \begin{equation*} H_{n-1}(yx;\xi )=\frac{y^{n-1}}{n}\left( 1-\frac{1}{\xi }\right) \dsum\limits_{j=0}^{y-1}\frac{1}{\xi ^{j}}B_{n}\left( x+\frac{j}{y}\right) \end{equation*} cf. \cite[Eq. (3.3)]{Carlitz1952} and \begin{equation*} H_{n}(yx;\xi )=\frac{y^{n}}{2}\left( 1-\frac{1}{\xi }\right) \dsum\limits_{j=0}^{y-1}\frac{1}{\xi ^{j}}E_{n}\left( x+\frac{j}{y}\right) . \end{equation*} Let $f_{n}(x)$ and $g_{n}(x)$ be normalized polynomials in the usual way. Carlitz \cite[Eq. (3.4)]{Carlitz1952} considered the following relation: \begin{equation*} g_{n-1}(yx)=\frac{(1-\rho )y^{n-1}}{n}\dsum\limits_{j=0}^{y-1}\rho ^{j}f_{n}\left( x+\frac{j}{y}\right) , \end{equation*} where $\rho $ is a fixed primitive $r$th root of unity, $r>1$, $y\equiv 0\pmod{r}$. \end{remark}
\begin{remark} If we set $a=\lambda =1$, $b=c=e$ and $\xi =-1$, then (\ref{M1c}) and (\ref {M1d}) reduce to (\ref{MF2}) and (\ref{MF}). \end{remark}
\begin{remark} Walum \cite{walum} defined a multiplication formula for periodic functions as follows: \begin{equation} \vartheta (y)f(yx)=\dsum\limits_{j(y)}f\left( x+\frac{j}{y}\right) , \label{w1} \end{equation} where $f$ is periodic with period $1$ and $j(y)$ under the summation sign indicates that $j$ runs through a complete system of residues modulo $y$.
Formulas (\ref{w}), (\ref{w1}) and other multiplication formulas related to periodic functions and normalized polynomials occur in Franel's formula, in the theory of the Dedekind sums and Hardy-Berndt sums, in the theory of the zeta functions and $L$-functions and in the theory of periodic bounded variation, cf. (\cite{Berndt}, \cite{berndt2}, \cite{walum}). \end{remark}
\subsection{Generalized Eulerian type numbers and polynomials attached to Dirichlet character}
In this section, we construct a generating function, involving nonnegative real parameters, for the generalized Eulerian type numbers and polynomials attached to a Dirichlet character. We also give some properties of these polynomials and numbers.
\begin{definition} Let $\chi $ be the Dirichlet character of conductor $f\in \mathbb{N}$. Let $ x\in \mathbb{R}$, $a,b\in \mathbb{R}^{+},$ $(a\neq b),$ $\lambda \in \mathbb{ C}$ and $u\in \mathbb{C\diagdown }\left\{ 1\right\} $. The\ generalized Eulerian type polynomials $\mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )$ are defined by means of the following generating function: \begin{equation} \mathcal{F}_{\lambda ,\chi }(t,x;u,a,b,c)=\dsum\limits_{j=0}^{f-1}\frac{ \left( a^{ft}-u^{f}\right) \chi (j)u^{f-j-1}c^{\left( \frac{x+j}{f}\right) ft}}{\lambda ^{f}b^{ft}-u^{f}}=\sum_{n=0}^{\infty }\mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )\frac{t^{n}}{n!} \label{4ge2} \end{equation}
with, of course, \begin{equation*} \mathcal{H}_{n,\chi }(0;u;a,b,c;\lambda )=\mathcal{H}_{n,\chi }(u;a,b,c;\lambda ), \end{equation*} where $\mathcal{H}_{n,\chi }(u;a,b,c;\lambda )$ denotes the generalized Eulerian type numbers.
\begin{remark} In the special case when $a=\lambda =1$ and $b=c=e$, the generalized Eulerian type polynomials $\mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )$ reduce to the generalized Frobenius-Euler polynomials $H_{n,\chi }(x;u)$ attached to the Dirichlet character $\chi $, which are defined by means of the following generating function: \begin{equation*} \dsum\limits_{j=0}^{f-1}\frac{\left( 1-u^{f}\right) \chi (j)u^{f-j-1}e^{\left( \frac{x+j}{f}\right) ft}}{e^{ft}-u^{f}}=\sum_{n=0}^{\infty }H_{n,\chi }(x;u)\frac{t^{n}}{n!}, \end{equation*} cf. (\cite{Tsumura}, \cite{KimSimskJKM}, \cite{YsimsekKim}, \cite{SimsekBKMS}, \cite{SimsekJNT}, \cite{srivas18}). Substituting $u=-1$ into the above equation, we have the generating function of the generalized Euler polynomials attached to a Dirichlet character with odd conductor: \begin{equation*} 2\dsum\limits_{j=0}^{f-1}\frac{\chi (j)(-1)^{j}e^{\left( \frac{x+j}{f}\right) ft}}{e^{ft}+1}=\sum_{n=0}^{\infty }E_{n,\chi }(x)\frac{t^{n}}{n!}, \end{equation*} cf. (\cite{Tsumura}, \cite{SimsekBKMS}, \cite{SimsekJNT}, \cite{srivas18}). \end{remark}
Combining (\ref{4ge1}) and (\ref{4ge2}), we obtain the following functional equation: \begin{equation*} \mathcal{F}_{\lambda ,\chi }(t,x;u,a,b,c)=\dsum\limits_{j=0}^{f-1}\chi (j)u^{f-j-1}F_{\lambda ^{f}}(ft,\frac{x+j}{f};u^{f},a,b,c). \end{equation*}
By using the above functional equation we arrive at the following Theorem:
\begin{theorem} \begin{equation*} \mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )=f^{n}\dsum\limits_{j=0}^{f-1}\chi (j)u^{f-j-1}\mathcal{H}_{n}(\frac{x+j}{f};u^{f};a,b,c;\lambda ^{f}). \end{equation*} \end{theorem}
\begin{theorem} \begin{equation*} \mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( x\ln c\right) ^{n-j}\mathcal{H}_{j,\chi }(u;a,b,c;\lambda ). \end{equation*} \end{theorem}
\begin{proof} By using (\ref{4ge2}), we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n,\chi }(u;a,b,c;\lambda )\frac{t^{n}}{n!}\sum_{n=0}^{\infty }\left( x\ln c\right) ^{n}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} From the above equation, we obtain \begin{equation*} \sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \left( x\ln c\right) ^{n-j}\mathcal{H}_{j,\chi }(u;a,b,c;\lambda )\right) \frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\mathcal{H}_{n,\chi }(x;u;a,b,c;\lambda )\frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\subsection{Recurrence relation for the generalized Eulerian type polynomials}
In this section we differentiate (\ref{4ge1}) with respect to the variable $t$ in order to derive a recurrence relation for the generalized Eulerian type polynomials. In this way we obtain the following partial differential equation: \begin{eqnarray*} \frac{\partial }{\partial t}F_{\lambda }(t,x;u,a,b,c) &=&\left( \ln a\right) F_{\lambda }(t,x;u,a,b,c)+\frac{\ln a}{t}f_{B}(x,1,b,c;\frac{\lambda }{u};1) \\ &&-\frac{\ln \left( b^{\lambda }\right) }{ut}F_{\lambda }(t,x;u,a,b,c)f_{B}(1,1,b,b;\frac{\lambda }{u};1) \\ &&+\ln \left( c^{x}\right) F_{\lambda }(t,x;u,a,b,c). \end{eqnarray*} By using this equation, we obtain a recurrence relation for the generalized Eulerian type polynomials in the following theorem:
\begin{theorem} Let $n\in \mathbb{N}$. We have \begin{eqnarray*} n\mathcal{H}_{n}(x;u;a,b,c;\lambda ) &=&\left( \ln a\right) \left( n\mathcal{H}_{n-1}(x;u;a,b,c;\lambda )+\mathfrak{B}_{n}(x;\frac{\lambda }{u};1,b,c)\right) \\ &&-\frac{\lambda \ln b}{u}\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{H}_{j}(x;u;a,b,c;\lambda )\mathfrak{B}_{n-j}(1;\frac{\lambda }{u};1,b,b) \\ &&+\left( \ln \left( c^{nx}\right) \right) \mathcal{H}_{n-1}(x;u;a,b,c;\lambda ), \end{eqnarray*} where $\mathfrak{B}_{n}(x;\lambda ;a,b,c)$ denotes the generalized Bernoulli polynomials of order $1$. \end{theorem}
\begin{remark} When $a=\lambda =1$ and $b=c=e$, the recurrence relation for the generalized Eulerian type polynomials is reduced to \begin{equation*} nH_{n}(x;u)=nxH_{n-1}(x;u)-\frac{1}{u}\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) H_{j}(x;u)\mathcal{B}_{n-j}(1;\frac{1}{u}). \end{equation*} \end{remark}
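For $u\neq 1$ and $n=1$ this reduced recurrence can be checked directly: from the generating function $\frac{te^{xt}}{\lambda e^{t}-1}$ one has $\mathcal{B}_{0}(1;\frac{1}{u})=0$ and $\mathcal{B}_{1}(1;\frac{1}{u})=\frac{u}{1-u}$, so that the term with $j=1$ vanishes and
\begin{equation*}
xH_{0}(x;u)-\frac{1}{u}H_{0}(x;u)\mathcal{B}_{1}\left( 1;\frac{1}{u}\right) =x-\frac{1}{1-u}=H_{1}(x;u).
\end{equation*}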
\section{New identities involving families of polynomials}
In this section, we derive some new identities related to the generalized Bernoulli polynomials and numbers of order $1$, the Eulerian type polynomials and the generalized array type polynomials.
\begin{theorem} \label{T11b} The following relationship holds true: \begin{equation*} \mathfrak{B}_{n}(x;\lambda ;a,b,b)=\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{H}_{j}(x;\lambda ^{-1};a,\frac{b}{a},\frac{b}{a};1) \mathfrak{B}_{n-j}(x-1;\lambda ;1,a,a). \end{equation*} \end{theorem}
\begin{proof} \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;\lambda ;a,b,b)\frac{t^{n}}{n!} =\left( \frac{ta^{(x-1)t}}{\lambda a^{t}-1}\right) \left( \frac{\left( a^{t}-\lambda ^{-1}\right) \left( \frac{b}{a}\right) ^{xt}}{\left( \frac{b}{a }\right) ^{t}-\lambda ^{-1}}\right) . \end{equation*} Combining (\ref{1S}) and (\ref{4ge1}) with the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;\lambda ;a,b,b)\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\mathfrak{B}_{n}(x-1;\lambda ;1,a,a)\frac{t^{n}}{n!} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x;\lambda ^{-1};a,\frac{b}{a},\frac{b}{a} ;1)\frac{t^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(x;\lambda ;a,b,b)\frac{t^{n}}{n!} =\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \mathcal{H}_{j}(x;\lambda ^{-1};a,\frac{b}{a},\frac{b}{a};1) \mathfrak{B}_{n-j}(x-1;\lambda ;1,a,a)\right) \frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
A relationship between the generalized Bernoulli numbers and the Frobenius-Euler numbers is given by the following result:
\begin{theorem} The following relationship holds true: \begin{equation} \mathfrak{B}_{n}(\lambda ;a,b)=\frac{1}{\lambda -1}\sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) j\left( \ln a^{-1}\right) ^{n-j}\left( \ln \left( \frac{b}{a}\right) \right) ^{j-1}H_{j-1}\left( \lambda ^{-1}\right) . \label{1Ssd} \end{equation} \end{theorem}
\begin{proof} By using (\ref{1S}), we obtain \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(\lambda ;a,b)\frac{t^{n}}{n!}=\frac{ ta^{-t}}{\lambda -1}\left( \frac{1-\lambda ^{-1}}{e^{t\ln \left( \frac{b}{a} \right) }-\lambda ^{-1}}\right) . \end{equation*}
From the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(\lambda ;a,b)\frac{t^{n}}{n!}=\frac{1}{\lambda -1}\sum_{n=0}^{\infty }\left( \ln \left( \frac{1}{a}\right) \right) ^{n}\frac{t^{n}}{n!}\sum_{n=0}^{\infty }nH_{n-1}(\lambda ^{-1})\left( \ln \left( \frac{b}{a}\right) \right) ^{n-1}\frac{t^{n}}{n!}. \end{equation*} Therefore \begin{equation*} \sum_{n=0}^{\infty }\mathfrak{B}_{n}(\lambda ;a,b)\frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n}\left( \begin{array}{c} n \\ j \end{array} \right) \frac{j\left( \ln a^{-1}\right) ^{n-j}\left( \ln \left( \frac{b}{a}\right) \right) ^{j-1}}{\lambda -1}H_{j-1}\left( \lambda ^{-1}\right) \right) \frac{t^{n}}{n!}. \end{equation*}
Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By substituting $a=1$ and $b=e$ into (\ref{1Ssd}), we have \begin{equation*} \mathcal{B}_{n}(\lambda )=\frac{n}{\lambda -1}H_{n-1}(\lambda ^{-1}), \end{equation*} cf. \cite{KimSimskJKM}. \end{remark}
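For example, for $n=2$ and $\lambda \neq 1$, expanding $\frac{t}{\lambda e^{t}-1}$ gives $\mathcal{B}_{2}(\lambda )=-\frac{2\lambda }{(\lambda -1)^{2}}$, while $H_{1}(\lambda ^{-1})=\frac{1}{\lambda ^{-1}-1}=\frac{\lambda }{1-\lambda }$, so that indeed
\begin{equation*}
\frac{2}{\lambda -1}H_{1}(\lambda ^{-1})=-\frac{2\lambda }{(\lambda -1)^{2}}=\mathcal{B}_{2}(\lambda ).
\end{equation*}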
A relationship between the generalized Eulerian type polynomials and the generalized array type polynomials is given by the following theorem:
\begin{theorem} The following relationship holds true: \begin{equation*} \mathcal{H}_{n}(x;u;a,b,b;\lambda )=\sum_{k=0}^{\infty }\sum_{m=0}^{\infty }\sum_{d=0}^{n}\left( \begin{array}{c} m+k-1 \\ m \end{array} \right) \left( \begin{array}{c} n \\ d \end{array} \right) \frac{k!\left( \ln a^{m}\right) ^{n-d}}{u^{m+k}}\mathcal{S} _{k}^{d}(x;a,b;\lambda ). \end{equation*} \end{theorem}
\begin{proof} From (\ref{4ge1}), we obtain \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\sum_{k=0}^{\infty }\left( \frac{\lambda b^{t}-a^{t}}{u-a^{t}}\right) ^{k}b^{xt}. \end{equation*} Combining (\ref{ab0}) with the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\sum_{k=0}^{\infty }\frac{k!}{\left( u-a^{t}\right) ^{k}}\sum_{n=0}^{\infty }\mathcal{S}_{k}^{n}(x;a,b;\lambda )\frac{t^{n}}{n!}. \end{equation*} From the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\sum_{k=0}^{\infty }\frac{k!\mathcal{S}_{k}^{n}(x;a,b;\lambda )}{u^{k}\left( 1-\frac{a^{t}}{u}\right) ^{k}}\frac{t^{n}}{n!}. \end{equation*} Now we assume $\left\vert \frac{a^{t}}{u}\right\vert <1$ in the above equation; thus we get \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\sum_{k=0}^{\infty }\sum_{m=0}^{\infty }\left( \begin{array}{c} m+k-1 \\ m \end{array} \right) \frac{k!\mathcal{S}_{k}^{n}(x;a,b;\lambda )}{u^{k+m}}\frac{a^{mt}t^{n}}{n!}. \end{eqnarray*} Therefore \begin{eqnarray*} &&\sum_{n=0}^{\infty }\mathcal{H}_{n}(x;u;a,b,b;\lambda )\frac{t^{n}}{n!} \\ &=&\sum_{n=0}^{\infty }\left( \sum_{k=0}^{\infty }\sum_{m=0}^{\infty }\sum_{d=0}^{n}\left( \begin{array}{c} m+k-1 \\ m \end{array} \right) \left( \begin{array}{c} n \\ d \end{array} \right) \frac{k!\left( \ln a^{m}\right) ^{n-d}}{u^{m+k}}\mathcal{S}_{k}^{d}(x;a,b;\lambda )\right) \frac{t^{n}}{n!}. \end{eqnarray*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} Substituting $a=1$ into the above Theorem and noting that $d=n$, we deduce the following identity: \begin{equation*} \mathcal{H}_{n}(x;u;1,b,b;\lambda )=\sum_{k=0}^{\infty }\frac{k!}{(u-1)^{k}} \mathcal{S}_{k}^{n}(x;1,b;\lambda ) \end{equation*} which upon setting $\lambda =1$ and $b=e$, yields \begin{equation*} H_{n}(x;u)=\sum_{k=0}^{n}\frac{k!}{(u-1)^{k}}\mathcal{S}_{k}^{n}(x) \end{equation*} which was found by Chang and Ha \cite[Lemma 1]{Chan}. \end{remark}
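For $n=1$ the last identity can be verified directly: with the classical array polynomials normalized by $\frac{\left( e^{t}-1\right) ^{k}}{k!}e^{xt}=\sum_{n=0}^{\infty }\mathcal{S}_{k}^{n}(x)\frac{t^{n}}{n!}$, one has $\mathcal{S}_{0}^{1}(x)=x$ and $\mathcal{S}_{1}^{1}(x)=1$, so that
\begin{equation*}
\sum_{k=0}^{1}\frac{k!}{(u-1)^{k}}\mathcal{S}_{k}^{1}(x)=x+\frac{1}{u-1}=H_{1}(x;u).
\end{equation*}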
\section{Relationship between the generalized Bernoulli polynomials and the generalized array type polynomials}
In this section, we give some applications related to the generalized Bernoulli polynomials and the generalized array type polynomials. We derive many identities involving these polynomials. By using the same method as Agoh and Dilcher \cite{AgohDilcher}, we give the following theorem:
\begin{theorem} \label{L-AD} \begin{equation} \left( \frac{\lambda b^{t}-a^{t}}{t}\right) ^{k}b^{xt}=\dsum\limits_{n=0}^{\infty }\frac{\mathcal{S}_{k}^{n+k}(x;a,b; \lambda )}{\binom{n+k}{k}}\frac{t^{n}}{n!}. \label{w3} \end{equation} \end{theorem}
\begin{proof} Combining\ (\ref{ab0}) and (\ref{ab1}), we get \begin{eqnarray*} \left( \frac{\lambda b^{t}-a^{t}}{t}\right) ^{k}b^{xt} &=&\frac{1}{t^{k}}\dsum\limits_{n=0}^{\infty }\frac{k!}{n!}\mathcal{S}_{k}^{n}\left( x;a,b;\lambda \right) t^{n} \\ &=&\dsum\limits_{n=0}^{\infty }\frac{k!}{(n+k)!}\mathcal{S}_{k}^{n+k}\left( x;a,b;\lambda \right) t^{n}. \end{eqnarray*} From the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By setting $x=0$, $a=\lambda =1$ and $b=e$, Theorem \ref{L-AD} yields the corresponding result which is proven by Agoh and Dilcher \cite{AgohDilcher}. \end{remark}
\begin{theorem} \label{Tw3} \begin{eqnarray*} &&\left( n+k\right) \frac{\mathcal{S}_{k}^{n+k}(x;a,b;\lambda )}{\binom{n+k}{k}}-xn\left( \ln b\right) \frac{\mathcal{S}_{k}^{n+k-1}(x;a,b;\lambda )}{\binom{n+k-1}{k}} \\ &=&\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+k-1 \\ k-1 \end{array} \right) }\mathcal{S}_{k-1}^{j+k-1}(x;a,b;\lambda )\left( \ln \left( b^{\lambda k}\right) \left( \ln (b)\right) ^{n-j}-\ln \left( a^{k}\right) \left( \ln (a)\right) ^{n-j}\right) . \end{eqnarray*} \end{theorem}
\begin{proof} By differentiating both sides of equation (\ref{w3}) with respect to the variable $t$, after some elementary calculations, we get the formula asserted by Theorem \ref{Tw3}. \end{proof}
\begin{theorem} \label{Teo19} The following relationship holds true: \begin{equation*} \mathcal{S}_{k-1}^{n+k-1}(x+y;a,b;\lambda )=\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) \left( \begin{array}{c} n+k-1 \\ k-1 \end{array} \right) }{\left( \begin{array}{c} j+k \\ k \end{array} \right) }\mathcal{S}_{k}^{j+k}(x;a,b;\lambda )\mathfrak{B}_{n-j}(y;\lambda ;a,b,b). \end{equation*} \end{theorem}
\begin{proof} We set \begin{equation*} \left( \frac{\lambda b^{t}-a^{t}}{t}\right) ^{k}b^{xt}\left( \frac{tb^{yt}}{ \lambda b^{t}-a^{t}}\right) =\left( \frac{\lambda b^{t}-a^{t}}{t}\right) ^{k-1}b^{(x+y)t}. \end{equation*} Combining (\ref{w3}) and (\ref{5}) with the above equation, we get \begin{equation*} \sum_{n=0}^{\infty }\frac{\mathcal{S}_{k-1}^{n+k-1}(x+y;a,b;\lambda )}{ \binom{n+k-1}{k-1}}\frac{t^{n}}{n!}=\dsum\limits_{n=0}^{\infty }\mathfrak{B} _{n}(y;\lambda ;a,b,b)\frac{t^{n}}{n!}\sum_{n=0}^{\infty }\frac{\mathcal{S} _{k}^{n+k}(x;a,b;\lambda )}{\binom{n+k}{k}}\frac{t^{n}}{n!}. \end{equation*}
Therefore \begin{equation*} \sum_{n=0}^{\infty }\frac{\mathcal{S}_{k-1}^{n+k-1}(x+y;a,b;\lambda )}{ \binom{n+k-1}{k-1}}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty }\left( \sum_{j=0}^{n} \frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+k \\ k \end{array} \right) }\mathcal{S}_{k}^{j+k}(x;a,b;\lambda )\mathfrak{B}_{n-j}(y;\lambda ;a,b,b)\right) \frac{t^{n}}{n!}. \end{equation*} Comparing the coefficients of $\frac{t^{n}}{n!}$ on both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By setting $x=y=0$, $a=\lambda =1$ and $b=e$, Theorem \ref{Teo19} yields the corresponding result which is proven by Agoh and Dilcher \cite{AgohDilcher}. \end{remark}
\begin{theorem} The following relationship holds true: \begin{equation*} \mathfrak{B}_{n}^{(u-v)}(x+y;\lambda ;a,b,b)=\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+v \\ v \end{array} \right) }\mathcal{S}_{v}^{j+v}(x;a,b;\lambda )\mathfrak{B}_{n-j}^{(u)}(y;\lambda ;a,b,b). \end{equation*} \end{theorem}
\begin{proof} We set \begin{equation} \left( \frac{\lambda b^{t}-a^{t}}{t}\right) ^{v}b^{xt}\left( \frac{t}{\lambda b^{t}-a^{t}}\right) ^{u}b^{yt}=\left( \frac{t}{\lambda b^{t}-a^{t}}\right) ^{u-v}b^{\left( x+y\right) t}. \label{wc3} \end{equation} Combining (\ref{w3}) and (\ref{1S}) with the above equation, and by using the same calculations as in the proof of Theorem \ref{Teo19}, we arrive at the desired result. \end{proof}
\section{Application of the Laplace transform to the generating functions for the generalized Bernoulli polynomials and the generalized array type polynomials}
In this section, we give an application of the Laplace transform to the generating function for the generalized Bernoulli polynomials and the generalized array type polynomials. We obtain an interesting series representation for these families of polynomials.
By using (\ref{wc3}), we obtain \begin{eqnarray*} &&\dsum\limits_{n=0}^{\infty }\mathfrak{B}_{n}^{(u-v)}(\lambda ;a,b,b)\frac{t^{n}}{n!}e^{-t(y-x)\ln b} \\ &=&\dsum\limits_{n=0}^{\infty }\left( \sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+v \\ v \end{array} \right) }\mathcal{S}_{v}^{j+v}(x;a,b;\lambda )\mathfrak{B}_{n-j}^{(u)}(\lambda ;a,b,b)\right) \frac{t^{n}}{n!}e^{-ty\ln b}. \end{eqnarray*} Integrating both sides of this equation (by parts) with respect to $t$ from $0$ to $\infty $, we get \begin{eqnarray*} &&\dsum\limits_{n=0}^{\infty }\frac{\mathfrak{B}_{n}^{(u-v)}(\lambda ;a,b,b)}{n!}\dint\limits_{0}^{\infty }t^{n}e^{-t(y-x)\ln b}dt \\ &=&\dsum\limits_{n=0}^{\infty }\left( \frac{1}{n!}\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+v \\ v \end{array} \right) }\mathcal{S}_{v}^{j+v}(x;a,b;\lambda )\mathfrak{B}_{n-j}^{(u)}(\lambda ;a,b,b)\right) \dint\limits_{0}^{\infty }t^{n}e^{-ty\ln b}dt. \end{eqnarray*} By using the Laplace transform in the above equation, we arrive at the following theorem:
\begin{theorem} \label{TeoL}The following relationship holds true: \begin{equation*} \dsum\limits_{n=0}^{\infty }\frac{\mathfrak{B}_{n}^{(u-v)}(\lambda ;a,b,b)}{(\ln b^{y-x})^{n+1}}=\dsum\limits_{n=0}^{\infty }\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+v \\ v \end{array} \right) }\frac{\mathcal{S}_{v}^{j+v}(x;a,b;\lambda )\mathfrak{B}_{n-j}^{(u)}(\lambda ;a,b,b)}{\left( \ln b^{y}\right) ^{n+1}}. \end{equation*} \end{theorem}
\begin{remark} When $a=\lambda =1$ and $b=e$, Theorem \ref{TeoL}\ is reduced to the following result: \begin{equation*} \dsum\limits_{n=0}^{\infty }\frac{B_{n}^{(u-v)}}{(y-x)^{n+1}}=\dsum\limits_{n=0}^{\infty }\sum_{j=0}^{n}\frac{\left( \begin{array}{c} n \\ j \end{array} \right) }{\left( \begin{array}{c} j+v \\ v \end{array} \right) }\frac{S_{v}^{j+v}(x)B_{n-j}^{(u)}}{y^{n+1}}. \end{equation*} \end{remark}
\section{Applications of the $p$-adic integral to the family of the normalized polynomials and the generalized $\protect\lambda $-Stirling type numbers}
By using the $p$-adic integrals on $\mathbb{Z}_{p}$, we derive some new identities related to the Bernoulli numbers, the Euler numbers, the generalized Eulerian type numbers and the generalized $\lambda $-Stirling type numbers.
In order to prove the main results in this section, we recall each of the following known results related to the $p$-adic integral.
Let $p$ be a fixed prime. It is known that \begin{equation*} \mu _{q}(x+p^{N}\mathbb{Z}_{p})=\frac{q^{x}}{\left[ p^{N}\right] _{q}} \end{equation*} is a distribution on $\mathbb{Z}_{p}$ for $q\in \mathbb{C}_{p}$ with $\mid 1-q\mid _{p}<1$, cf. \cite{T. Kim}. Let $UD\left( \mathbb{Z}_{p}\right) $ be the set of uniformly differentiable functions on $\mathbb{Z}_{p}$. The $p$-adic $q$-integral of the function $f\in UD\left( \mathbb{Z}_{p}\right) $ is defined by Kim \cite{T. Kim} as follows: \begin{equation*} \int_{\mathbb{Z}_{p}}f(x)d\mu _{q}(x)=\lim_{N\rightarrow \infty }\frac{1}{[p^{N}]_{q}}\sum_{x=0}^{p^{N}-1}f(x)q^{x}, \end{equation*} where \begin{equation*} \left[ x\right] _{q}=\frac{1-q^{x}}{1-q}. \end{equation*} In the bosonic limit $q\rightarrow 1$, this $q$-integral yields the \textit{bosonic} $p$-adic integral (the $p$-adic Volkenborn integral), as follows (\cite{T. Kim}): \begin{equation} \int\limits_{\mathbb{Z}_{p}}f\left( x\right) d\mu _{1}\left( x\right) =\underset{N\rightarrow \infty }{\lim }\frac{1}{p^{N}}\sum_{x=0}^{p^{N}-1}f\left( x\right) , \label{M} \end{equation} where \begin{equation*} \mu _{1}\left( x+p^{N}\mathbb{Z}_{p}\right) =\frac{1}{p^{N}}. \end{equation*} The $p$-adic $q$-integral is used in many branches of mathematics, mathematical physics and other areas cf. (\cite{Amice}, \cite{T. Kim}, \cite{KimSimskJKM}, \cite{Schikof}, \cite{K. Shiratani}, \cite{SimsekJMAA}, \cite{SimsekADEA}, \cite{srivas18}, \cite{Volkenborn}).
By using (\ref{M}), we have Witt's formula for the Bernoulli numbers $B_{n}$ as follows: \begin{equation} \int\limits_{\mathbb{Z}_{p}}x^{n}d\mu _{1}\left( x\right) =B_{n} \label{M1} \end{equation} cf. (\cite{Amice}, \cite{T. Kim}, \cite{Kim2006TMIC}, \cite{KimmskimlcjangJIA}, \cite{Schikof}, \cite{Volkenborn}).
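For example, for $f(x)=x$ the defining limit in (\ref{M}) gives
\begin{equation*}
\int\limits_{\mathbb{Z}_{p}}x\,d\mu _{1}\left( x\right) =\underset{N\rightarrow \infty }{\lim }\frac{1}{p^{N}}\cdot \frac{p^{N}\left( p^{N}-1\right) }{2}=\underset{N\rightarrow \infty }{\lim }\frac{p^{N}-1}{2}=-\frac{1}{2}=B_{1},
\end{equation*}
since $p^{N}\rightarrow 0$ $p$-adically; this is the case $n=1$ of (\ref{M1}).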
We consider the \textit{fermionic} integral in contrast to the conventional bosonic one, which is called the fermionic $p$-adic integral on $\mathbb{Z}_{p}$ cf. \cite{Kim2006TMIC}. That is \begin{equation} \int\limits_{\mathbb{Z}_{p}}f\left( x\right) d\mu _{-1}\left( x\right) =\underset{N\rightarrow \infty }{\lim }\sum_{x=0}^{p^{N}-1}\left( -1\right) ^{x}f\left( x\right) \label{Mm} \end{equation} where \begin{equation*} \mu _{-1}\left( x+p^{N}\mathbb{Z}_{p}\right) =\left( -1\right) ^{x} \end{equation*} cf. \cite{Kim2006TMIC}. By using (\ref{Mm}), we have Witt's formula for the Euler numbers $E_{n}$ as follows: \begin{equation} \int\limits_{\mathbb{Z}_{p}}x^{n}d\mu _{-1}\left( x\right) =E_{n}, \label{Mm1} \end{equation} cf. (\cite{Kim2006TMIC}, \cite{KimmskimlcjangJIA}, \cite{SimsekADEA}, \cite{srivas18}).
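Similarly, taking $f(x)=x$ in (\ref{Mm}) one finds, for odd $p$,
\begin{equation*}
\int\limits_{\mathbb{Z}_{p}}x\,d\mu _{-1}\left( x\right) =\underset{N\rightarrow \infty }{\lim }\sum_{x=0}^{p^{N}-1}(-1)^{x}x=\underset{N\rightarrow \infty }{\lim }\frac{p^{N}-1}{2}=-\frac{1}{2}=E_{1},
\end{equation*}
where $E_{n}=E_{n}(0)$; this is the case $n=1$ of (\ref{Mm1}).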
The Volkenborn integral in terms of the Mahler coefficients is given by the following Theorem:
\begin{theorem} \label{TSHiR}Let \begin{equation*} f(x)=\sum_{j=0}^{\infty }a_{j}\left( \begin{array}{c} x \\ j \end{array} \right) \in UD\left( \mathbb{Z}_{p}\right) . \end{equation*} Then \begin{equation*} \int\limits_{\mathbb{Z}_{p}}f(x)d\mu _{1}\left( x\right) =\sum_{j=0}^{\infty }a_{j}\frac{(-1)^{j}}{j+1}. \end{equation*} \end{theorem}
Proof of Theorem \ref{TSHiR} was given by Schikhof \cite{Schikof}.
\begin{theorem} \label{L1} \begin{equation*} \int\limits_{\mathbb{Z}_{p}}\left( \begin{array}{c} x \\ j \end{array} \right) d\mu _{1}\left( x\right) =\frac{(-1)^{j}}{j+1}. \end{equation*} \end{theorem}

Proof of Theorem \ref{L1} was given by Schikhof \cite{Schikof}.
\begin{theorem} The following relationship holds true: \begin{equation} B_{m}=\frac{1}{\ln ^{m}b}\sum_{j=0}^{m}(-1)^{j}\frac{j!}{j+1}\mathcal{S} (m,j;1,b;1). \label{1Ss} \end{equation} \end{theorem}
\begin{proof} If we substitute $a=\lambda =1$ in Theorem \ref{T3}, we have \begin{equation*} \left( \ln b^{x}\right) ^{m}=\sum_{j=0}^{m}\left( \begin{array}{c} x \\ j \end{array} \right) j!\mathcal{S}(m,j;1,b;1). \end{equation*} By applying the $p$-adic Volkenborn integral, together with Theorem \ref{L1}, to both sides of the above equation, we arrive at the desired result. \end{proof}
\begin{remark} By substituting $b=e$ into (\ref{1Ss}), we have \begin{equation*} B_{m}=\sum_{j=0}^{m}(-1)^{j}\frac{j!}{j+1}S(m,j), \end{equation*} where $S(m,j)$\ denotes the Stirling numbers of the second kind cf. (\cite{ChanManna}, \cite{http}, \cite{KimChiARXIV}). \end{remark}
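For example, for $m=2$ one has $S(2,0)=0$ and $S(2,1)=S(2,2)=1$, so that
\begin{equation*}
\sum_{j=0}^{2}(-1)^{j}\frac{j!}{j+1}S(2,j)=-\frac{1}{2}+\frac{2}{3}=\frac{1}{6}=B_{2}.
\end{equation*}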
\begin{theorem} \label{Teo14} The following relationship holds true: \begin{eqnarray*} &&\sum_{j=0}^{n}\binom{n}{j}\left( \ln a\right) ^{n-j}\left( \ln c\right) ^{j}B_{j}-u(\ln c)^{n}B_{n} \\ &=&\sum_{j=0}^{n}\binom{n}{j}\left( \ln c\right) ^{j}\left( \lambda \left( \mathcal{H}(u;a,b,c;\lambda )+\ln b\right) ^{n-j}-u\mathcal{H} _{n-j}(u;a,b,c;\lambda )\right) B_{j}. \end{eqnarray*} \end{theorem}
\begin{proof} By using Theorem \ref{t8}, we have \begin{eqnarray} &&\sum_{j=0}^{n}\binom{n}{j}\left( \ln a\right) ^{n-j}\left( \ln c\right) ^{j}x^{j}-u(\ln c)^{n}x^{n} \label{M1a} \\ &=&\sum_{j=0}^{n}\binom{n}{j}\left( \ln c\right) ^{j}x^{j}\left( \lambda \left( \mathcal{H}(u;a,b,c;\lambda )+\ln b\right) ^{n-j}-u\mathcal{H}_{n-j}(u;a,b,c;\lambda )\right) . \notag \end{eqnarray} By applying the Volkenborn integral (\ref{M}) to both sides of the above equation, we get \begin{eqnarray*} &&\sum_{j=0}^{n}\binom{n}{j}\left( \ln a\right) ^{n-j}\left( \ln c\right) ^{j}\int\limits_{\mathbb{Z}_{p}}x^{j}d\mu _{1}(x)-u(\ln c)^{n}\int\limits_{\mathbb{Z}_{p}}x^{n}d\mu _{1}(x) \\ &=&\sum_{j=0}^{n}\binom{n}{j}\left( \ln c\right) ^{j}\left( \lambda \left( \mathcal{H}(u;a,b,c;\lambda )+\ln b\right) ^{n-j}-u\mathcal{H}_{n-j}(u;a,b,c;\lambda )\right) \int\limits_{\mathbb{Z}_{p}}x^{j}d\mu _{1}(x). \end{eqnarray*} By substituting (\ref{M1}) into the above equation, we easily arrive at the desired result. \end{proof}
\begin{remark} By substituting $b=c=e$ and $a=\lambda =1$ into Theorem \ref{Teo14}, we arrive at the following nice identity: \begin{equation*} B_{n}=\frac{1}{1-u}\sum_{j=0}^{n}\binom{n}{j}\left( \left( H(u)+1\right) ^{n-j}-uH_{n-j}(u)\right) B_{j}. \end{equation*} \end{remark}
\begin{theorem} \label{Te15} The following relationship holds true: \begin{eqnarray*} &&\sum_{j=0}^{n}\binom{n}{j}\left( \ln a\right) ^{n-j}\left( \ln c\right) ^{j}E_{j}-u(\ln c)^{n}E_{n} \\ &=&\sum_{j=0}^{n}\binom{n}{j}\left( \ln c\right) ^{j}\left( \lambda \left( \mathcal{H}(u;a,b,c;\lambda )+\ln b\right) ^{n-j}-u\mathcal{H} _{n-j}(u;a,b,c;\lambda )\right) E_{j}. \end{eqnarray*} \end{theorem}
\begin{proof} Proof of Theorem \ref{Te15} is same as that of Theorem \ref{Teo14}. Combining (\ref{Mm}), (\ref{M1a}) and (\ref{Mm1}), we easily arrive at the desired result. \end{proof}
\begin{remark} By substituting $b=c=e$ and $a=\lambda =1$ into Theorem \ref{Te15}, we arrive at the following nice identity: \begin{equation*} E_{n}=\frac{1}{1-u}\sum_{j=0}^{n}\binom{n}{j}\left( \left( H(u)+1\right) ^{n-j}-uH_{n-j}(u)\right) E_{j}. \end{equation*} \end{remark}
\end{document}
Collier's New Encyclopedia (1921)/United States of America
UNITED STATES OF AMERICA, a Federal republic, composed of 48 States, the District of Columbia, the District of Alaska, the territories of Hawaii and Porto Rico, the Philippine Islands, Guam, Tutuila, the Panama Canal Zone, and the Virgin Islands; chiefly occupying the temperate portions of North America from lat. 24° 20′ to 49° N., and lon. 66° 48′ to 124° 32′ W.
Boundary.—The United States is bounded on the N. by British North America, the boundary line running through the Strait of Juan de Fuca to the S. of Vancouver's Island, but to the N. of the island of San Juan, striking the mainland at the 49th parallel and running along that parallel to the Lake of the Woods, and thence by a devious route through the Great Lakes and along the Laurentian water-shed to the St. John's and St. Croix rivers and Fundy Bay. The land boundary is a clearing 80 feet wide, with iron mile posts 4 feet high painted white. The E. and W. boundaries are formed by the Atlantic and Pacific Oceans respectively, the S. boundary by the Gulf of Mexico, the Rio Grande del Norte up to the 32d parallel, and a broken line drawn between the 31st and 33d parallels to the Pacific separating the United States from Mexico. These boundaries do not include Alaska. The ocean shore lines are as follows: North Atlantic coast, including bays, islands, etc., 6,150 miles; South Atlantic coast, 6,209; Mexican Gulf coast, 5,744; Pacific coast, 3,251—total, 21,354. The land, lake, and river boundary toward Canada is 3,700 miles, and the similar one toward Mexico, 2,105 miles; making the total ocean, land, lake, and river boundary, 11,075 miles. Excluding Alaska the greatest Continental extent E. and W. is 3,100 miles and N. and S., 1,780 miles.
Area.—The following tables give the area of the continental territory by States and Territories.
State or Territory | Date of Act of Organization or Admission | Total (Sq. Miles)
Original States:
New Hampshire 9,341
Massachusetts 8,266
Rhode Island 1,248
Connecticut 4,965
New York 49,204
New Jersey 8,224
Pennsylvania 45,126
Delaware 2,379
Maryland 12,327
Virginia 42,627
North Carolina 52,426
South Carolina 30,989
Georgia 59,265
States Without Previous Territorial Organization Admitted:
Kentucky 40,598
Tennessee 42,022
Maine 33,040
Texas 265,896
West Virginia 24,170
States With Previous Territorial Organization Admitted:
Territories, etc.:
Total exclusive of Alaska and Hawaii 3,026,789
Total including Alaska and Hawaii 3,624,122
Authorities differ somewhat on some of these dates and areas: the above are from Census reports.
NONCONTIGUOUS TERRITORY OF THE UNITED STATES: DATES OF ACQUISITION AND ORGANIZATION, AND POPULATION AND AREA
Alaska (District): area, 590,884 sq. miles.
Guam: area, 210 sq. miles; population (1919), 14,969.
Hawaii: area, 6,449 sq. miles.
Panama Canal Zone.
Philippine Islands: area, 115,026 sq. miles; population (1919), 9,101,427.
Porto Rico: area, 3,435 sq. miles; population (1919), 1,262,158.
Tutuila Group: area, 77 sq. miles; population (1916), 7,550.
Virgin Islands.
Topography.—The two great mountain systems of the United States are the Appalachians and the Rocky Mountains. The former extend from the mouth of the St. Lawrence to the mouth of the Mississippi—a distance of 1,300 miles—and at the S. bend inland, leaving the wide and rich seaboard of Virginia, the Carolinas, Georgia, Alabama, and Florida. This maritime region includes all the older States, and its inhabitants still amount to one-third of the whole. As far S. as the Hudson river it is hilly; thence, as far as the Alleghenies extend, its surface is divided between a plain and a mountain slope, the base of which appears to have been the shore of an ancient sea. The most fertile part of this slope is between Long Island and the Potomac. The coast to the Mississippi is sandy throughout; from Long Island to North Carolina it is marshy only close to the sea, but farther S. the seaward half of the plain is covered with swamps. The Appalachians form the watershed between the rivers draining into the Atlantic and the tributaries to the Mississippi, though some of the former may be said to rise on the inland side of the mountains, and to force a passage through them to the sea. The principal rivers falling into the Atlantic are the Penobscot, Kennebec, Merrimac, Connecticut, Hudson, Delaware, Susquehanna, Potomac, Rappahannock, James, Roanoke, Pedee, Santee, Savannah, and Altamaha. The Chattahoochee and the Flint river joining form the Appalachicola; the Alabama and Tombigbee, the Mobile; these drain into the Gulf of Mexico E. of the Mississippi.
The great central plains and prairies between the Appalachians and the Rocky Mountains are drained almost entirely by the Mississippi and its affluents, chief of which are the Ohio, Tennessee, Missouri, Arkansas, and Red river. The only other river of great importance flowing into the Gulf of Mexico is the great boundary river, the Rio Grande del Norte. The streams flowing N. are trifling, the principal being the Red river of the North, which flows into Lake Winnipeg. Almost the whole of the Mississippi basin consists of open, rolling prairies, while, on the other hand, almost all the country between the Appalachians and the Atlantic was originally more or less thickly wooded. Between the Rocky Mountains and the Pacific Alps, called Sierra Nevada, in California and Cascade Range farther N., lies a rainless region, mostly S. of lat. 45° N., with an average elevation of 5,000 feet above the ocean, great part of it communicating, not with the sea, but draining into salt lakes and marshes. Except where irrigated, this plateau is utterly unproductive. To the N. it is drained by the Columbia, with its tributary the Snake river, which forces its way through the Sierras to the Pacific; while in the S. portion the Colorado and its affluents, after flowing through frightful cañons 3,000 to 5,000 feet below the surface of the plateau for some 600 miles, forms a delta at the head of the Gulf of California. The Great Cañon of the Colorado is more than 300 miles long. Between the Sierras and the ocean stretches the comparatively narrow but rich and beautiful sea-coast known as the Pacific Slope, drained by the Columbia, the Klamath, the Sacramento, and the San Joaquin, along with numerous smaller streams. The "Great Divide," or watershed, is in Montana and Wyoming, whence flow the Missouri, Columbia, and Colorado. In this wild region Congress set apart in February, 1872, the Yellowstone National Park, a tract 62 by 54 miles in extent (3,312 sq. miles) in the N. W. of Wyoming. The region, while mostly unfit for agriculture and mining, contains more natural marvels than can be found elsewhere. There are hot springs with their basins incrusted with calcareous spar, steam jets, geysers, mud volcanoes, waterfalls, caves with stalactites and stalagmites, eroded columns, statues, castles, cathedrals, etc., and a large lake swarming with fish. The valley of the Upper Yellowstone abounds in these wonders. Further details of the topography of the country will be found in the articles on the several States and Territories.
Climate.—The vast area of the United States necessarily exhibits a great variety of climate. New York has the summer of Copenhagen and the winter of Rome, the minimum range of the mercury being 5° in winter, and the maximum 98° in summer. The States bordering on Canada exceed both of these extremes, but throughout the Middle States, lat. 37°-41°, the climate is agreeable and often delightful throughout most of the year. The main peculiarity of the North American seasons is the almost total absence of spring. Mason and Dixon's Line, with its W. extension along the Ohio, Mississippi, and Missouri, has a historical interest, but is also of climatic importance in the geography of the cis-Missouri States. N. of it, sleighs are in frequent use during winter; S. of it, they are seen rarely. To the N. the productions are those of the temperate zone, and the States were always free; to the S., the country becomes more and more tropical as one advances. From meridians 98° to 100° the climate is still variable from year to year, seasons of rain and plenty being followed by others in which drought is the forerunner of scarcity. But the planting of forest trees and the cultivation of the soil, at first by irrigation, has largely increased the amount of rainfall. Along the Pacific seaboard, especially in California, the climate resembles that of S. Europe. The isothermal lines, roughly stated, show a mean temperature of 72° for Florida, the Gulf Shores, and Arizona; of from 52° to 60° for S. of Pennsylvania, Virginia, the N. border of the Carolinas, Tennessee, Missouri, Kansas, S. of Utah and Nevada, and the greater part of California; from 44° to 52° for Massachusetts, New York, Michigan, northern Illinois, Nebraska, Oregon and Washington; and from 36° to 44° for Maine, parts of New Hampshire and Vermont, Wisconsin, Minnesota, the whole crest of the Rocky Mountains, and parts of Oregon and California along the Sierras. The annual rainfall ranges from 56 to 64 inches in the S. of Florida and along the N. W. Pacific coast; 44 to 56 inches over the New England coast and the greater part of the Southern States, while in New York, Pennsylvania, Illinois, etc., it is 32-44 inches. In Texas, Indian Territory, eastern Kansas and Nebraska, Dakota and Minnesota, and western California, it is 20-32 inches, while in the tract between 98° and 118° it ranges from 18 to 4 inches. Malarial diseases prevail in the lowlands of most of the Southern States, as also in the new and marshy portions of the Western States below lat. 40° N. Consumption and chest diseases prevail in New England and in the Middle States. Minnesota, Colorado, California, Arkansas, Georgia, and Florida are favorite resorts for persons with weak lungs. On the whole, the climate of the United States may be called healthy, malarious and deadly spots being very few; while certain districts, especially of Florida, the central plains, and the Pacific coast, are among the most salubrious in the world.
Geology and Mineralogy.—Geologically as well as geographically the United States is divided into two great sections by the Rocky Mountains, along whose whole extent, in a wide belt from N. to S., Cretaceous formations predominate, with occasional stretches of Carboniferous strata. Tertiary formations embrace almost the whole of the basin between the Rocky Mountains and the Coast Range, broken by igneous rocks in Washington and in Oregon, and by Metamorphic strata along the Sierras; in the E. section Tertiary formations stretch along the coast from the Rio Grande almost to the Hudson. Metamorphic, igneous, and Devonian rocks prevail in New England, and along the shores of the Great Lakes the Middle Devonian or Old Red Sandstone. Older Palæozoic groups occur in Wisconsin, Ohio, and Tennessee, and run side by side with Metamorphic strata along the Appalachians, while a large proportion of the interior is occupied by great Carboniferous deposits. Anthracite coal occurs in the basins of Pennsylvania, which embrace about 472 square miles, and extend to a depth of from 60 to 100 feet. The Eastern coal fields embrace an area of over 69,000 square miles; the interior, 132,000 square miles; the Gulf, 2,100; the Northern, 88,590; the Rocky Mountain, 37,000, and the Pacific coast, 1,900. (See Coal). The ores of iron abound in the States, and include all known ores. The ore beds most largely worked are in Minnesota, Michigan, Alabama, Wisconsin, New York, Tennessee, Virginia, and New Jersey. Copper ore is found chiefly in Arizona, Michigan, Montana, Utah, Nevada, New Mexico, California, Tennessee, Alaska, Illinois, Kansas and Oklahoma; lead ores (galena) in Missouri, Idaho, Utah, etc., quicksilver in California and Nevada. Gold and silver are widely distributed, 24 States and Territories reporting them; but California, Colorado, Nevada, Montana, Arizona, South Dakota, and Utah produce the larger part; Nevada alone about one-half. Nevada, Utah, and Arizona yield more silver than gold.
Flora and Fauna.—The indigenous plants of the United States are estimated at about 5,000 species, California alone producing at least 2,500. The potato, the tobacco plant, and maize, now so familiar in Europe, have all been introduced from the United States or Mexico. The United States is especially rich in valuable timber trees, of which no less than 120 species, growing in sufficient quantities to be of commercial importance, attain a height of 100 feet and upward. Of these 12 species reach an altitude of 200 feet, and 5 or 6 exceed 300 feet. Hickory, magnolia, liquidambar, sassafras, and sequoia trees (to which species belong the giant trees of California), found only in a fossil state in the Old World, abound in the United States, as well as palmetto, tulip tree, cypress, cottonwood, live oak, and other oaks, and a number of trees more or less closely resembling the common species of western Europe, to which the same names have been given.
Agriculture and Live Stock.—For the aggregate acreage, production, and values of the principal agricultural crops, see Agriculture, the several State and Territorial articles, and the individual crop articles. For the production and manufacture of Cotton see article thereon. Manufactures in respect to product constitute the leading industry of the United States, and their importance is increasing more rapidly than that of agriculture. The manufacturing section is situated mainly in the North Atlantic States, spreading with diminishing importance W., following closely the distribution of the urban population. About half of the manufactured product comes from the nine States included in the North Atlantic group, and about one-third from the North Central States.
Manufactures.—The following table presents a summary of the manufacturing interests of the United States in 1899, 1904, 1909 and 1914:
Group    Year    Establishments    Average Wage Earners    Cost of Materials    Value of Products    Value Added
Food and kindred products 1899 41,247 301,868 $1,782,863,000 $2,199,204,000 $416,341,000
1904 45,857 354,046 2,306,121,000 2,845,556,000 539,435,000
Textiles 1899 17,647 1,022,123 894,846,000 1,628,606,000 733,760,000
1904 17,042 1,166,305 1,246,562,000 2,147,441,000 900,879,000
1909 21,723 1,438,446 1,745,516,000 3,060,199,000 1,314,683,000
Iron and steel and their products 1899 14,082 745,235 1,000,949,000 1,819,478,000 818,529,000
1904 14,431 868,634 1,190,794,000 2,199,776,000 1,008,982,000
Lumber and its manufactures 1899 34,954 671,696 480,930,000 1,007,532,000 526,602,000
1904 32,501 734,136 517,501,000 1,219,749,000 702,248,000
Leather and its finished products 1899 5,625 248,626 396,633,000 582,048,000 185,415,000
1904 5,318 264,459 480,221,000 724,391,000 244,170,000
1914 6,758 307,060 753,135,000 1,104,595,000 351,460,000
Paper and printing 1899 26,627 298,744 214,566,000 607,907,000 393,341,000
1904 30,803 351,640 309,012,000 859,814,000 550,802,000
Liquors and beverages 1899 5,740 55,120 93,815,000 382,898,000 289,083,000
1904 6,379 68,338 139,849,000 501,254,000 361,405,000
1909 7,347 77,827 186,128,000 674,311,000 488,183,000
Chemicals and allied products 1899 8,928 196,538 451,457,000 761,691,000 310,234,000
Stone, clay, and glass products 1899 11,524 231,716 85,137,000 270,650,000 185,513,000
Metals and metal products, other than iron and steel 1899 5,041 161,463 $472,515,000 $690,974,000 $218,459,000
Tobacco manufactures 1899 14,959 132,526 92,867,000 263,713,000 170,846,000
Vehicles for land transportation 1899 7,338 133,663 153,254,000 277,485,000 124,231,000
Railroad repair shops 1899 1,400 180,620 113,809,000 227,485,000 113,676,000
Miscellaneous industries 1899 12,402 332,825 342,210,000 687,256,000 345,046,000
All industries 1899 207,514 4,712,763 6,575,851,000 11,406,927,000 4,831,076,000
1904 216,180 5,468,383 8,500,208,000 14,793,903,000 6,293,695,000
1909 268,491 6,615,046 12,142,791,000 20,672,052,000 8,529,261,000
Commerce.—The subjoined table is a summary of the foreign trade of the United States in the year ending June 30, 1920:
Groups    Dollars (Twelve Months Ending June 30, 1920)    Per Cent.
Free of duty:
Crude materials for use in manufacturing 1,912,403,056 56.16
Foodstuffs in crude condition, and food animals 547,376,705 16.07
Foodstuffs partly or wholly manufactured 65,895,555 1.94
Manufactures for further use in manufacturing 518,921,062 15.24
Manufactures ready for consumption 331,090,664 9.72
Miscellaneous 29,762,752 .87
Total free of duty 3,405,449,794 100.00
Dutiable:
Crude materials for use in manufacturing 229,241,565 12.50
Foodstuffs in crude condition, and food animals 75,063,040 4.09
Foodstuffs partly or wholly manufactured 825,440,909 45.04
Manufactures ready for consumption 414,035,025 22.59
Miscellaneous 7,599,114 .41
Total dutiable 1,833,171,874 100.00
Free and dutiable:
Total Imports of merchandise 5,238,621,668 100.00
Per cent. free . . . . . . . . 65.01
Duties collected from customs 322,902,649 . . . . .
Average ad valorem rate of duty, based on import for consumption . . . . . . . . 6.31
Remaining in warehouse at the end of the month . . . . . . . . . . . . .
Domestic:
Foodstuffs in crude condition, and food animals 626,577,003 7.88
Foodstuffs partly or wholly manufactured 1,514,616,127 19.05
Manufactures ready for consumption 2,835,999,005 35.67
Total domestic 7,950,429,180 100.00
Foreign 160,610,553 . . . . .
Total exports 8,111,039,733 . . . . .
Excess of exports 2,872,418,065 . . . . .
WATER-BORNE COMMERCE
Imports:
In American vessels 1,836,026,959 39.01
In foreign vessels 2,870,930,209 60.99
Total (except in land vehicles) 4,706,957,168 100.00
Exports:
Imports 150,540,200 . . . . .
Exports 466,592,606 . . . . .
Silver:
TONNAGE OF VESSELS Net Tons
Entered:
American 26,242,330 50.06
Foreign 26,178,328 49.94
Total entered 52,420,658 100.00
Cleared:
Total cleared 56,072,387 100.00
Railroads.—The following table gives the railway mileage of the United States on Jan. 1, 1919:
State or Territory    Total Mileage
Alabama 5,607.86
Arizona 2,390.56
Arkansas 5,418.62
California 8,499.01
Colorado 5,610.95
Connecticut 998.14
Delaware 340.51
Florida 5,299.53
Georgia 7,699.94
Idaho 2,913.13
Illinois 13,275.04
Indiana 7,707.60
Iowa 10,129.23
Kansas 9,556.74
Kentucky 4,073.29
Louisiana 5,645.84
Maine 2,352.49
Maryland 1,464.84
Massachusetts 2,134.16
Michigan 8,969.29
Minnesota 9,392.32
Mississippi 4,450.58
Missouri 8,810.56
Montana 5,094.12
Nebraska 6,254.61
Nevada 2,191.04
New Hampshire 1,250.35
New Jersey 2,513.63
New Mexico 3,060.27
New York 8,722.14
North Carolina 5,615.41
North Dakota 5,295.01
Ohio 9,665.38
Oklahoma 6,558.87
Oregon 3,404.72
Pennsylvania 12,476.15
Rhode Island 211.60
South Carolina 3,850.00
South Dakota 4,284.14
Tennessee 4,173.60
Texas 16,713.91
Utah 2,222.38
Vermont 1,081.06
Virginia 4,803.76
Washington 6,292.09
West Virginia 4,007.70
Wisconsin 7,736.18
Wyoming 1,929.62
Alaska ......
District of Columbia 53.57
Hawaii ......
Dec. 31, 1918 262,201.54
Dec. 31, 1917 263,928.65
June 30, 1916 264,024.77
Canals.—The principal canals in the United States, the year of opening, and their total length, are as follows:
Name of Canal, and State Year Miles
Cape Cod Ship Canal, Mass. 1914 13
Erie, and Branches, N. Y. 1825 340.4
Delaware and Raritan, N. J. 1834 44
Schuylkill Navigation Co., Pa. 1825 86.96
Chesapeake and Ohio, Md. 1850 184.5
Illinois and Michigan, Ill. 1848 95
Chicago Drainage and Ship Canal, Ill. 1900 38.6
Illinois and Mississippi, Ill. 1907 75
Galveston and Brazos, Tex. .... 36
Monongahela Canal, Pa. 1879 128
Ohio Canal, Pa. 1885 968.5
Muskingum, Ohio. 1840 91
Illinois Canal, Ill. 1889 223
Fox Canal, Wis. 1856 176
Cumberland Canal, Tenn. and Ky. 1905 326
Black Warrior, Ala. 1895 362
Coosa Canal, Ala. 1890 165
Trinity River, Texas 1909 330
Brazos River, Texas 1915 425
Religion and Education.—There is no State or officially recognized religion in the United States. Every form of religious belief is tolerated by National and State laws, but no sectarian distinctions are permitted to be considered in public legislation, the prevailing sentiment of the country being that each sect or denomination must maintain itself without any public aid. The Roman Catholic is the most powerful religious body. Its membership as reported represents the entire Roman Catholic population as compared with the communicant members of other denominations. It is derived from various sources spread widely over the country. In the Northeastern States it is made up largely of Irish and French-Canadian stock, while further W. along the shores of the Great Lakes the Roman Catholics are chiefly French Canadians by birth or extraction. The Methodist and Baptist denominations are strongest in the Southern States; the Presbyterian in the Middle and Southern States and the upper Mississippi valley; the Episcopalian in the Northeastern States; and the Congregational mostly in New England. The educational establishment is treated very fully under titles that will readily suggest themselves to the reader, covering the public or common schools, the secondary, and the advanced and professional institutions.
Banking and Insurance.—The progress and results of banking legislation, from the earliest period to the latest Act of Congress bearing thereon, are set forth under Bank, Banks in the United States, each title showing the latest official statistics available. Under the title of Insurance will be found mention of the kinds in operation, with an approximate view of present conditions.
Revenue and Expenditure.—The following table shows the receipts and disbursements of the Government for the fiscal year 1921:
COMPARATIVE ANALYSIS OF RECEIPTS AND DISBURSEMENTS
Customs $150,097,265.73
Internal revenue:
Income and profits tax 1,628,203,930.54
Miscellaneous 770,064,311.20
Miscellaneous revenue [1]415,452,127.16
Panama Canal tolls, etc. 3,701,642.85
Total ordinary $2,967,519,277.48
Excess of ordinary receipts over ordinary disbursements $459,504,944.43
Excess of ordinary disbursements over ordinary receipts . . . . . . . . . . . . . .
Liberty bonds and Victory notes 35,075.00
Certificates of indebtedness 4,613,223,450.00
War-savings securities 12,142,660.18
Postal Savings bonds 72,800.00
Deposits for retirement of National bank notes and Federal Reserve bank notes (acts of July 14, 1890, and Dec. 23, 1913) 7,548,147.50
Total 4,633,022,132.68
Grand total receipts 7,600,541,410.16
Checks and warrants paid (less balances repaid, etc.) 1,950,396,545.30
Interest on public debt paid 478,418,864.44
Panama Canal: Checks paid (less balances repaid, etc.) 6,028,931.76
Purchase of obligations of foreign Governments 57,201,633.53
Purchase of Federal farm loan bonds:
Principal 15,850,000.00
Accrued interest 118,358.08
Total ordinary 2,508,014,333.05
Bonds, interest-bearing notes, and certificates retired 4,937,738,624.14
National bank notes and Federal Reserve bank notes retired (acts of July 14, 1890, and Dec. 23, 1913) 7,538,741.00
Grand total disbursements $7,453,291,698.19
Debt.—The following is the official statement of the public debt Dec. 31, 1920:
Total gross debt, Nov. 30, 1920 $24,175,156,244.14
Public-debt receipts, Dec. 1 to 31, 1920 $1,412,328,847.46
Public-debt disbursements, Dec. 1 to 31, 1920 1,600,418,856.99
[2]Decrease in fractional currency outstanding 4,842,066.45
Decrease for period 192,932,075.98
Total gross debt, Dec. 31, 1920 $23,982,224,168.16
Note.—Total gross debt before deduction of the balance held by the Treasurer free of current obligations, and without any deduction on account of obligations of foreign Governments or other investments, was as follows:
Bonds:
Consols of 1930 $599,724,050.00
Loan of 1925 118,489,900.00
Panama's of 1916-1936 48,954,180.00
Panama's of 1918-1938 25,947,400.00
Panama's of 1961 50,000,000.00
Conversion Bonds 28,894,500.00
Postal Savings Bonds 11,612,160.00
$883,622,190.00
First Liberty Loan 1,952,368,450.00
Second Liberty Loan 3,323,137,800.00
Third Liberty Loan 3,646,868,400.00
Fourth Liberty Loan 6,363,733,163.00
15,286,107,813.00
Total bonds $16,169,730,003.00
Victory Liberty Loan $4,225,970,755.00
Treasury Certificates:
Tax $1,651,694,500.00
Loan 648,961,500.00
Pittman Act 259,375,000.00
Special Issues 32,854,450.00
2,592,885,450.00
War Savings Securities (net cash receipts) 760,953,780.53
Total interest-bearing debt 23,749,539,988.53
Debt on which interest has ceased 7,441,490.26
Noninterest-bearing debt 225,242,689.37
Total gross debt $23,982,224,168.16
Defenses.—See Army; Military Organization, United States; Navy.
Pensions.—The number of pensioners on the roll at the end of the fiscal year 1920 was 592,190. The number of Civil War pensioners was 243,520, or a decrease of 27,871 during the year. There were 290,100 Civil War widows on the pension rolls. Of the War of 1812, there were on June 30, 1920, 71 surviving widows, and of the war with Mexico, 148 survivors and 2,432 widows. The pensioners of the Spanish-American War numbered 30,432. The total amount disbursed for pensions throughout the year was $213,295,314.
Post Office.—The revenue of the postal service for the fiscal year ending June 30, 1919, amounted to $436,239,126. The Act of Congress passed on November 7, 1917, increasing the postage rates, expired by limitation on June 30, 1919. The expenditure for the year was $362,497,635. In 1919 and 1920 mail service by aeroplane was developed to a point of practical value. Service was maintained between New York and Chicago, and other large cities. In 1919 there were 565,509 depositors in the postal savings banks, with deposits of $167,323,260.
Population.—The population of the United States from 1790 to 1890 was as follows:
1790 3,929,214
1830 12,866,020
The following table shows the population by States, compiled from the census reports for 1900, 1910, and 1920:
See also Census.
POPULATION OF THE UNITED STATES, BY STATES: 1920, 1910, AND 1900
State    Population: 1920, 1910, 1900    Increase[3] 1910-1920: Number, Per cent.    Increase[3] 1900-1910: Number, Per cent.
United States 105,710,620 91,972,266 75,994,575 13,736,505 14.9 15,977,691 21.0
Alabama 2,348,174 2,138,093 1,828,697 210,081 9.8 309,396 16.9
Arizona 334,162 204,354 122,931 129,808 63.5 81,423 66.2
Arkansas 1,752,204 1,574,449 1,311,564 177,755 11.3 262,885 20.0
California 3,426,861 2,377,549 1,485,053 1,049,312 44.1 892,496 60.1
Colorado 939,629 799,024 539,700 140,605 17.6 259,324 48.0
Connecticut 1,380,631 1,114,756 908,420 265,875 23.9 206,336 22.7
Delaware 223,003 202,322 184,735 20,681 10.2 17,587 9.5
District of Columbia 437,571 331,069 278,718 106,502 32.2 52,351 18.8
Florida 968,470 752,619 528,542 215,851 28.7 224,077 42.4
Georgia 2,895,832 2,609,121 2,216,331 286,711 11.0 392,790 17.7
Idaho 431,866 325,594 161,772 106,272 32.6 163,822 101.3
Illinois 6,485,280 5,638,591 4,821,550 846,689 15.0 817,041 16.9
Indiana 2,930,390 2,700,876 2,516,462 229,514 8.5 184,414 7.3
Iowa 2,404,021 2,224,771 2,231,853 179,250 8.1 −7,082 −0.3
Kansas 1,769,257 1,690,949 1,470,495 78,308 4.6 220,454 15.0
Kentucky 2,416,630 2,289,905 2,147,174 126,725 5.5 142,731 6.6
Louisiana 1,798,509 1,656,388 1,381,625 142,121 8.6 274,763 19.9
Maine 768,014 742,371 694,466 25,643 3.5 47,905 6.9
Maryland 1,449,661 1,295,346 1,188,044 154,315 11.9 107,302 9.0
Massachusetts 3,852,356 3,366,416 2,805,346 485,940 14.4 561,070 20.0
Michigan 3,668,412 2,810,173 2,420,982 858,239 30.5 389,191 16.1
Minnesota 2,387,125 2,075,708 1,751,394 311,417 15.0 324,314 18.5
Mississippi 1,790,618 1,797,114 1,551,270 −6,496 −0.4 245,844 15.8
Missouri 3,404,055 3,293,335 3,106,665 110,720 3.4 186,670 6.0
Montana 548,889 376,053 243,329 172,836 46.0 132,724 54.5
Nebraska 1,296,372 1,192,214 1,066,300 104,158 8.7 125,914 11.8
Nevada 77,407 81,875 42,335 −4,468 −5.5 39,540 93.4
New Hampshire 443,083 430,572 411,588 12,511 2.9 18,984 4.6
New Jersey 3,155,900 2,537,167 1,883,669 618,733 24.4 653,498 34.7
New Mexico 360,350 327,301 195,310 33,049 10.1 131,991 67.6
New York 10,386,227 9,113,614 7,268,894 1,271,613 14.0 1,844,720 25.4
North Carolina 2,559,123 2,206,287 1,893,810 352,836 16.0 312,477 16.5
North Dakota 646,872 577,056 319,146 69,816 12.1 257,910 80.8
Ohio 5,759,394 4,767,121 4,157,545 992,273 20.8 609,576 14.7
Oklahoma 2,028,283 1,657,155 790,391 371,128 22.4 866,764 109.7
Oregon 783,389 672,765 413,536 110,624 16.4 259,229 62.7
Pennsylvania 8,720,017 7,665,111 6,302,115 1,054,906 13.8 1,362,996 21.6
Rhode Island 604,397 542,610 428,556 61,787 11.4 114,054 26.6
South Carolina 1,683,724 1,515,400 1,340,316 168,324 11.1 175,084 13.1
South Dakota 636,547 583,888 401,570 52,659 9.0 182,318 45.4
Tennessee 2,337,885 2,184,789 2,020,616 153,096 7.0 164,173 8.1
Texas 4,663,228 3,896,542 3,048,710 766,686 19.7 847,832 27.8
Utah 449,396 373,351 276,749 76,045 20.4 96,602 34.9
Vermont 352,428 355,956 343,641 −3,528 −1.0 12,315 3.6
Virginia 2,309,187 2,061,612 1,854,184 247,575 12.0 207,428 11.2
Washington 1,356,621 1,141,990 518,103 214,631 18.8 623,887 120.4
West Virginia 1,463,701 1,221,119 958,800 242,582 19.9 262,319 27.4
Wisconsin 2,632,067 2,333,860 2,069,042 298,207 12.8 264,818 12.8
Wyoming 194,402 145,965 92,531 48,437 33.2 53,434 57.7
Government.—The form of government of the United States is based on the Constitution of Sept. 17, 1787, to which 10 amendments were added Dec. 15, 1791; an 11th amendment, Jan. 8, 1798; a 12th amendment, Sept. 25, 1804; a 13th amendment, Dec. 18, 1865; a 14th amendment, July 28, 1868; a 15th amendment, March 30, 1870; a 16th amendment, Feb. 13, 1913; a 17th amendment, May 31, 1913; an 18th amendment, Jan. 16, 1920; and a 19th amendment, Aug. 26, 1920. By the Constitution the government of the nation is intrusted to three separate authorities, the Executive, the Legislative, and the Judiciary. The executive power is vested in a President, who holds his office during the term of four years, and is elected, together with a Vice-President chosen for the same term, in the mode prescribed as follows: "Each State shall appoint, in such manner as the Legislature thereof may direct, a number of electors, equal to the whole number of senators and representatives to which the State may be entitled in the Congress; but no senator or representative, or person holding an office of trust or profit under the United States, shall be appointed an elector." The Constitution enacts that "the Congress may determine the time of choosing the electors, and the day on which they shall give their votes, which day shall be the same throughout the United States"; and further, that "no person except a natural-born citizen, or a citizen of the United States at the time of the adoption of this Constitution, shall be eligible to the office of President; neither shall any person be eligible to that office who shall not have attained the age of 35 years, and been 14 years a resident within the United States." The President is commander-in-chief of the army and navy and of the militia in the service of the Union. He has the power of a veto on all laws passed by Congress; but, notwithstanding his veto, any bill may become a law on its being afterward passed by each House of Congress by a two-thirds vote. The Vice-President is ex officio President of the Senate. The presidential succession is fixed by chapter 4 of the acts of the 49th Congress, 1st session. In case of the removal, death, resignation, or inability of both the President and Vice-President, then the Secretary of State shall act as President till the disability of the President or Vice-President is removed or a President is elected. If there be no Secretary of State, then the Secretary of the Treasury will act; and the remainder of the order of succession is: Secretary of War, Attorney-General, Postmaster-General, Secretary of the Navy, and Secretary of the Interior (the office of Secretary of Agriculture was created after the passage of the act). The acting President must, on taking office, convene Congress, if not at the time in session, in extraordinary session, giving 20 days' notice. This act applies only to such Cabinet officers as shall have been appointed by the advice and consent of the Senate and are eligible under the Constitution to the presidency. Following is a list of the Presidents, Vice-Presidents, and Cabinet officers since the inauguration of the Government:
No. Name Qualified
1 George Washington April 30, 1789
George Washington March 4, 1793
2 John Adams March 4, 1797
3 Thomas Jefferson March 4, 1801
Thomas Jefferson March 4, 1805
4 James Madison March 4, 1809
James Madison March 4, 1813
5 James Monroe March 4, 1817
James Monroe March 5, 1821
6 John Quincy Adams March 4, 1825
7 Andrew Jackson March 4, 1829
Andrew Jackson March 4, 1833
8 Martin Van Buren March 4, 1837
9 William H. Harrison March 4, 1841
10 John Tyler April 6, 1841
11 James K. Polk March 4, 1845
12 Zachary Taylor March 5, 1849
13 Millard Fillmore July 9, 1850
14 Franklin Pierce March 4, 1853
15 James Buchanan March 4, 1857
16 Abraham Lincoln March 4, 1861
Abraham Lincoln March 4, 1865
17 Andrew Johnson April 15, 1865
18 Ulysses S. Grant March 4, 1869
Ulysses S. Grant March 4, 1873
19 Rutherford B. Hayes March 5, 1877
20 James A. Garfield March 4, 1881
21 Chester A. Arthur Sept. 20, 1881
22 Grover Cleveland March 4, 1885
23 Benjamin Harrison March 4, 1889
24 Grover Cleveland March 4, 1893
25 William McKinley March 4, 1897
William McKinley March 4, 1901
26 Theodore Roosevelt Sept. 14, 1901
Theodore Roosevelt March 4, 1905
27 William H. Taft March 4, 1909
28 Woodrow Wilson March 4, 1913
Woodrow Wilson March 4, 1917
29 Warren G. Harding March 4, 1921
VICE-PRESIDENTS OF THE UNITED STATES
1 John Adams June 3, 1789
John Adams Dec. 3, 1793
2 Thomas Jefferson March 4, 1797
3 Aaron Burr March 4, 1801
4 George Clinton March 4, 1805
George Clinton March 4, 1809
William H. Crawford April 10, 1812
5 Elbridge Gerry March 4, 1813
John Gaillard Nov. 25, 1814
6 Daniel D. Tompkins March 4, 1817
Daniel D. Tompkins March 5, 1821
7 John C. Calhoun March 4, 1825
John C. Calhoun March 4, 1829
Hugh L. White Dec. 28, 1832
8 Martin Van Buren March 4, 1833
9 Richard M. Johnson March 4, 1837
10 John Tyler March 4, 1841
Samuel L. Southard April 6, 1841
Willie P. Mangum May 31, 1842
11 George M. Dallas March 4, 1845
12 Millard Fillmore March 5, 1849
William R. King July 11, 1850
13 William R. King March 4, 1853
David R. Atchison April 18, 1853
Jesse D. Bright Dec. 5, 1854
14 John C. Breckinridge March 4, 1857
15 Hannibal Hamlin March 4, 1861
16 Andrew Johnson March 4, 1865
Lafayette S. Foster April 15, 1865
Benjamin F. Wade March 2, 1867
17 Schuyler Colfax March 4, 1869
18 Henry Wilson March 4, 1873
Thomas W. Ferry Nov. 22, 1875
19 William A. Wheeler March 5, 1877
20 Chester A. Arthur March 4, 1881
Thomas F. Bayard Oct. 10, 1881
David Davis Oct. 13, 1881
George F. Edmunds March 3, 1883
21 Thomas A. Hendricks March 4, 1885
John Sherman Dec. 7, 1885
22 Levi P. Morton March 4, 1889
23 Adlai Stevenson March 4, 1893
24 Garret A. Hobart March 4, 1897
25 Theodore Roosevelt March 4, 1901
26 Charles W. Fairbanks March 4, 1905
27 James S. Sherman March 4, 1909
28 Thomas R. Marshall March 4, 1913
Thomas R. Marshall March 4, 1917
29 Calvin Coolidge March 4, 1921
SECRETARIES OF STATE
Name Residence Appointed
Thomas Jefferson Va. 1789
Edmund Randolph Va. 1794
Timothy Pickering Mass. 1795
John Marshall Va. 1800
James Madison Va. 1801
Robert Smith Md. 1809
James Monroe Va. 1811
John Quincy Adams Mass. 1817
Henry Clay Ky. 1825
Martin Van Buren N. Y. 1829
Edward Livingston La. 1831
Louis McLane Del. 1833
John Forsyth Ga. 1834
Daniel Webster Mass. 1841
Hugh S. Legaré S. C. 1843
Abel P. Upshur Va. 1843
John C. Calhoun S. C. 1844
James Buchanan Pa. 1845
John M. Clayton Del. 1849
Edward Everett Mass. 1852
William L. Marcy N. Y. 1853
Lewis Cass Mich. 1857
Jeremiah S. Black Pa. 1860
William H. Seward N. Y. 1861
Elihu B. Washburne Ill. 1869
Hamilton Fish N. Y. 1869
William M. Evarts N. Y. 1877
James G. Blaine Me. 1881
F. T. Frelinghuysen N. J. 1881
Thomas F. Bayard Del. 1885
John W. Foster Ind. 1892
Walter Q. Gresham Ill. 1893
Richard Olney Mass. 1895
John Sherman Ohio 1897
William R. Day Ohio 1898
John Hay Ohio 1898
Elihu Root N. Y. 1905
Robert Bacon N. Y. 1909
Philander C. Knox Pa. 1909
William J. Bryan Neb. 1913
Robert Lansing N. Y. 1915
Bainbridge Colby N. J. 1920
Charles E. Hughes N. Y. 1921
SECRETARIES OF THE TREASURY
Alexander Hamilton N. Y. 1789
Oliver Wolcott Conn. 1795
Samuel Dexter Mass. 1801
Albert Gallatin Pa. 1801
George W. Campbell Tenn. 1814
Alexander J. Dallas Pa. 1814
William H. Crawford Ga. 1816
William H. Crawford Ga. 1817
Richard Rush Pa. 1825
Samuel D. Ingham Pa. 1829
William J. Duane Pa. 1833
Roger B. Taney Md. 1833
Levi Woodbury N. H. 1834
Thomas Ewing Ohio 1841
Walter Forward Pa. 1841
John C. Spencer N. Y. 1843
George M. Bibb Ky. 1844
Robert J. Walker Miss. 1845
William M. Meredith Pa. 1849
Thomas Corwin Ohio 1850
James Guthrie Ky. 1853
Howell Cobb Ga. 1857
Philip F. Thomas Md. 1860
John A. Dix N. Y. 1861
Salmon P. Chase Ohio 1861
William P. Fessenden Me. 1864
Hugh McCulloch Ind. 1865
George S. Boutwell Mass. 1869
Wm. A. Richardson Mass. 1873
Benjamin H. Bristow Ky. 1874
Lot M. Morrill Me. 1876
William Windom Minn. 1881
Charles J. Folger N. Y. 1881
Walter Q. Gresham Ind. 1884
Daniel Manning N. Y. 1885
Charles S. Fairchild N. Y. 1887
Charles Foster Ohio 1891
John G. Carlisle Ky. 1893
Lyman J. Gage Ill. 1897
Leslie M. Shaw Iowa 1902
George B. Cortelyou N. Y. 1907
Franklin MacVeagh Ill. 1909
William G. McAdoo N. Y. 1913
Carter Glass Va. 1918
David F. Houston Mo. 1920
Andrew W. Mellon Pa. 1921
SECRETARIES OF WAR
Henry Knox Mass. 1789
James McHenry Md. 1796
Roger Griswold Conn. 1801
Henry Dearborn Mass. 1801
William Eustis Mass. 1809
John Armstrong N. Y. 1813
William H. Crawford Ga. 1815
Isaac Shelby Ky. 1817
Geo. Graham (ad int.) Va. 1817
James Barbour Va. 1825
Peter B. Porter N. Y. 1828
John H. Eaton Tenn. 1829
Lewis Cass Ohio 1831
Benjamin F. Butler N. Y. 1837
Joel R. Poinsett S. C. 1837
John Bell Tenn. 1841
John McLean Ohio 1841
James M. Porter Pa. 1843
William Wilkins Pa. 1844
George W. Crawford Ga. 1849
Edward Bates Mo. 1850
Charles M. Conrad La. 1850
Jefferson Davis Miss. 1853
John B. Floyd Va. 1857
Joseph Holt Ky. 1861
Simon Cameron Pa. 1861
Edwin M. Stanton Ohio 1862
U. S. Grant (ad int.) Ill. 1867
Lorenzo Thomas (ad int.) D. C. 1868
John M. Schofield N. Y. 1868
John A. Rawlins Ill. 1869
William T. Sherman Ohio 1869
William W. Belknap Iowa 1869
Alphonso Taft Ohio 1876
James Donald Cameron Pa. 1876
George W. McCrary Iowa 1877
Alexander Ramsey Minn. 1879
Robert T. Lincoln Ill. 1881
William C. Endicott Mass. 1885
Redfield Proctor Vt. 1889
Stephen B. Elkins W. Va. 1891
Daniel S. Lamont N. Y. 1893
Russell A. Alger Mich. 1897
William H. Taft Ohio 1904
Luke E. Wright Tenn. 1908
Jacob M. Dickinson Tenn. 1909
Henry L. Stimson N. Y. 1911
Lindley M. Garrison N. J. 1913
Newton D. Baker Ohio 1916
John W. Weeks Mass. 1921
SECRETARIES OF THE INTERIOR
James A. Pearce Md. 1850
Thos. M. T. McKennan Pa. 1850
Alexander H. H. Stuart Va. 1850
Robert McClelland Mich. 1853
Jacob Thompson Miss. 1857
Caleb B. Smith Ind. 1861
John P. Usher Ind. 1863
James Harlan Iowa 1865
Orville H. Browning Ill. 1866
Jacob D. Cox Ohio 1869
Columbus Delano Ohio 1870
Zachariah Chandler Mich. 1875
Carl Schurz Mo. 1877
Samuel J. Kirkwood Iowa 1881
Henry M. Teller Col. 1882
Lucius Q. Lamar Miss. 1885
William F. Vilas Wis. 1888
John W. Noble Mo. 1889
Hoke Smith Ga. 1893
David R. Francis Mo. 1896
Cornelius N. Bliss N. Y. 1897
Ethan A. Hitchcock Mo. 1899
James R. Garfield Ohio 1907
Richard A. Ballinger Wash. 1909
Walter L. Fisher Ill. 1911
Franklin K. Lane Cal. 1913
John B. Payne Va. 1920
Albert B. Fall N. M. 1921
SECRETARIES OF THE NAVY
George Cabot[4] Mass. 1798
Benjamin Stoddert Md. 1798
Jacob Crowninshield Mass. 1805
Paul Hamilton S. C. 1809
William Jones Pa. 1813
B. W. Crowninshield Mass. 1814
B. W. Crowninshield Mass. 1817
Smith Thompson N. Y. 1818
Samuel L. Southard N. J. 1823
John Branch N. C. 1829
Mahlon Dickerson N. J. 1834
Mahlon Dickerson N. J. 1837
James K. Paulding N. Y. 1838
George E. Badger N. C. 1841
David Henshaw Mass. 1843
Thomas W. Gilmer Va. 1844
John Y. Mason Va. 1844
George Bancroft Mass. 1845
William B. Preston Va. 1849
William A. Graham N. C. 1850
John P. Kennedy Md. 1852
James C. Dobbin N. C. 1853
Isaac Toucey Conn. 1857
Gideon Welles Conn. 1861
Adolph E. Borie Pa. 1869
George M. Robeson N. J. 1869
Richard W. Thompson Ind. 1877
Nathan Goff, Jr. W. V. 1881
William H. Hunt La. 1881
William E. Chandler N. H. 1882
William C. Whitney N. Y. 1885
Benjamin F. Tracy N. Y. 1889
Hilary A. Herbert Ala. 1893
John D. Long Mass. 1897
William H. Moody Mass. 1902
Paul Morton Ill. 1904
Charles J. Bonaparte Md. 1905
Victor H. Metcalf Cal. 1907
Truman H. Newberry Mich. 1908
George von L. Meyer Mass. 1909
Josephus Daniels N. C. 1913
Edwin C. Denby Mich. 1921
SECRETARIES OF AGRICULTURE
Norman J. Colman Mo. 1889
Jeremiah M. Rusk Wis. 1889
J. Sterling Morton Neb. 1893
James Wilson Iowa 1897
David F. Houston Mo. 1913
Edward T. Meredith Iowa 1920
Henry C. Wallace Iowa 1921
POSTMASTERS-GENERAL
Samuel Osgood Mass. 1789
Joseph Habersham Ga. 1795
Joseph Habersham Ga. 1801
Gideon Granger Conn. 1801
Gideon Granger Conn. 1809
Return J. Meigs, Jr. Ohio 1814
William T. Barry Ky. 1829
Amos Kendall Ky. 1835
John M. Niles Conn. 1840
Francis Granger N. Y. 1841
Charles A. Wickliffe Ky. 1841
Cave Johnson Tenn. 1845
Jacob Collamer Vt. 1849
Nathan K. Hall N. Y. 1850
Samuel D. Hubbard Conn. 1852
James Campbell Pa. 1853
Aaron V. Brown Tenn. 1857
Horatio King Me. 1861
Montgomery Blair Md. 1861
William Dennison Ohio 1864
Alexander W. Randall Wis. 1866
John A. J. Creswell Md. 1869
James W. Marshall Va. 1874
Marshall Jewell Conn. 1874
James N. Tyner Ind. 1876
David McK. Key Tenn. 1877
Horace Maynard Tenn. 1880
Thomas L. James N. Y. 1881
Timothy O. Howe Wis. 1881
Frank Hatton Iowa 1884
Don M. Dickinson Mich. 1888
John Wanamaker Pa. 1889
Wilson S. Bissell N. Y. 1893
William L. Wilson W. Va. 1895
James A. Gary Md. 1897
Charles Emory Smith Pa. 1898
Henry C. Payne Wis. 1901
Robert J. Wynne Pa. 1904
George von L. Meyer Mass. 1907
Frank H. Hitchcock Mass. 1909
Albert S. Burleson Tex. 1913
Will H. Hays Ind. 1921
The Postmaster-General was not considered a Cabinet officer until 1829.
ATTORNEYS-GENERAL
William Bradford Pa. 1794
Charles Lee Va. 1795
Theophilus Parsons Mass. 1801
Levi Lincoln Mass. 1801
John Breckinridge Ky. 1805
Caesar A. Rodney Del. 1807
William Pinkney Md. 1811
William Wirt Va. 1817
John McP. Berrien Ga. 1829
Benjamin F. Butler N. Y. 1833
Felix Grundy Tenn. 1838
Henry D. Gilpin Pa. 1840
John J. Crittenden Ky. 1841
John Nelson Md. 1843
Nathan Clifford Me. 1846
Reverdy Johnson Md. 1849
Caleb Cushing Mass. 1853
Edwin M. Stanton Ohio 1860
Titian J. Coffey (ad int.) Pa. 1863
James Speed Ky. 1864
Henry Stanbery Ohio 1866
Ebenezer R. Hoar Mass. 1869
Amos T. Akerman Ga. 1870
George H. Williams Ore. 1871
Edward Pierrepont N. Y. 1875
Charles Devens Mass. 1877
Wayne MacVeagh Pa. 1881
Benjamin H. Brewster Pa. 1881
Augustus H. Garland Ark. 1885
William H. H. Miller Ind. 1889
Judson Harmon Ohio 1895
Joseph McKenna Cal. 1897
John W. Griggs N. J. 1898
George W. Wickersham N. Y. 1909
James C. McReynolds Tenn. 1913
Thomas W. Gregory Tex. 1914
A. M. Palmer Pa. 1919
H. M. Daugherty Ohio 1921
SECRETARIES OF COMMERCE AND LABOR
George B. Cortelyou N. Y. 1903
Oscar S. Straus N. Y. 1907
Charles Nagel Mo. 1909
SECRETARIES OF COMMERCE
W. C. Redfield N. Y. 1913
J. W. Alexander Mo. 1919
Herbert Hoover Cal. 1921
SECRETARIES OF LABOR
William B. Wilson Pa. 1913
James J. Davis Pa. 1921
The Congress.—The whole legislative power is vested by the Constitution in a Congress, consisting of a Senate and House of Representatives. The Senate consists of two members from each State, chosen for six years, formerly by the State Legislatures, but, since the adoption of the 17th Amendment in 1913, by direct vote of the people. Senators must be not less than 30 years of age; must have been citizens of the United States for nine years; and be residents in the State from which they are chosen. Besides its legislative capacity the Senate is invested with the power of confirming or rejecting all appointments to office made by the President, and its members constitute a High Court of Impeachment. The judgment in the latter case extends only to the removal from office and disqualification. The House of Representatives has the sole power of impeachment. The House of Representatives is composed of members elected every second year, by the vote of all citizens over the age of 21 of the several States of the Union, who are qualified in accordance with the laws of their respective States. By the 15th Amendment to the Constitution, neither race nor color affects the right of citizens. The franchise is not absolutely universal; residence for at least one year in most States (in Michigan and Maine three months) is necessary; in some States the payment of taxes, in others registration.
For Judiciary, see Judiciary; Supreme Court.
History.—The territories now occupied by the United States of America, though they were probably visited on their N. E. coast by Norse navigators about the year 1000, continued in the sole possession of numerous tribes of Indians till the rediscovery of America by Columbus in 1492. In 1498 an English expedition, under the command of Sebastian Cabot, explored the E. coast of America, from Labrador to Virginia, and perhaps to Florida. In 1513 Juan Ponce de Leon landed near St. Augustine, Fla. In 1520 some Spanish vessels from San Domingo were driven upon the coast of Carolina. In 1521, by the conquests of Cortez and his followers, Mexico, including Texas, New Mexico and California, became a province of Spain. In 1539-1542 Ferdinand de Soto led a Spanish expedition from the coast of Florida across Alabama, and discovered the Mississippi river. In 1584-1585 Sir Walter Raleigh sent two expeditions to the coast of North Carolina and attempted to form settlements on Roanoke island. A Spanish settlement was made at St. Augustine, Fla., in 1565; Jamestown, Va., was settled in 1607; New York, then called New Netherlands, in 1613; Plymouth, Mass., in 1620. A large part of the country on the Great Lakes and on the Mississippi was explored by La Salle in 1682; and settlements were made by the French.
The first effort at a union of colonies was in 1643, when the settlements of Massachusetts Bay, Plymouth, Connecticut, and New Haven formed a confederacy for mutual defense against the French, Dutch, and Indians under the title of "The United Colonies of New England." In 1761 the enforcement of the Navigation Act against illegal traders, by general search warrants, caused a strong excitement against the English Government, especially in Boston. The British admiralty enforced the law; and many vessels were seized, and the colonial trade with the West Indies was annihilated. In 1765 the passing of an act of Parliament for collecting a colonial revenue by stamps caused general indignation, and led to riots. Patrick Henry, in the Virginia Assembly, denied the right of Parliament to tax America, and eloquently asserted the dogma, "No taxation without representation." The first impulse was to unite against a common danger; and the first Colonial Congress of 29 delegates, representing nine colonies, made a statement of grievances and a declaration of rights. In 1766 the Stamp Act was repealed, but the principle of colonial taxation was not abandoned. In 1770 the duties were repealed, excepting 3d. a pound on tea.
It was now a question of principle, and from N. to S. it was determined that this tax should not be paid. Some cargoes were stored in damp warehouses and spoiled; some were sent back, and in Boston a mob disguised as Indians threw the tea into the harbor. England then determined to enforce the Government of the crown and Parliament over the colonies, and a fleet with 10,000 troops was sent to America, which led to the battle of Lexington, and the beginning of the Revolutionary War, April 19, 1775. The news that the British troops had been compelled to beat a hasty retreat summoned 20,000 men to the vicinity of Boston. A Congress of the colonies assembled at Philadelphia, and appointed George Washington Commander-in-Chief of an army of 20,000 men. The battle of Bunker Hill was fought at Charlestown, June 17, 1775, between 1,500 Americans, who had hastily intrenched themselves, and 2,000 British soldiers. When the Americans had exhausted their ammunition they were ordered to retreat; but as they had only lost 115 killed, 305 wounded, and 32 prisoners, while the loss on the British side was at least 1,054, the encounter had all the moral effect of a victory. After a winter of great privations the British were compelled to evacuate Boston, carrying away in their fleet to Halifax 1,500 loyal families. An army of 55,000 men, including 17,000 German mercenaries (Hessians), was sent under the command of Sir William Howe to put down this "wicked rebellion."
On June 7, 1776, Richard Henry Lee, of Virginia, offered a resolution in Congress, declaring that the united colonies are, and ought to be, free and independent States; that they are absolved from all allegiance to the British crown; and that all political connection between them and the state of Great Britain is, and ought to be, totally dissolved. This resolution was adopted by the votes of 9 out of 13 colonies, and brought about the celebrated "Declaration of Independence," which on July 4, 1776, received the assent of the delegates of the colonies. They adopted the general title of the "United States of America," with a population of about 2,500,000. From the battle of Lexington, April 19, 1775, to the surrender of Yorktown, Oct. 19, 1781, in 24 engagements, including the surrender of two armies, the British losses in the field were not less than 25,000 men, while those of the Americans were about 8,000.
After the peace, concluded Sept. 3, 1783, the independence of the United States was acknowledged by foreign powers, and in 1787 the present Constitution was framed. George Washington and John Adams, standing at the head of the Federalist party, were elected President and Vice-President of the United States. The War of 1812 grew out of the fact that England declined to put a stop to the abuse of impressing American citizens into the British navy, the attention of Congress having been called to 6,000 instances in 1811. In 1814 the Federalists of New England held a convention at Hartford in opposition to the war and the administration of President Madison, and threatened a secession of the New England States, on the ground that they were already left to defend themselves. The war was terminated by the treaty of Ghent, Dec. 24, 1814, though the English suffered a disastrous defeat at New Orleans, Jan. 8, 1815, nearly a month after peace had been concluded between England and America.
At the period of the Revolution slavery existed in all the States except Massachusetts, but it had gradually been abolished in the Northern and Middle States, except Delaware, and excluded from the new States between the Ohio and the Mississippi rivers by the terms on which the territory had been surrendered by Virginia to the Union. The two sections had already entered on a struggle to maintain the balance of power against each other. After an exciting contest in 1820 Missouri was admitted with a resolution (the "Missouri Compromise") that in future no slave State should exist N. of the parallel of lat. 36° 30′ N. In 1826 two of the founders of the republic, John Adams and Thomas Jefferson, died on July 4, the anniversary of the Declaration of Independence. In 1832 an Indian war, called the Black Hawk War, broke out in Wisconsin; but the passing of a high protective tariff act by Congress caused a more serious trouble. The State of South Carolina declared the act unconstitutional.
A collision seemed imminent, when the affair was settled by a compromise bill, introduced by Henry Clay, providing for a gradual reduction of duties till 1843, when they should not exceed 20 per cent. ad valorem. In 1835 the Seminole War broke out in Florida, and a tribe of Indians, insignificant in numbers, under the crafty leadership of Osceola, kept up hostilities for years at a cost to the United States of several thousand men, and some $50,000,000.
In 1837 Martin Van Buren succeeded General Jackson in the presidency. His term was a stormy one, from the great financial crisis of 1837, which followed a period of currency expansion and wild speculation. All the banks suspended payment, and the great commercial cities threatened insurrection. In 1840 Gen. William H. Harrison was elected President, but died in 1841, a month after his inauguration. He was succeeded by John Tyler, during whose administration the N. E. boundary question, which nearly occasioned a war with England, was settled by Daniel Webster, Secretary of State, and Lord Ashburton. In 1845 Texas was formally annexed to the United States, and James K. Polk, of Tennessee, succeeded Mr. Tyler in the presidency. M. Almonte, the Mexican minister at Washington, protested against the annexation of Texas as an act of warlike aggression, which brought about the Mexican War in 1846.
In 1847 the Mexicans were defeated by General Taylor at Buena Vista; Vera Cruz was taken by storm, and General Scott won the great battle of Cerro Gordo. In 1848 peace was signed, and by the treaty of Guadalupe Hidalgo the United States obtained the cession of New Mexico and Upper California, the United States paying Mexico $15,000,000, and assuming the payment of the claims of American citizens against Mexico. In 1849 General Taylor, the "Rough and Ready" victor of Buena Vista, became President, with Millard Fillmore as Vice-President. In September of the same year California adopted a constitution which prohibited slavery. The election of Franklin Pierce in 1852 against General Scott was a triumph of the Democratic States' Rights and Southern party. A brutal assault on Charles Sumner, United States Senator from Massachusetts, by Preston Brooks, in consequence of a violent speech on Southern men and institutions, increased the excitement of both sections. In 1856 the Republicans, composed of the Northern Free-soil and Abolition parties, nominated John C. Fremont for the presidency, but James Buchanan, the Democratic candidate, received the election, with John C. Breckinridge as Vice-President. In Oct., 1859, John Brown, known in Kansas as "Ossawatomie Brown," who planned and led an expedition for freeing the negroes in Virginia, was captured, and executed Dec. 2, by the authorities of Virginia.
In 1860 the Southern delegates withdrew from the convention at Charleston, and two Democratic candidates were nominated, Stephen A. Douglas and John C. Breckinridge. The Republicans nominated Abraham Lincoln, and at the election of November, 1860, Mr. Lincoln received every Northern vote in the electoral college, except three of New Jersey, 180 votes. The South lost no time in acting on what her statesmen had declared would be the signal of their withdrawal from the Union. Four years of civil war ended in their being compelled to remain in it. In 1864 Mr. Lincoln was re-elected, and on March 4, 1865, commenced his second term, with Andrew Johnson as Vice-President. On April 14, 1865, while the North was rejoicing over the capture of Richmond and the surrender of the Confederate armies, the President was assassinated at a theater in Washington by John Wilkes Booth. The assassin was pursued and killed, and several of his accomplices were tried and executed. Andrew Johnson became President. Jefferson Davis, President of the Confederacy, fled after the surrender of Richmond; he was captured in Georgia, and released without trial in 1867.
An amendment to the Constitution, forever abolishing slavery in the States and Territories of the Union, was declared ratified by three-fourths of the States, Dec. 18, 1865. The vast change in the organization of the republic made by this new fundamental law was completed by the 14th and 15th Amendments, passed in 1868 and 1870, which gave to the former slaves all the rights and privileges of citizenship. The seceded States were readmitted to the Union on condition of their adhesion to the Constitution as thus amended. Owing to the reconstruction policy after the Civil War differences arose between President Johnson and the Republican leaders in both houses of Congress. This antagonism finally led to the resolution of the House of Representatives, passed Feb. 24, 1868, to impeach the President "of high crimes and misdemeanors." President Johnson, however, was acquitted, as the prosecution lacked one vote of the two-thirds vote necessary for conviction. Gen. Ulysses S. Grant was elected President in 1868, and inaugurated March 4, 1869, with Schuyler Colfax as Vice-President. He was re-elected in 1872, with Henry Wilson as Vice-President. The Geneva Court of Arbitration gave its decree in the "Alabama" controversy in favor of the United States in 1872, while the San Juan Boundary dispute with Great Britain was settled in favor of the United States by the Emperor of Germany in the same year. The outrages of a secret organization known as the Ku-Klux-Klan, in the Southern States, necessitated the passing of an act in 1871 giving cognizance of such offenses to the United States courts.
The year 1876, memorable in the annals of the republic as the 100th anniversary of the Declaration of Independence, was celebrated by a great Centennial Exhibition at Philadelphia. The presidential election of the same year was so closely contested that Congress appointed a special tribunal, selected from the Senate, the House of Representatives, and the justices of the Supreme Court, to examine the election returns. The decision was in favor of Rutherford B. Hayes, the Republican candidate, who was declared to have been elected President, and inaugurated March 5, 1877. In 1879 specie payments were resumed throughout the United States, after a suspension of 17 years. In 1880 the Republican National Convention at Chicago nominated Gen. James A. Garfield, of Ohio, and Chester A. Arthur, of New York, for President and Vice-President. The Democratic National Convention was held in Cincinnati, O., and Gen. Winfield S. Hancock and William H. English, of Indiana, were selected as candidates. The result of the election was in favor of the Republicans. General Garfield was inaugurated, March 4, 1881. On July 2, 1881, he was shot by a disappointed office seeker, Charles J. Guiteau, and after more than two months of suffering died from the effects of the wound at Elberon, N. J., Sept. 19, 1881. His loss was lamented by the whole nation. He was succeeded by Vice-President Chester A. Arthur, who served the remainder of the term.
In 1884 the Democratic party nominated Grover Cleveland and Thomas A. Hendricks for the presidency and vice-presidency, while the Republicans put up James G. Blaine and John A. Logan. The election resulted in the choice of Grover Cleveland and Thomas A. Hendricks, who were inaugurated March 4, 1885. The death of General Grant on July 23, 1885, was a notable event, and one that profoundly moved the whole nation. Mr. Hendricks died Nov. 25, 1885, and John Sherman, by virtue of his election as president pro tem. of the Senate, became his successor. Mr. Cleveland's administration was in the main uneventful, though the country was disturbed by widespread and obstinate conflicts between labor and capital. The silver coinage question, the reform of the civil service, the Mormon question, the labor problem, and the Pan-Electric controversy were the issues of the hour. The presidential campaign of 1888 had the tariff question for its main issue. Mr. Cleveland was renominated by the Democracy, with Allen G. Thurman for Vice-President, while Benjamin Harrison, of Indiana, grandson of the ninth President of the United States, and Levi P. Morton were nominated by the Republicans. The latter were elected, the electoral vote standing 233 to 168. In 1889 four new States were added to the Union, namely, Montana, North Dakota, South Dakota, and Washington, and the Territory of Oklahoma was carved out of the Indian Territory. In 1890 Wyoming and Idaho were admitted to statehood.
In 1892 Mr. Harrison was renominated by the Republicans for President, and Whitelaw Reid, of New York, for Vice-President. The Democrats nominated Mr. Cleveland for President, and Adlai E. Stevenson, of Illinois, for Vice-President. Cleveland and Stevenson were elected by an electoral vote of 277 for the ticket, against 145 for Harrison and Reid, and 22 for Weaver, the candidate of the People's party. The year 1893 was memorable for the monetary depression and hard times throughout the United States, and, to some extent, all over the world. Many thousands of men were out of employment; many financial institutions and business enterprises failed. Almost every form of security depreciated. A great railway strike, accompanied by great destruction of property and some loss of life, occurred on roads centering in Chicago; and others of less magnitude elsewhere. An army of unemployed men made a demonstration by marching across the country, subsisting on popular charity as they went, to the city of Washington, where they hoped to influence legislation by Congress, and action by the executive, to relieve the unemployed. This condition of things was popularly attributed to the administration, and to the Democratic tariff bill that had not yet been substituted for the McKinley bill, but was sure to be passed. As a consequence, in the State and Congressional elections of 1894 the Republicans obtained sweeping victories, and came into power in Congress. The administration was otherwise marked by its maintenance of friendly relations with Spain against the belligerent urgency of a large anti-Spanish party, friendly to Cuban independence; by the extension of the Civil Service; and by the Arbitration Treaty of 1897.
The presidential campaign of 1896 was an unusually exciting one, with seven tickets in the field: Republican, William McKinley and Garret A. Hobart; Democratic, William J. Bryan and Arthur Sewall; People's, William J. Bryan and Thomas E. Watson; Prohibition, Joshua Levering and Hale Johnson; National Democratic, John M. Palmer and Simon B. Buckner; Social Labor, Charles H. Matchett and Matthew Maguire; and National (Free-Silver Prohibition), Charles E. Bentley and James H. Southgate. In the election the Republican candidates received 7,104,779 popular and 271 electoral votes, and the fused Democratic and People's candidates 6,502,925 popular and 176 electoral votes. This campaign was characterized by a remarkable revolt in the Democratic party and a fusion of that party with the Populist. See Bryan, William Jennings; McKinley, William.
The great event of this administration was the war successfully waged by the United States against Spain in 1898; the freeing of Cuba from Spanish dominion; the acquisition by the United States, as a result of the war, of Porto Rico, the Philippine Islands, and Guam, and, by treaty, of Hawaii and the Samoan island of Tutuila; and the formation of a considerable party, known as Anti-Expansionists and Anti-Imperialists. The details of the war are given under Cuba; Manila Bay; Philippine Islands; Porto Rico; Santiago; Spanish-American War; and the various names of persons and places that became prominent in the war.
In the presidential campaign of 1900 there were eight tickets in the field: Republican, William McKinley and Theodore Roosevelt; Democratic, William J. Bryan and Adlai E. Stevenson; Prohibition, John G. Woolley and Henry B. Metcalf; Middle-of-the-Road or Anti-Fusion Peoples', Wharton Barker and Ignatius Donnelly; Social Democratic, Eugene V. Debs and Job Harriman; Social Labor, Joseph F. Malloney and Valentine Remmel; United Christian, J. F. R. Leonard and John G. Woolley; and Union Reform, Seth H. Ellis and Samuel T. Nicholas. The election gave the Republican candidates 7,208,224 popular and 292 electoral votes, and the Democratic candidates, 6,358,789 popular and 155 electoral votes. On Sept. 6, 1901, while attending the Pan-American Exposition in Buffalo, N. Y., President McKinley was shot twice by Leon Czolgosz, an anarchist, and died from his injuries on the 14th. Immediately thereafter Vice-President Roosevelt took the oath of office as President. In February-March, 1902, Prince Henry of Prussia, brother of the Emperor of Germany and an admiral in the German navy, visited the United States. In 1904 the Republican ticket, led by President McKinley's Vice-President, Mr. Roosevelt, was triumphantly elected with a popular majority of 2,500,000.
The administration of President Roosevelt was marked by the passage of many important measures through Congress. The Federal Government, under the guidance of the President, was especially active against combinations in restraint of trade, discriminations by railroads and the payment by them of rebates to favored shippers. As the result of an investigation carried on by various Government commissions, suits were brought against the Northern Securities Co., a holding company for the Great Northern and the Northern Pacific railroads, and this combination was declared illegal and was dissolved by the Supreme Court in 1904. The beef trust was prosecuted and declared illegal in the following year. During the 59th Congress, many important measures were passed along the lines indicated above. These included a bill for the regulation of railways, a rigid meat inspection law, and a pure food bill. Other measures provided for the establishment of the Bureau of Immigration, the restriction of Japanese immigration, and the passage of the Aldrich-Vreeland Act, making provision for a monetary commission.
The great fire and earthquake in San Francisco occurred in April, 1906. In the year previous, through the good offices of President Roosevelt, the meeting of the Russian and Japanese peace commissioners, at Portsmouth, N. H., resulted in a treaty of peace between the two countries, on September 5, 1905. The United States was obliged to intervene in Cuba owing to an insurrection in that country, and a provisional government was established on September 29, 1906. A customs treaty with Santo Domingo was ratified in 1907. Threatened friction with Japan over conditions in the Orient, especially in China, was averted by an agreement between Elihu Root, Secretary of State, and the Japanese Minister. This agreement provided for the continuance of the "open door" in China, and pledged both governments to consultation before these policies should be changed.
The President's aggressive attitude in favor of reform measures brought about sharp opposition, especially in the Senate, on the part of the leaders of the conservative element. This resulted in the ignoring by Congress of many of the policies advocated by the President.
President Roosevelt had plainly indicated that he favored William H. Taft, Secretary of War, as his successor. As a result of this support and the popular approval of Taft, he was easily nominated in the Republican National Convention. The Democrats nominated William J. Bryan for president and J. W. Kern of Indiana for vice-president. In the voting, Taft was elected by a popular vote of 7,690,006 to 6,409,106 for Bryan. Taft received 321 electoral votes, with 162 for Bryan.
The first action of Congress under the administration of President Taft was the revision of the tariff. Long consideration resulted in the passage, on August 5, 1909, of the Payne-Aldrich Law, which was approved by the President in spite of strong opposition.
A notable change in the rules of the House of Representatives was brought about in 1910 through a coalition of Democrats and insurgent Republicans. This resulted in depriving the Speaker of some of his most important powers. During this session of Congress, the most important measures were those for the establishment of a Commerce Court, for a postal savings bank system, the Mann "White Slave" Act, and a measure providing for limitation on contributions to campaign funds.
The progressive element of the Republican party had become greatly dissatisfied with President Taft's alleged reactionary stand on important measures, and this feeling was intensified when Theodore Roosevelt returned from a trip to Africa on March 10, 1910, and expressed himself strongly dissatisfied with President Taft's administration. As a result of these conditions, the Democrats in the election of 1910 carried the House of Representatives by a majority of 66 and increased their membership in the Senate. President Taft in 1911 attempted to bring about the passage of the Reciprocity Treaty with Canada. Congress, in special session, passed the bill on July 22. It was, however, rejected by Canada. Largely as a result of a scandal in the election of senators, a constitutional amendment providing for their direct election was submitted to the States in 1912 and was ratified in 1913. In the same year the States ratified the 16th amendment to the Constitution, which was submitted in 1909, granting authority to Congress to enact income tax laws. During the session of the 61st Congress, acts were passed for the government of the Panama Canal Zone, providing for the exemption from tolls of American ships engaged in coastwise trade. An act providing for civil government of Alaska; acts providing for New Mexico and Arizona as separate States; a measure creating the Department of Commerce; and an immigration law containing a literacy test, which, however, was vetoed by the President, were also passed.
Foreign relations during these years had many important phases. The forces of occupation were withdrawn from Cuba in 1909. In the same year long-standing differences with Venezuela were peacefully settled.
In 1910 Philander C. Knox, Secretary of State, proposed to various nations the establishment of a permanent court of arbitration at The Hague. At the same time treaties of arbitration were negotiated with the principal European countries. Many of these were signed in 1911. The conditions in Mexico from 1910 to 1913 provided difficult problems for President Taft. Large forces of American soldiers were detailed to control the border during and after the Madero revolution. While the administration was opposed to intervention, it sought to protect American interests and lives. In March, 1912, an embargo was placed on the shipment of arms across the border to Mexico. President Taft declined to recognize the government of President Huerta, which succeeded that of Madero in February, 1913.
There were three prominent Republican candidates for the Presidency in 1912. These were President Taft, Theodore Roosevelt, and Senator La Follette of Wisconsin. Mr. Roosevelt did not enter the campaign until Senator La Follette had withdrawn. Preferential primaries for presidential candidates were used in many States for the first time prior to the convention. At the National Convention held in Chicago, the contested seats were decided chiefly in favor of Taft delegates. Roosevelt supporters declared the decisions wrongly made and the greater part of them declined to take part in the balloting. President Taft was renominated on the first ballot and James S. Sherman was nominated for the vice-presidency. In the Democratic party there were also several strong candidates. These included Champ Clark, the Speaker of the House; Judson Harmon, Governor of Ohio; Woodrow Wilson, Governor of New Jersey; and Oscar W. Underwood, member of Congress from Alabama. At the convention held in Baltimore, there was a strong contest between the Conservatives, led by Alton B. Parker of New York, and the Progressives, led by William J. Bryan. Forty-six ballots were required for the nomination, and Woodrow Wilson was nominated on the forty-sixth ballot, largely through the personal support of Bryan. Thomas R. Marshall of Indiana was nominated for the vice-presidency.
Following the nomination of President Taft, President Roosevelt left the Republican party and organized another, called the Progressive party. In August, 1912, delegates of this party met in Chicago and nominated Theodore Roosevelt for president, and Hiram W. Johnson of California for vice-president. The campaign was one of the bitterest ever waged in the history of the country. In the election on November 5, Woodrow Wilson received 6,286,214 popular votes; Theodore Roosevelt, 4,126,020; William H. Taft, 3,483,922. Thus the split in the Republican party resulted in the election of Wilson. A remarkable feature of the voting was the increase in the strength of the Socialist party. This party nearly doubled the vote it had polled in 1908. The electoral vote was 435 for Wilson, 88 for Roosevelt, and 8 for Taft. The Democrats also secured the control of the House by a large majority, and of the Senate by 7 votes.
Shortly after his inauguration, President Wilson called Congress in special session to revise the tariff. He revived the custom of Washington and Adams and delivered his message to Congress in person. A new tariff act was at once drafted and was passed on October 3, 1913. The bill, in general, greatly reduced the duties, and the loss of revenue was made up by an income tax law which was made a part of the tariff law. Congress at this session also considered the problem of currency reform. This resulted in the establishment of the Federal Reserve System.
Foreign relations occupied the greater part of the attention of President Wilson during 1913. The President followed President Taft's action in refusing to recognize General Huerta as president, chiefly on account of the belief that he had connived at the murder of Madero. Relations with Japan also became serious as the result of the passage by the legislature of California of laws prohibiting the ownership of land by aliens who could not be naturalized. The Japanese Government made a strong protest and William J. Bryan, Secretary of State, went to California in an effort to induce the State Legislature to modify the law. In this he failed. An agreement was arrived at, however, between the United States and Japan, by the terms of which Japan promised to restrict immigration to the United States. The republic of China was recognized on May 2, 1913. Conditions in Nicaragua resulted in the establishment of a practical American protectorate in that country. In December, 1913, the President delivered a special message on the Mexican situation in which he declared that he saw no reason to "alter our policy of watchful waiting." A bill was passed by Congress providing for an emergency army of 240,000 men.
Congress passed many important measures, including the Interstate Trade Commission Bill on December 8, 1914, and the Clayton Anti-Trust Act on October 8, 1914. Largely through the efforts of the President, the Panama Canal tolls exemption was repealed by Congress, as the result of a protest made by Great Britain that the exemption violated the Hay-Pauncefote Treaty.
Early in 1914 the Mexican situation grew more acute. The President lifted the embargo on the shipment of arms into Mexico, and large amounts of munitions were purchased by the revolutionists opposing Huerta. On April 9, a number of American marines were arrested at Tampico by an officer of Huerta. Their surrender and an apology were demanded by Rear-Admiral Mayo, who also insisted on a salute of the United States flag. Huerta refused to yield to this demand, and on April 20 the President requested authority of Congress to employ the forces of the United States to exact reparation. There followed the bombardment and occupation of Vera Cruz, with a loss of 18 American marines. Americans were warned to leave Mexico. While plans were being made for actual hostilities, President Wilson accepted the offer of Argentina, Brazil and Chile to arbitrate the question at issue. The commissioners of these countries met at Niagara Falls. While they were in session, Huerta, having been defeated, resigned, and the government was relinquished to Carranza. The United States forces were withdrawn from Vera Cruz in November, 1914. A treaty was negotiated with Colombia by which the United States agreed to pay $25,000,000 to that country for the loss of Panama. This treaty was not ratified by the Senate. Eighteen peace and arbitration treaties were negotiated during this year.
United States in the World War.—The policy which was proclaimed by the United States at the outset of the World War was one of strict neutrality. For nearly three years, this attitude was officially maintained, often under circumstances of great difficulty. Both the Entente and the Central Powers, in their effort to gain a real or fancied advantage, violated the letter and spirit of international law, to the prejudice of our undoubted rights. The State Department was constantly busied with correspondence addressed to Great Britain and Germany, calling them to account for these violations, and demanding that the offending practices be abandoned. It was realized, however, that both of the warring powers were under great strain, and the diplomatic representations of this country were marked by patience and self-restraint. But from the beginning there was a difference between the injuries we suffered from the belligerents. Only property losses were incurred from the encroachments of Great Britain, as in the case of the British blockade regulations and the blacklist, while Germany's infractions of the law of nations involved the loss of American lives. Financial losses could have been made good at the end of the war; the loss of life was irreparable.
Apart from these direct grievances, the tide of popular feeling ran strongly against Germany, because of the violation of Belgian neutrality and the atrocities that marked her conduct of the war. This sentiment was heightened by the propaganda that had its center in the German Embassy at Washington and the ever increasing obstruction, arson and outrage in American plants, in the effort to hinder supplies from being shipped overseas. Moreover the utterances of responsible German statesmen as to German aims in the war created the impression that she was seeking the hegemony not only of Europe and Asia but of the world, and that if successful in Europe, the United States might be the next object of attack. It began to be felt that the cause of the Entente was the cause of freedom and of civilization.
That impression became a conviction, when the news came of the sinking of the "Lusitania," May 7, 1915. The details of that tragedy are narrated elsewhere (see Lusitania) and need only the barest mention here. This great ocean liner was torpedoed without warning off the Old Head of Kinsale on her journey from New York to Liverpool. She carried 1,257 passengers and a crew of 702. She sank in twenty-three minutes, carrying down 1,150, of whom 124 were Americans, including many women and children.
The nation was stunned by the shock. Then came a tremendous outburst of rage and grief, and for a while the country was perilously near the verge of war. It was not the first time that American lives had been lost through submarine operations. One American citizen had perished when the British liner, "Falaba," had been torpedoed and sunk March 28, 1915, off Milford, England. Two others had been killed when the American ship "Gulflight" was attacked off the Scilly Islands, May 1, 1915. These casualties, however, had been explained by the German Government as due to a mistake in the "Gulflight" case, while the "Falaba," it was charged, had tried to escape after having been summoned to stop. Reparation had been promised for the attack on the former. These instances had aroused American indignation, but the feeling occasioned by them was nothing compared to the horror evoked by the wholesale massacre of the "Lusitania's" passengers and crew.
A series of three notes was despatched to the German Government, the first bearing the date of May 13, 1915, declaring that the United States Government expected disavowal, reparation and immediate measures to prevent the repetition of the outrage. The reply of the German Foreign Secretary, Von Jagow, dated May 28, declared that the "Lusitania" was an auxiliary cruiser, that it had guns concealed beneath its decks, that it was transporting Canadian troops and munitions of war, and that the rapidity with which it sank was due to the explosion of the munitions carried. Further correspondence was invited. A second American note, despatched June 9, denied that the Lusitania had carried troops or was armed for offense, and asserted that "whatever be the other facts regarding the 'Lusitania,' the principal fact is that a great steamer, primarily and chiefly a conveyance for passengers and carrying more than a thousand souls that had no part or lot in the conduct of the war was torpedoed and sunk without so much as a challenge or a warning, and that men, women and children were sent to their death in circumstances unparalleled in modern warfare." The note called upon the German Government to adopt such principles in its submarine warfare as should safeguard American lives and American ships. The answer to this note was evasive and unsatisfactory. It elicited from the American Government a third and sharper note, which concluded with the phrase that "repetition by the commanders of German naval vessels of acts in contravention of American rights must be regarded by the Government of the United States as deliberately unfriendly." The last phrase was a diplomatic way of saying that war would follow.
Pending the interchange of these notes, the German Ambassador, Von Bernstorff, had offered on behalf of his Government to cease submarine warfare, provided that the United States secured certain concessions for Germany from England and should guarantee that vessels coming from American ports should carry no contraband of war. The United States Government refused thus to purchase immunity for its citizens.
The correspondence secured no satisfaction for the "Lusitania" massacre, and even during its continuance, similar attacks were made on the "Nebraskan," May 25, the "Orduna," July 9, while on Aug. 19, two American citizens were drowned in the sinking of the British steamer "Arabic." On Sept. 1, 1915, however, Count von Bernstorff informed Secretary Lansing that passenger liners would not thenceforth be sunk by German submarines without warning and without taking measures to assure the safety of non-combatants, on condition that the steamers would not try to escape or offer resistance. A message of the same tenor was received from Von Jagow on Sept. 21. On Oct. 5, the sinking of the "Arabic" was disavowed by Von Bernstorff in the name of his Government, which expressed regret, promised indemnity, and declared that orders had been issued to submarines that were so rigorous that the recurrence of such incidents was considered impossible.
The protests of the United States against lawless submarine attacks were not confined to Germany alone. The Italian liner "Ancona" was destroyed by an Austrian submarine in the Mediterranean Sea, Nov. 7, 1915. Of 507 persons on board, 308 were lost, of whom 9 were Americans. The submarine shelled the helpless passengers, as they were trying to get away in the lifeboats. Correspondence ensued with the Austrian Government, which finally, on Dec. 29, announced that the commander of the submarine had been punished, and promised, with some reservations, to indemnify the families of the victims.
Another item in the account with Austria was an attack by an Austrian submarine, Dec. 5, 1915, on the American oil steamer "Petrolite," off the coast of Tripoli. A sailor was wounded, and the submarine still kept on firing, even after the "Petrolite" had swung broadside to, so that the submarine commander could see her name printed on her side and the American flag flying at her mast. Stores were also taken from the vessel before she was allowed to proceed. Representations made by this Government were met by a flat denial of the facts. An attack was made on the British passenger liner "Persia" by a submarine in the Mediterranean southeast of Crete, Dec. 30, 1915. 335 lives were lost, including two Americans, of whom one was an American consul on his way to his post at Aden. The wake of the torpedo that destroyed the ship was clearly seen, but as the submarine itself was not visible, Germany, Austria, and Turkey denied responsibility.
The winter of 1915-1916 was comparatively quiet, but with the coming of spring there was a revival of submarine outrages. March 1, 1916, the French liner "Patria" with Americans on board was attacked without warning, but escaped. On March 9, a Norwegian ship, the "Silius" was sunk in Havre Roads, and one American in the crew was injured. The Dutch steamer "Tubantia" was torpedoed on the night of March 15, 1916, in the North Sea. Americans were on board but were saved. On March 18, the "Berwindvale" with four Americans on board was torpedoed off Bantry, Ireland, but no lives were lost.
A wanton attack, and one that provoked a new crisis, was that on the French Channel steamer "Sussex" on its way from Folkestone to Dieppe, March 24, 1916. Eighty passengers, including some Americans, were killed or wounded. This flagrant case brought this country to the very edge of hostilities. The German authorities declared that the "Sussex" must have struck a British mine. It was admitted that a long, black steamer was torpedoed in the Channel by one of their submarines, but it was declared that it was a British warship or mine layer. Irrefutable proofs were furnished by this Government of the falsity of these statements. On April 18, Secretary Lansing despatched a note to the German Government, which expressed regret that that Government did not understand the gravity of the situation resulting not only from the "Sussex" attack, but from the whole German method of submarine warfare. The note recalled Germany's promise to respect passenger ships, and asserted that the commanders of her submarines had violated that promise, with the result that the list of Americans who had thus lost their lives had been steadily lengthening until it had now reached 100. The patience of the United States Government was adverted to, and the note went on to say that it had now "become painfully evident that the position that the American Government took at the very outset had been justified, namely, that the use of submarines for the destruction of an enemy's commerce was, of necessity, because of the very character of the vessels employed and the very methods of attack which their employment of course involved, utterly incompatible with the principles of humanity, the long-established and incontrovertible rights of neutrals and the sacred immunities of non-combatants." At the end of the note, Germany was warned that if it was still her purpose to persist in prosecuting relentless and indiscriminate warfare, the American Government would have no choice but to sever diplomatic relations.
The German reply, though delayed until May 4, showed that the Imperial Government was beginning to recognize that American patience had nearly reached the breaking point. It still protested that many of the offenses charged against her commanders were due to mistakes, such as occurred in all wars, and it was also contended that the German submarine warfare was only a response to British violations of international law that virtually condemned millions of women and children to starvation. But a pregnant concession was made in the following announcement: "German naval forces have received this order: In accordance with the general principles of visit and search and the destruction of vessels recognized by international law, such vessels, both within and without the area declared a naval war zone, shall not be sunk without warning and without saving human lives, unless the ship attempt to escape or offer resistance." A loophole for escape from this categorical promise was left, however, in the expression of hope that the United States Government would forthwith secure from the British Government a stricter observance of the rules of international law and the statement that "should steps taken by the United States not obtain the object it desires, to have the laws of humanity followed by all the belligerent nations, the German Government would then be facing a new situation, in which it must reserve to itself complete liberty of decision."
In its reply, taking cognizance of the German promise, the United States Government was careful to disclaim any obligation to offer a quid pro quo for the concession. "In order to avoid any possible misunderstanding," the note declares, "the Government of the United States notifies the Imperial Government that it cannot for a moment entertain, much less discuss, a suggestion that respect by German naval authorities for the rights of citizens of the United States upon the high seas should in any way or in the slightest degree be made contingent upon the conduct of any other Government affecting the rights of neutrals and non-combatants. Responsibility in such matters is single, not joint; absolute, not relative."
For a time after this promise was given, it was generally respected, and it began to seem as if actual participation by America in the war might be avoided. Although from British sources came the statement that by Oct. 1, 15 vessels had been sunk without the warning that Germany had explicitly promised to give, the American State Department had no satisfactory evidence to support the statement.
On July 9, 1916, the German submarine "Deutschland," a commercial vessel, entered the port of Norfolk and proceeded to Baltimore. No attempt was made by this Government to discriminate between it and any other commercial ship in the matter of port facilities. This led to a remonstrance on the part of the Allied nations, directed to all neutral nations, but aimed primarily at the United States with the "Deutschland" incident in view. The Entente view was that the peculiar characteristics of submarines are such that they ought not to be allowed the same port privileges as other merchant vessels. Among these characteristics was the ability to dive, by which they could avoid control and identification, so that their character as neutral or belligerent, as naval or merchant vessel, could not be ascertained. The United States, however, refused to accept this reasoning as a rule of action.
Much less peaceful was the visit of the German war submarine, the "U-53," which unannounced entered Newport harbor, Oct. 7, 1916, and after delivering mail for the German Embassy, departed after a few hours' stay. Within the next two days, the "U-53" had sunk in swift succession one Dutch, one Norwegian and three British ships, within sight of the American coast. Legal warning was given in each case, and the crews permitted to escape, some of the latter being picked up by American destroyers in the vicinity. The bringing of the submarine war to this side of the ocean created considerable excitement, and the question was raised whether the action of the submarine did not constitute a blockade of the American coast and an infringement upon American rights. The matter, however, was permitted to stay in abeyance.
On Oct. 30 the British ship "Marina" was torpedoed while on her way to this country and six Americans of fifty who were on board were killed. Then came an attack upon the American steamer "Chemung" and that on the steamer "Russian" with a loss of 17 American lives. No adequate explanation was forthcoming.
While the two countries were by these occurrences being brought nearer the brink of war, efforts were being made by the President of the United States to find some common ground on which peace might be secured, or negotiations at least opened, between the Entente and the Central Powers. The occasion was offered by the German announcement on Dec. 12, 1916, that the Imperial Government was ready to enter into peace negotiations. The terms were couched, however, so much in the spirit of a victor magnanimously offering peace to the vanquished that they were emphatically, almost curtly, refused by all the Allied nations. Despite this refusal, the time (Dec. 18) seemed auspicious for the President of the greatest of the neutrals to act as mediator, although he stated that the plan had been conceived long before the issuance of Germany's offer. What the President sought to obtain was a concrete statement of terms, on which negotiations might be initiated. The crux of his note was contained in the passage: "The leaders of the several belligerents have stated those objects (i. e. of the war), in general terms. But stated in general terms, they seem the same on both sides. Never yet have the authoritative spokesmen on either side avowed the precise objects which would, if attained, satisfy them and their people that the war had been fought out. The world has been left to conjecture what definite results, what actual guarantees, what political or territorial changes or readjustments, what stage of military success even, would bring the war to an end."
The President's views found an echo in the United States Senate, which passed a resolution of approval. The note also created a profound sensation in the nations at war. By the peoples of the Central Powers it was in the main approved, largely because of the favorable military situation in which at the moment they found themselves. By the Allies, however, much of whose territory was occupied by German forces, the note was received without enthusiasm and in some quarters with thinly veiled resentment. The Allies nevertheless seized the occasion to present to the world a detailed statement of the principal aims they had in view in the war they were waging. Those aims are narrated in full in another place (see World War). To this statement, the German Government made rejoinder on Jan. 11, in a note which scouted the demands of the Entente, declared that Germany had made a sincere attempt to open negotiations for peace and placed all blame for the war's continuance upon the shoulders of its enemies. The net result of the President's effort was nil.
A significant episode in connection with the note was a statement issued by Secretary Lansing on Dec. 20, two days later. He stated that the note had been prompted by the fact that "we ourselves are drawing nearer the verge of war." This official statement created great alarm, so great indeed that the Secretary felt impelled later in the same day to explain away the indiscreet utterance. His efforts were only effectual in part, however, and uneasiness persisted. It was felt that more was going on behind the veil of diplomatic exchanges than had hitherto been suspected.
Undeterred by the failure of his first effort, the President again, on Jan. 22, took upon himself the rôle of mediator. This time it was in the form of an address before the Senate. The avowed object of the speech was to specify the conditions under which the United States might conceivably join a league to enforce peace throughout the world, but the real reason for its delivery was to bring the conflict then in being to an end. The effect of the speech, which in the main was admirable in spirit and form, was measurably diminished by the phrase "peace without victory," which aroused keen resentment among the Allied nations and met with marked disapproval on the part of a large body of influential opinion in America.
At this juncture came the announcement of Germany's determination to embark on ruthless submarine warfare—a most momentous announcement that spelled the doom of the German cause. It burst upon the neutral world with stunning effect. On Jan. 31, 1917, Von Bernstorff handed the text of the German note on submarine warfare to the American Secretary of State. At the same time an identical note was delivered to all the neutral governments. It stated that beginning on the following day, Feb. 1, all merchant ships bound to or from allied ports, found in a prohibited zone, would be sunk without warning. This revoked the promise that had been made to the United States in the "Sussex" case. The prohibited zone included the waters bordering France, England and Holland, and certain sections of the Mediterranean. The one exception allowed to the United States was that once a week she could despatch a ship to Falmouth, England, and have one sail from Falmouth to the United States, provided that the ship bore certain markings, followed a specified route and carried no contraband. The justification for the step was given in the statement that since the attempt to come to an understanding with the Entente Powers had been answered by them by the announcement of an intensified continuation of the war, the Imperial Government was compelled to continue the struggle for existence by the full employment of all weapons that lay in its power.
At the same hour that the note was handed to the neutral powers, the German Chancellor, Von Bethmann-Hollweg, in the Reichstag, amplified the substance of the note, explaining why he had previously opposed ruthless submarine war and the steps by which he in common with the German military authorities had come to determine upon its prosecution, and declared in conclusion that "in now deciding to employ the best and sharpest weapon, we are guided solely by a sober consideration of all the circumstances that come into question, and by a firm determination to help our people out of the distress and disgrace which our enemies contemplate for them."
The sensation produced by this determination was prodigious. It deepened the conviction that when the Secretary of State had let slip the statement, previously referred to, about this country's being "near the verge of war," he must have had some intimation, either from Ambassador Gerard in Berlin or from the chancelleries of the Allied nations, that ruthless warfare was contemplated. For some days following the delivery of the submarine note, the country was in a fever of excitement. No intimation was given as to what the President would do, although it was known, on Feb. 2, that he had reached a decision of some kind. On that date, he conferred with the Cabinet, and late in the afternoon consulted with a number of senators at the Capitol. Early on Feb. 3, he announced that he would on that day address both houses of Congress. Before he made the address, however, he informed Secretary of State Lansing that he had determined to break off diplomatic relations with Germany. At two o'clock that afternoon, he appeared before the joint gathering of Congress. Floors and galleries were packed with members and spectators, in a tense attitude of repressed excitement and expectation. The address lasted half an hour, and was listened to with the most profound attention.
The President reviewed the details of the "Sussex" case that had ended with the assurance of the German Government that it would not henceforth sink merchant ships without warning and without taking precautions for the safety of their passengers and crews. He recalled that this Government had threatened to break off diplomatic relations unless such promise should be given. That promise had now been broken, and the only course left that was consistent with the honor and dignity of the United States was to make good its threat. The President had therefore directed the Secretary of State to announce to the German Ambassador that all diplomatic relations between the two Governments were severed, to hand him his passports, and at the same time to recall the American Ambassador from Berlin.
In concluding, the President expressed the hope that, despite Germany's declaration, she would not actually embark upon ruthless submarine warfare, and stated that only actual overt acts on her part would make him believe it. If, however, this hope should prove unfounded, and if American ships and American lives should be destroyed by such acts on the part of her submarine commanders, in contravention of international law and the dictates of humanity, the President stated that he would again take the liberty of coming before Congress to ask of it authority to take whatever measures might be necessary to protect our seamen and our people in the prosecution of their legitimate errands on the high seas.
The speech received the immediate and hearty indorsement of the American people, regardless of party. It was felt that no other course could possibly be followed without the loss of national self-respect. There was no delusion as to what was implied in the breaking off of diplomatic relations. Almost invariably in modern times, such an act had been the prelude to war, and in the state of popular feeling there was little reason to think that this would prove an exception. But as between war and national degradation the nation had decided on its course.
Now that the decision had been actually reached there was no delaying or hesitation. In fact, at the precise moment that the President began his address, the German Ambassador received his passports from Secretary Lansing. Steps were instantly taken to secure for him a guarantee of safe conduct out of the country, and this was granted by Great Britain and France within forty-eight hours. The Scandinavian-American liner, "Frederick VIII.," was placed at his disposal; and on this vessel, accompanied by his suite and many German consuls and propagandists, he left the port of New York, Feb. 14, 1917.
It was a regrettable fact that similar courtesy was not extended to Ambassador Gerard at Berlin. In defiance of all the amenities that usually attend such departures, he was kept in the German capital for an indefensible length of time by something closely resembling force.
The official instructions from the United States Government did not reach Ambassador Gerard until Feb. 5, and immediately upon their receipt he asked the German Foreign Office for his passports. At the same time, he committed American interests to the legations of Spain and Holland. But although he was promised his passports they were not forthcoming, and he was subjected to a host of annoyances. His mail was withheld, his telephone service cut off and his telegrams were not sent. He was unable to communicate with United States consuls in Germany, and in fact, if not in name, was a prisoner, kept under constant surveillance. During this period, repeated attempts were made by the German authorities to secure a reaffirmation of the old treaty between the United States and Prussia, whose terms, it was thought, would safeguard the German ships in American ports. This, however, was emphatically refused by Mr. Gerard, as it was later by the American Government, when overtures were made to it directly. The Ambassador finally succeeded in leaving the German capital on Feb. 10, and reached the Swiss frontier the following afternoon. While the German Government had contemplated the possibility of diplomatic relations being severed with America, as the result of its pronouncement regarding ruthless warfare, there was no doubt that it had cherished the hope that such a step would not be taken. On Feb. 12, Secretary Lansing gave out a memorandum that had been presented to him by Dr. Paul Ritter, the Swiss Minister to this country, in whose care Von Bernstorff had left German interests. The memorandum intimated that the submarine order might be modified in favor of the United States, providing that the commercial blockade against England were not thereby affected. The American Government refused to discuss the matter, unless and until the German Government renewed the assurance given in the Sussex case, and acted upon that assurance. Chagrined at its failure, the German Government reiterated that unrestricted war against all vessels in the barred zones was in full swing and would under no circumstances be abandoned.
Coincident with the breaking off of diplomatic relations was the extensive sabotage carried out by the crews of German ships interned in the harbors of this country. There were 91 such ships, totaling 594,696 gross tons. Of these, 31 were in New York harbor, their value estimated at $29,000,000. During the three days from Jan. 31 to Feb. 2, parts of the engines of the ships were either destroyed or removed, so that in the event of their seizure by this Government they would be unavailable for cargo or passenger purposes for months. The precision and thoroughness with which this work was done indicated that it was the result of orders from the German Embassy or Government. Under international law, this could not be prevented, as long as war had not been declared, and the captains and crews were left in undisturbed possession of their vessels, the Government contenting itself with the establishment of armed guards on the piers at which the ships were moored, to prevent any attempt that might be made to sink them and thus obstruct navigation.
Other military precautions were taken. The public were forbidden access to navy yards and government buildings. Arsenals, bridges, subway entrances, aqueducts, reservoirs and government plants were placed under strict guard. The Panama Canal was carefully watched. The erection of a new fort was begun at Rockaway Point, in order to strengthen the defense of New York harbor. Legislative action was also taken looking toward preparedness. The House, on Feb. 12, passed the largest Naval Appropriation bill in the history of the nation, carrying over $368,000,000. The President was authorized to commandeer shipyards and munition plants in case of war or national emergency.
Almost immediately after the diplomatic break with Germany, the United States Government addressed a note to the other neutral nations, advising them of the act and the reasons that prompted it, and expressing the hope that they would see their way clear to taking similar action. None, however, went that far, though protests varying in force were sent by all of them to Germany.
Ruthless submarine warfare had been carried on with vigor for nearly four weeks, when, on Feb. 26, President Wilson addressed a joint session of Congress, and asked that he be given authority to supply guns and ammunition to American merchant ships, and employ any other instrumentalities that might be necessary to protect American citizens and interests on the high seas. He cited two recent cases in which American ships, the "Housatonic" and the "Lyman M. Law," had been sunk, and pointed out that the submarines were acting as an embargo on American trade. Even while he was proceeding to the Capitol to deliver his address, news came of another sinking to be added to the list, that of the Cunard liner "Laconia," in which American lives were lost. Immediately after the President's appeal, a bill was introduced in the House embodying most of his suggestions and, after a debate in which partisanship played no part, was passed, March 1. In the Senate, however, the bill failed to pass, although an overwhelming majority was in favor of it. A determined filibuster was organized by a small group, who, under the rules of the Senate, were able to prolong debate until the bill died automatically at the ending of the session on March 4.
The President appealed to the country, and the almost overwhelming response convinced him of the depth of the indignation that had been aroused by the action of the recalcitrant group of senators. On March 9, he issued a proclamation calling Congress to meet in special session on April 16. No purpose was specified, though it was intimated that the President wanted the support of Congress in any action he might find necessary to take for the public defense. At the same time a statement was issued from the White House that the President was convinced of his right to direct the arming of merchant ships by Presidential proclamation. This he did on March 12. On that date all members of the diplomatic corps in Washington were informed by Secretary Lansing that, in view of the course of the German Government in sinking ships without warning, the Government of the United States had determined to place upon all American merchant vessels, whose course lay through the barred zone, armed guards for the protection of vessels and lives. In the short space that intervened between the issuance of the proclamation and the actual declaration of war, the position of the United States was that of armed neutrality.
The indorsement of the President's action by the country at large was made the more emphatic because of a sensational episode growing out of the correspondence of the German Foreign Secretary with the German Minister to Mexico. A letter was published March 1 that was dated Jan. 19, 1917, and signed by Zimmermann, German Foreign Secretary. It told the German Minister, Von Eckhardt, that Germany intended on Feb. 1 to begin unrestricted submarine warfare, and that this might endanger relations with the United States. In that event, Von Eckhardt was directed to propose to Mexico that she and Germany make war and make peace together. Germany was to furnish financial support to Mexico, and the latter was to recover her "lost territory" in New Mexico, Arizona and Texas. The "details" of this program were to be left to Von Eckhardt. As if this large order were not enough, he was also to suggest that the President of Mexico communicate this plan to Japan and seek to secure the latter's adherence.
While the ingenuousness of the plan was not without its elements of humor, the publication of the letter hardened the determination of the United States to pursue the course it was treading, even if it should lead to war. The revelation of diplomatic clumsiness was particularly disconcerting to Germany and the pro-German elements in this country. The letter was denounced in some quarters as a patent forgery, but on March 3, Zimmermann himself acknowledged that it was genuine and sought to defend it. Mexico made haste to deny any implication in the matter and Japan denounced it as a "monstrous plot" that, if proposed to the Japanese Government, would not be entertained for an instant. These disclaimers, which in the case of Japan at any rate were unnecessary, were accepted by our Government, and interest in the matter was soon lost in the greater events that followed.
For the American Government had at last decided on war as the only solution consistent with American dignity and honor. Its patience had been exhausted and its people goaded to the utmost. The sinkings grew in volume, and it was evident that Germany had thrown discretion to the winds and was daring the American people to meet the issue. On March 2, the American steamship, "Algonquin," on its way from New York to London, was attacked by a submarine without warning and sunk, the crew being rescued later, after 27 hours in open boats. On March 18 three ships bearing the American flag were sunk off the English coast by submarines. These were the "City of Memphis," the "Illinois" and the "Vigilancia." Fifteen of the crew of the latter were lost.
On the day after this news was received many measures were taken by this Government that foreshadowed the coming conflict. Orders were given to speed up work on warships under construction; two classes of midshipmen were ordered to be graduated ahead of time; the eight-hour day for Government naval work was suspended; and arrangements were made for the issue of bonds for naval purposes. A long Cabinet session was held, at which it was decided that Congress should be called in session at an earlier date than that previously announced. On March 21 the President issued a call for Congress to meet on April 2, "to receive a communication by the Executive on grave questions of national policy which should immediately be taken under consideration." No one doubted that this sentence could be compressed into a single word—war.
The Sixty-fifth Congress convened in special session at noon on April 2. The President, escorted by a squadron of cavalry, reached the Capitol in the evening. At about 8.40, he began his address, after having been greeted with a tremendous ovation. He spoke for 36 minutes and was listened to with breathless attention. He recited the offenses of Germany against this Government, and recommended Congress to declare "the recent course of the Imperial German Government to be in fact nothing less than war against the Government and people of the United States" and that Congress "formally accept the status of belligerent that had thus been thrust upon it." A notable passage of the speech was that in which he defined the issue as one between autocracy and democracy. "The world must be made safe for democracy. Its peace must be planted upon the tested foundations of political liberty. We have no selfish ends to serve. We desire no conquest, no dominion. We seek no indemnities for ourselves, no material compensations for the sacrifices we shall freely make. We are but one of the champions of the rights of mankind."
At the conclusion of the President's address, he was wildly cheered, the whole audience rising to its feet and waving flags. Immediately after the President's withdrawal, both Houses assembled in separate session, and bills were introduced embodying the President's recommendations. On April 4, by a vote of 82 to 6, the war resolution was passed by the Senate. On April 6 it was passed by the House of Representatives by a vote of 373 to 50. At 1.18 p. m., it was signed by the President, thus making the United States and Germany officially at war. Simultaneously the President issued an address to the American people, announcing the existence of a state of war and prescribing rules for the behavior and treatment of enemy aliens.
The text of the Declaration of War was as follows:
Whereas, the Imperial German Government has committed repeated acts of war against the Government and the people of the United States of America; therefore be it
Resolved, by the Senate and House of Representatives of the United States of America in Congress assembled, That the state of war between the United States and the Imperial German Government, which has thus been thrust upon the United States, is hereby formally declared; and
That the President be, and he is hereby, authorized and directed to employ the entire naval and military forces of the United States and the resources of the Government to carry on war against the Imperial German Government; and to bring the conflict to a successful termination all the resources of the country are hereby pledged by the Congress of the United States.
The declaration was received by the nation without any outburst of hysterical excitement. Its coming had been too apparent to have in it any element of surprise. But except in some pacifist quarters, it was received with the heartiest approval and a whole-souled determination to bend every effort toward securing victory. It had been feared that riots would be instigated by some of the 10,000,000 citizens of Teutonic birth and sympathies, but although there were some minor disorders, less than 100 arrests in all were made. The Socialist party alone expressed formal opposition to the war, and lost a considerable part of its following in consequence. Rarely has a nation facing a great conflict been so united in spirit and purpose. It is true that the great body of the people failed to realize the great part that America was to take in the war. It was generally expected that our participation would be limited to the navy and to the furnishing of money, munitions and food. That we should be called upon to raise an army of 5,000,000 men, of whom more than 2,000,000 would be actually carried overseas, was probably believed by none. But even if it had been, there would have been no softening of the national purpose to prosecute the war to a successful termination.
By the nations of the Entente, the decision of the United States was received with the greatest relief and enthusiasm. They saw certain victory in the accession of so formidable an ally. By the neutral nations also, who had so many causes for grievance against Germany, the declaration was in general approved, though from motives of discretion their expressions were restrained. Some of them, however, deemed the action regrettable, because they had pinned their hopes to America's mediation in securing the world's peace.
Germany received the news with blended feelings. In many influential quarters there was a frank acknowledgment of the seriousness of the step that placed the richest and most powerful nation in the world on the side of her enemies. Others ridiculed the military power of this country, and predicted that our opposition would prove negligible. It was urged that redoubled efforts be made to crush the forces of the Entente, before America's help could be made available. It was freely prophesied that the submarines would prevent any American transport from landing troops in France. And even if this hope failed and American troops were brought into action, it was declared that they could never sustain the onset of German veterans.
Although there had been much complaint of the country's unpreparedness for war, prior to the declaration, there was no legitimate ground for criticism of the energy and resolution with which all departments of the Government began to function, immediately after the state of war became a fact. The instant the news was flashed from Washington, port officials everywhere, accompanied by detachments of Federal troops, seized all German ships that were lying in American harbors. There were 91 of these in all. With the exception of a German gunboat at Manila, which was blown up by its officers, all were taken possession of without serious incident. The crews were interned at stations on shore, and Government machinists were put at work repairing the damaged machinery of the vessels.
The radio system throughout the United States was also taken under Government control. Every wireless station, not only on this continent, but also in all our island possessions, was seized on April 6, in conformity with the order of President Wilson. Those that might be useful were retained in operation, but others were dismantled and suppressed. All amateur wireless plants were forbidden to function.
Barred zones were established about the entire coast line of the United States, varying in width from two to ten miles. Vessels were forbidden to enter ports at night, and their ingress and egress in the daytime were made to conform to strict rules that were enforced by an extensive coast patrol.
Hand in hand with these defensive measures went energetic preparation for offense. Even prior to the declaration of war, orders had been issued March 25-26 for the mobilization of 37 units—regiments and battalions—of the National Guard, ostensibly for the purpose of policing threatened points, but really to prepare for war. The 22,000 men who had been on border duty near Mexico, though they were due to be mustered out, were retained in the service. By April 1, more than 60,000 of the entire National Guard of 150,000 men were under arms, and the mobilization had outrun the equipment that was ready for them.
In the Navy, also, work was rushed with all possible speed. An executive order was issued, March 26, increasing the enlisted naval strength to 87,000 men. Ensigns were rushed from Annapolis three months before graduation. The marine corps was increased to 17,000 men. Retired officers were called back for bureau work, so that younger men might be released for active service. By June 6, American warships had arrived off the coast of France. Naval bases were established on both the French and English coasts as stations for American destroyers, co-operating with the Allied navies against German submarines. In addition, over 200 merchantmen had been provided with guns and crews to work them before the end of August.
Army work was necessarily slower, because of the magnitude of the demands of this arm of the service. The regular army had been recruited to its full authorized strength of 300,000 men by August 9. By August 5, the National Guard regiments had also swelled to their full strength of 300,000 men. The aggregate fighting strength of the two bodies was 650,000 men, many of whom had been well drilled, but most of whom had seen no actual fighting. And much of what they knew had to be promptly unlearned, in order to conform to the new tactics and strategy developed by the war.
By this time, the conviction had dawned upon the nation and its leaders that military operations must be participated in by American troops on a vastly greater scale than had been anticipated at the beginning. At first it had been thought possible to increase the armies to the required size by voluntary enlistments. But it soon became evident that other methods must be adopted, if America's intervention was to be prompt and effective.
Conscription had an unpleasant sound to American ears, but its necessity became so apparent that the Selective Draft Act, when it was approved on May 18, met with general approbation. The first application of the act resulted in the registration of over 9,500,000 young men on June 5, and the subsequent calling into service from this number of 687,000 on July 20. So energetically was the work carried on that by the end of August the men were streaming into the cantonments and army posts that had been selected as training grounds. Thirty-two great cantonments in various parts of the country were planned and built in record time, and great numbers of officers were being trained at Plattsburg and similar camps established for that purpose.
The legislative branch of the government made movements on so great a scale possible by liberal appropriations. Partisanship was laid aside, and both parties stood loyally behind the Executive in all action looking toward a successful prosecution of the war. On June 15 an appropriation bill carrying over $3,000,000,000 for army and navy purposes was signed by the President, and a little over a month later an appropriation of $640,000,000 was made for the aviation service. It was estimated by Secretary of the Treasury McAdoo on July 24 that $5,000,000,000, in addition to what had been already authorized, would be necessary to finance the war up to June 30, 1918. Taxation and the issue of bonds on an unprecedented scale were foreshadowed by this announcement, but the sacrifice was cheerfully made. The first Liberty Loan, which called for two billion dollars, was oversubscribed by more than a billion. The campaign for the loan opened May 2, and closed June 15, and its raising was attended by a spirit of enthusiasm and patriotism that showed how deeply the nation was stirred.
The enormously important economic feature of national defense was not overlooked. It was realized that this was a war of resources and that the nation that could hold out for "the last quarter of an hour" would win. A nation-wide system of activities was organized that enlisted the ablest business minds of the country in the Council of National Defense, which had as its official nucleus the members of the Cabinet. The Council was sub-divided into a number of committees, each headed by a recognized expert, and their work went on under the control and supervision of the various Government departments. Herbert C. Hoover, who had demonstrated his executive ability by his work in connection with the Belgian Relief Commission, was made the head of the Food Board, whose work was to mobilize the agricultural resources of the country, stimulate economy and production, prevent waste and assure an adequate food supply not only for civilians but for the army and navy, as well as to supplement the failing resources of the Allied nations. The operation of the railroads was put under the control of a railway board, in order to prevent freight congestion and send goods by the quickest and shortest routes. A committee on raw materials saw to it that the Government secured the requisite amount of copper, steel and other products. The Federal Shipping Board was authorized to build a fleet of wooden cargo ships, 1,000 in number and from 3,000 to 5,000 tons burden.
Important acts passed by Congress strengthened the hands of the Executive. The Espionage Act dealt with internal foes, with especial bearing on the activities of resident enemy aliens. Death or imprisonment was provided for convicted spies. Penalties were prescribed for any interference with commerce carried on with the Allied nations. More rigid restrictions were put on passports. The use of search warrants was extended. The Embargo Act provided for a system of licensing the transfer of commodities abroad, and was designed to prevent supplies being shipped to neutral ports which might get into the hands of Germany, either through deliberate design or through the natural channels of trade. The act was resented by neutrals, who feared that their legitimate needs might go unsupplied, but it was warmly welcomed by the Allies, who saw in this tightening of the blockade against Germany an effective means of shortening the war.
While the cause of the Allies was being strengthened by the accession of the United States, it was being weakened by the threatened collapse of Russia. That nation, whose great work in the early years of the war had done so much toward barring Germany's path to conquest, was threatening to withdraw from the conflict. The breakdown of the entire Eastern front was foreshadowed. The Czar had been overthrown, disintegrating forces were everywhere at work, and the former empire was in a welter of chaos and confusion. The serious results to the Allied cause of Russia's defection were apparent. The Central Powers, relieved of the necessity of fighting on two fronts, could concentrate on one. Rumania, deprived of Russia's support, would fall an easy prey to the German armies. A million men could be hurried across Germany to be hurled against the hard-pressed Allies in the west. Austria would be able to give her undivided attention to Italy. The war would be prolonged indefinitely, and immensely greater demands would be made on American blood and treasure than had been anticipated.
To prevent this calamity, it was thought advisable by the United States Government that a commission be sent to Russia to assure her of this country's sympathy and support, to urge her adherence to the cause of the Entente, and to promise help in developing her resources and re-establishing her transportation system, that had utterly broken down. The Commission was headed by Elihu Root, former Secretary of State, and comprised naval and military officials, practical railway men and representative citizens. It reached Petrograd June 13, 1917, and was received with respect, and in some quarters with cordiality. The aims of the Commission had been previously communicated to the Russian Government then in power by President Wilson. The work of the Commission was carried on with great energy and ability, and by July 10 Mr. Root was so encouraged that he declared that it had accomplished what it had gone to Russia to do and that it had found "no organic or incurable malady in the Russian democracy." This same view was held by him when the Commission returned to the United States and made its report to Washington on Aug. 12. Events, however, showed that he had been too optimistic. Russia passed from democracy to Bolshevism and withdrew from the war. Still, the Root Commission had a real value in deferring, if it could not prevent, the Russian collapse.
Military preparations went on with increased energy as the signs of Russian weakening multiplied. On Aug. 14, President Wilson sent to the Senate for confirmation the names of 37 major-generals and 147 brigadier-generals, whom he had appointed as officers in the National Army. Radical changes were made in army organization to embody the lessons learned by the Allies in three years of war. The ratio of artillery strength to infantry was greatly increased. It was ordered that there should be three regiments of field artillery to every four regiments of infantry, instead of the former ratio of three to nine. The machine-gun arm was also materially enlarged. The one regiment of cavalry, which was previously a unit in every division, was abolished, as cavalry had been shown to be a comparatively unimportant factor in the war, except in the Near East. Many new services were provided for, such as gas and flame service, forestry regiments, trench-mortar, anti-aircraft and chemical units demanded by the exigencies of this greatest of all wars.
A notable episode and one that symbolized to the world the actual entry of America into active warfare was the arrival in Europe, June 8, 1917, of Major-General John J. Pershing, who had been chosen as Commander-in-Chief of the American Expeditionary Forces abroad. He was accompanied by his staff of 53 officers and 146 men. He received an enthusiastic greeting in London and a thrilling welcome in France, where he was looked upon as the leader of a coming army of 20th Century Crusaders. He visited the tomb of Napoleon and laid a wreath on the tomb of Lafayette. Long conferences were held with the military authorities regarding American participation in the conflict. It was announced that General Pershing would determine where the American expedition should be sent, and that his decision would be final. He was to be an independent commander, in absolute control of his own forces, but co-operating with the British and the French. This arrangement continued in force until, as will be narrated later, General, afterward Marshal, Ferdinand Foch was made Generalissimo of the Allied forces, March 28, 1918.
The first units of the United States army that were to fight abroad reached a French port on June 26 and 27. They had been despatched in compliance with a Presidential order of May 18. They received a magnificent welcome from enormous crowds while bands played the "Star Spangled Banner" and the "Marseillaise." The detachment was under the command of Major-General William L. Sibert. They and the troops that soon began to follow in an ever increasing stream were placed in French camps behind the firing line, where they were given intensive training by war veterans of the French and British armies. After this training was completed, they were placed in the trenches on comparatively quiet sectors near Toul and in Lorraine. The Germans soon learned of their presence, and subjected them to artillery fire, gas attacks and bombs dropped from airplanes. The Americans, in conjunction with the French, took part in trench raids and minor operations, and soon a growing casualty list gave warning of the sacrifice of life that was to be demanded of America before victory could be achieved.
The pressing need of shipping to transfer men and supplies to France was met in several ways. By Jan. 29, 1918, it was announced that the damage done by the crews to the seized German ships had been repaired, thus making available a tonnage of over 600,000. By an agreement with Japan and some of the neutral nations, 400,000 more tons were added to the total. On March 14, the United States and Great Britain announced their intention of seizing over 600,000 tons of Dutch shipping that was lying in their harbors, making compensation for them at the end of the war, in the meantime supplying food and fuel to Holland. This action was protested by the Dutch Government, though it was strictly in accordance with the principles of international law, and was duly carried out.
It was stated in Washington on Nov. 7, 1917, that the army at that time was 1,800,000 strong. A movement was set on foot to classify the 9,000,000 registrants under the first draft, putting into Class I those who were unmarried or without dependents, and making them the first ones subject to the nation's call. It was believed that by this method, 2,000,000 more men would be made almost immediately available for service.
Notable among the non-military events shortly following the advent of America as a combatant had been the Pope's appeal for peace. This was made public in this country on Aug. 16, 1917. The letter was couched in a benevolent form, and was received with respect because of the position held by the author and the lofty sentiments that inspired it. Pope Benedict, after deploring the horrors of the conflict, suggested as a basis of settlement a decrease of armaments, the freedom of the seas, no indemnity, the evacuation of Belgium, and the restitution of the German colonies. While the appeal was addressed to all the belligerents, the answer of the Entente was embodied in a reply to the letter made by President Wilson on Aug. 27. He pointed out that the Pontiff's proposal practically involved a return to the status quo ante. This, in view of Germany's unrepentance and continuing ambition, would only give that Government time for a recuperation of its strength and renewal of the attack upon civilization. He declared that "we cannot take the word of the present rulers of Germany as a guarantee of anything that will endure unless explicitly supported by such conclusive evidence of the will and purpose of the German people themselves as the other peoples of the world would be justified in accepting."
The answer was approved heartily by all the nations of the Entente. By the German Government and press it was bitterly denounced as an attempt to drive a wedge between the Government and the people. The replies of the German Powers to the Pope, while sympathetic, were non-committal, and the intervention had no result.
The alertness of the American Secret Service, which had previously caused Germany such discomfiture by the publication of the Zimmermann note, was illustrated anew on Sept. 8, 1917, by the giving to the world of certain telegrams that had been sent in cipher to the Berlin Foreign Office by the German Chargé d'Affaires at Buenos Aires, Argentina. As a demonstration of perfidy and heartlessness, the disclosure created an immense sensation. The most notorious of the telegrams was dated May 19, 1917, and read:
"This Government has now released German and Austrian ships on which hitherto a guard had been placed. In consequence of the settlement of the Monte (Protegido) case there has been a great change in public feeling. Government will in future only clear Argentine ships as far as Las Palmas. I beg that the small steamers 'Oran' and 'Guazo,' 31st of January, 300 tons, which are now nearing Bordeaux with a view to change the flag, may be spared if possible or else sunk without a trace being left (spurlos versenkt). Luxburg."
Other despatches described the Argentine Acting Minister for Foreign Affairs as a "notorious ass and Anglophile." But it was the "spurlos versenkt" cipher, recommending the butchery if necessary of helpless crews so that their fate might never be known, that stirred the world with indignation. In Argentina the feeling was exceedingly bitter and German shops were wrecked and newspaper offices burned. Luxburg was promptly given his passports by the Argentine Government.
America was chiefly concerned, however, by the fact that the Swedish Legation at Buenos Aires had allowed itself to be used for the transmission of the despatches. This was regarded as a serious breach of neutrality. The Swedish people themselves severely criticised their Government in the matter. The Swedish Government, on Sept. 15, announced that no further messages of any sort would be forwarded for Germany from any point. The German Government on Sept. 17 expressed "keen regret" for the embarrassment that had been caused Sweden by the incident.
On Dec. 7, 1917, the United States declared war on Austria-Hungary. The resolution declaring that a state of war existed between the two countries was passed in the Senate by a unanimous vote and in the House by 365 to 1, the single negative vote being cast by a New York Socialist member. The joint resolution, after declaring that the Imperial and Royal Austro-Hungarian Government had committed repeated acts of war against the Government and the people of the United States of America, followed closely in its phrasing the declaration against Germany. The action was largely formal, for it merely stated what had been actually the fact for months, and involved no special changes in our naval or military preparations. As regards internal relations, the same policy was adopted toward resident Austrian aliens and their property and ships as had previously been pursued toward Germans.
The question naturally arose why war was not declared at the same time on Turkey and Bulgaria, who were Allies of Austria and Germany. Several reasons for the omission were given semi-officially by Government spokesmen. It was stated that Turkish citizens and interests in the United States were so few as to be negligible, and that on the other hand American citizens were numerous in Turkey and had considerable business interests and property that might be endangered by a war declaration. Moreover, the staunchness of Turkey's fidelity to the cause of the Central Powers was questioned, and it was thought that she might be induced to conclude a separate peace. In Bulgaria's case it was pointed out that her interest in the war was largely local, and that in defiance of Germany's command she had refused to break off diplomatic relations with this country. Whatever action the Government might take in the matter was after all academic. Germany was the real enemy, and if she were conquered, her allies would be compelled also to submit.
Perhaps the most important statement of war aims in the whole course of the conflict was that made by President Wilson in a memorable address to Congress, Jan. 8, 1918. It was in this that he stated the famous "Fourteen Points" about which discussion ranged from then until and after the close of the war. They are of such historic importance that they are here subjoined in full:
"The program of the world's peace, therefore, is our program, and that program, the only possible program, as we see it, is this:
I. Open covenants of peace, openly arrived at, after which there shall be no private international understandings of any kind, but diplomacy shall proceed always frankly and in the public view.
II. Absolute freedom of navigation upon the seas, outside territorial waters, alike in peace and in war, except as the seas may be closed in whole or in part by international action for the enforcement of international covenants.
III. The removal, so far as possible, of all economic barriers and the establishment of an equality of trade conditions among all the nations consenting to the peace and associating themselves for its maintenance.
IV. Adequate guarantees given and taken that national armaments will be reduced to the lowest point consistent with domestic safety.
V. A free, open-minded, and absolutely impartial adjustment of all colonial claims, based upon a strict observance of the principle that in determining all such questions of sovereignty the interests of the populations concerned must have equal weight with the equitable claims of the Government whose title is to be determined.
VI. The evacuation of all Russian territory and such a settlement of all questions affecting Russia as will secure the best and freest co-operation of the other nations of the world in obtaining for her an unhampered and unembarrassed opportunity for the independent determination of her own political development and national policy and assure her of a sincere welcome into the society of free nations under institutions of her own choosing; and, more than a welcome, assistance also of every kind that she may need and may herself desire. The treatment accorded Russia by her sister nations in the months to come will be the acid test of their good-will, of their comprehension of her needs as distinguished from their own interests, and of their intelligent and unselfish sympathy.
VII. Belgium, the whole world will agree, must be evacuated and restored, without any attempt to limit the sovereignty which she enjoys in common with all other free nations. No other single act will serve as this will serve to restore confidence among the nations in the laws which they have themselves set and determined for the government of their relations with one another. Without this healing act the whole structure and validity of international law is forever impaired.
VIII. All French territory should be freed and the invaded portions restored, and the wrong done to France by Prussia in 1871 in the matter of Alsace-Lorraine, which has unsettled the peace of the world for nearly fifty years, should be righted in order that peace may once more be made secure in the interest of all.
IX. A readjustment of the frontiers of Italy should be effected along clearly recognizable lines of nationality.
X. The peoples of Austria-Hungary, whose place among the nations we wish to see safeguarded and assured, should be accorded the freest opportunity of autonomous development.
XI. Rumania, Serbia, and Montenegro should be evacuated, occupied territories restored, Serbia accorded free and secure access to the sea, and the relations of the several Balkan states to one another determined by friendly counsel along historically established lines of allegiance and nationality, and international guarantees of the political and economic independence and territorial integrity of the several Balkan states should be entered into.
XII. The Turkish portions of the present Ottoman Empire should be assured a secure sovereignty, but the other nationalities which are now under Turkish rule should be assured an undoubted security of life and an absolutely unmolested opportunity of autonomous development, and the Dardanelles should be permanently opened as a free passage to the ships and commerce of all nations under international guarantees.
XIII. An independent Polish state should be erected which should include the territories inhabited by indisputably Polish populations, which should be assured a free and secure access to the sea, and whose political and economic independence and territorial integrity should be guaranteed by international covenant.
XIV. A general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike."
A reply to this statement of principles, that in the view of the United States must serve as the only possible basis of peace, was made by the German Chancellor, Von Hertling, in the Reichstag on Jan. 24, 1918. The speech was evasive on many points. He flatly refused to consider the cession of Alsace-Lorraine. He accepted the President's views in regard to secret diplomatic agreements. The question of disarmament he agreed was discussable. He favored the freedom of the seas with the elimination of the naval bases of England at Gibraltar, Malta and other points. Belgium, he declared, was regarded by Germany as "a pawn"—an unfortunate expression that he afterward strove lamely to explain away—and the question could be settled at the Peace Conference. Other questions, he alleged, concerned Germany's allies and could only be settled after consultation with them.
On behalf of Austria-Hungary, Count Czernin on the same day answered the President's speech, in an address before the Austrian Parliament. His tone was more friendly and his concessions more unreserved than those of the German Chancellor, but he went no further than his ally in definite promises, except in the case of Poland.
On Feb. 11, President Wilson again addressed Congress in what was practically a reply to Von Hertling and Czernin. He declared that the method proposed by the former was that of the discredited Congress of Vienna, and that the German Chancellor in his thought was living in a world that was past and gone. Czernin, the President agreed, saw more clearly, and doubtless would have gone further yet in the way of concession, had he not been bound to silence by the interests of his allies.
Once more the President sought to state in more compact form—in four points this time instead of fourteen—what he regarded as the fundamental conditions of durable peace.
First—That each part of the final settlement must be based upon the essential justice of that particular case and upon such adjustments as are most likely to bring a peace that will be permanent.
Second—That peoples and provinces are not to be bartered about from sovereignty to sovereignty as if they were mere chattels and pawns in the game, even the great game, now forever discredited, of the balance of power; but that
Third—Every territorial settlement involved in this war must be made in the interest and for the benefit of the populations concerned, and not as a part of any mere adjustment or compromise of claims among rival states; and,
Fourth—That all well-defined national aspirations shall be accorded the utmost satisfaction that can be accorded them without introducing new or perpetuating old elements of discord and antagonism that would be likely in time to break the peace of Europe and consequently of the world.
Whatever expectation might have been entertained that this restatement of principles would elicit a reply that would bring peace appreciably nearer was doomed to disappointment. The iniquitous treaty of Brest-Litovsk with Russia had put that country definitely out of the war, and had released vast forces that could now be used in a savage onslaught on the western front. The German war party was in the saddle, and felt that it had a chance to dictate peace instead of negotiating it. Enormous preparations were made for the great drive at the opening of the Spring campaign, which Germany confidently expected would bring a victorious end to the war, and all thought of further peace parleys was abandoned.
Though the loss of American lives at this stage had not reached considerable proportions, America was feeling the economic strain caused by the necessity of having to send food and fuel to the Allies. An order was issued by Fuel Administrator Garfield on Jan. 16, 1918, providing for a series of "heatless" days in all parts of the country east of the Mississippi river from Jan. 18 to 22 inclusive and on each following Monday from Jan. 28 to March 25 inclusive. This did not apply to private dwellings, but to manufacturing plants, business offices, theaters, and the like, with certain stated exceptions. The order was criticized in some quarters as needless, but it had the endorsement of the President and was generally obeyed.
The tremendous demands upon the railroads in the matter of transporting troops and supplies led to a condition of congestion and paralysis that caused the Government on Dec. 26, 1917, to assume full control of all the railroad systems in the country. This represented 260,000 miles and a property investment of $17,500,000,000 while 1,600,000 employes were required for operation. Steps were at once taken by the Government to unify competing lines into one general system to prevent reduplications, to supply equipment that might be lacking, and to bill all freight by the shortest and quickest routes. The control of the vast system was vested in Secretary of the Treasury McAdoo. The property rights of stockholders were to be protected. The action was generally approved by the country as a measure imperatively necessary for the prosecution of the war.
Criticism was not lacking, however, of some features of the work of the Administration. A severe attack was made on the conduct of the War Department by Senator Chamberlain, Chairman of the Military Committee of the Senate, who declared in a public speech that the military establishment had broken down and that there was inefficiency in every Government bureau and department. His Committee the next day introduced into the Senate a bill to create a Minister of Munitions and to establish a special War Cabinet of three, which should have complete charge of war operations.
The charges were promptly denied by Secretary Baker, who was also warmly defended by the President. The Secretary at his own request was given an opportunity to appear before the Senate Military Committee on Jan. 28 and reply to the criticisms leveled at his department. He admitted that there had been delays, mistakes and false starts, but asserted that these were only the inevitable accompaniments of work prosecuted on such a colossal scale, and that in general the accomplishments of the Administration deserved praise rather than rebuke. He explained the delay in furnishing rifles and ordnance, and to the charges of hospital neglect, replied that in an army of over a million men only eighty complaints had been made of neglect or abuse. All defects and shortcomings, he declared, were being remedied as rapidly as possible.
Whatever may have been the merits of the case, the criticism resulted in a quickening of Government effort and a thorough reorganization of the War Department. An order was issued by Secretary Baker on Feb. 10, 1918, directing the establishment of five divisions of the General Staff as follows:
1. An Executive Division under an executive assistant to the Chief of Staff.
2. A War Plans Division under a Director.
3. A Purchase and Supply Division under a Director.
4. A Storage and Traffic Division under a Director.
5. An Army Operations Division under a Director.
The authority of the Chief of Staff was emphasized, and it was believed that this concentration of authority would result in greatly increased efficiency. The new organization began functioning at once, and the results speedily became apparent in the more rapid movement of troops and supplies to France.
A serious disaster to the naval arm of the service occurred Feb. 5, 1918, when the British steamship, "Tuscania," which was engaged as a transport in carrying United States troops to France, was torpedoed by a German submarine. The attack took place off the north coast of Ireland. There were 2,179 American soldiers on board, and of these nearly two hundred lost their lives.
The vital problem of financing the war for ourselves and in large part for our Allies continued to be met by the issue of loans. The second Liberty Loan closed on Oct. 27, 1917, and amounted to $4,617,532,300. As the amount asked for was three billions, this represented an oversubscription of 54 per cent. The total of subscribers was 9,400,000. This number of buyers, vast as it was, was exceeded by those for the third loan which closed on May 4, 1918, with a subscription of over four billions, a billion more than was requested. On this occasion the buyers exceeded 17,000,000. The results were exceedingly gratifying, not only because of the amounts secured, but because of the popular determination to win the war evinced by the widespread distribution of the loan.
The treaties of Germany with dispirited Russia at Brest-Litovsk and with vanquished Rumania at Bucharest had revealed anew the cynicism of the German Government, and the threat that was held out to all the free peoples of the world, if the war should result in final German triumph. President Wilson, in an address on the treaties delivered at Baltimore, April 6, 1918, reviewed the events that led up to and followed them, and gave utterance to the "Force to the utmost" phrase which stirred the nation like a clarion call.
"Germany has once more said that force and force alone shall decide whether justice and peace shall reign in the affairs of men, whether right as America conceives it or dominion as she conceives it shall determine the destinies of mankind. There is therefore but one response possible for us: Force, force to the utmost, force without stint or limit, the righteous and triumphant force which shall make right the law of the world and cast every selfish dominion down in the dust."
The real baptism of fire for the American troops in France was now beginning. Hitherto there had been scattered and comparatively small actions, which had, however, demonstrated American pluck and mettle. American army engineers working on the British railways near Gouzeaucourt, on Nov. 30, 1917, had been caught in the swirl of an unexpected German attack. They had dropped their picks and shovels, grasped rifles wherever they could find them, and fought side by side with the British repelling the assault. The French communique rendered "warm praise to the coolness, courage and discipline of these improvised combatants."
The first action of note, although still only a minor operation, in which Americans took part was the affair at Seicheprey in the Toul sector, April 20, 1918. A force of Germans numbering about 1,500, of whom a considerable portion were shock troops, launched itself against the American trenches on a one-mile front. The attack had been preceded by a heavy bombardment. Gas was used as well as shells. The force of the attack carried the Germans into the first line of defense and the village of Seicheprey. There was fierce hand to hand fighting, but that same day the Americans regained most of the captured ground and the following morning completed the work and re-established their lines. Our losses were between 200 and 300 while the enemy's losses were much heavier.
Ten days later the Americans were called upon to repel another heavy assault at Villers-Bretonneux. After a heavy bombardment at 5 o'clock in the afternoon, a wave of the enemy swept forward, but after intense hand to hand fighting it was repelled and driven back, the Germans leaving their dead and wounded behind them.
Nor should the courage of about three hundred engineers who "held the gap" with Carey be overlooked. It was just after the opening of the great German drive of March 21, which swept everything before it for the first few days and threatened an overwhelming disaster to the Allied arms. The road to Amiens lay open through a breach that had opened up between the British 3d and 5th armies. Gen. Sandeman Carey was commissioned to hold the gap and he did it with a nondescript army made up of laborers, telegraph linemen and any others whom he could get together. The 300 American engineers joined in, and for days against desperate odds held the breach, until it could be closed definitely by the arrival of regular troops.
In the meantime, a momentous action had been taken—so momentous in fact that in all probability it decided the fate of the war. This was the appointment of General Ferdinand Foch to be Generalissimo of the Allied armies. The Allies had been hampered throughout the conflict by the various armies representing the Entente being under the control of their own generals. This led inevitably to diversity of plan and action, as distinguished from the Germans who were a unit. No matter how greatly the need of harmony among the Allies was recognized, it was impossible to secure it in fact. The English were moved by the supreme desire to bar the way to the Channel ports. The French desired to protect Paris at any cost. Each nation had a certain reluctance to send re-enforcements to the other, for fear that their own special interests might be weakened by the action. In case of a difference in views on strategy or tactics, there was no supreme power that could decide the question.
The need of unity became especially apparent a week after the beginning of the great German drive of March 21, 1918. During that week, the Germans had met with enormous successes, gaining a vast area of territory and many thousands of prisoners. It was the blackest week in the entire war for the Allied cause. There was no further hesitation. On March 28 General Pershing called upon General Foch who on that same date had been made Generalissimo, and placed at his disposal all the American troops and resources.
"I came to say to you," General Pershing said, "that the American people would hold it a great honor for our troops, were they engaged in the present battle. There is at this moment no other question than that of fighting. Infantry, artillery, aviation—all that we have are yours, to dispose of as you will."
The offer was accepted gratefully by the War Council and the following statement was issued:
"The American troops will fight side by side with the British and French troops, and the Star-Spangled Banner will float beside the French and English flags in the plains of Picardy."
By the time arrangements had been completed to utilize our troops, there were nearly 800,000 American soldiers in France, and they were coming across the seas in an apparently unending stream at the rate of 300,000 a month. America's weight was about to be thrown in the scales with decisive effect.
A deft and finished piece of work was the capture of the strongly held and fortified town of Cantigny N. W. of Montdidier. On May 28 the Americans, in conjunction with French artillery and tanks, attacked on a front of one and a quarter miles. They took the town in the first forward sweep and captured 200 prisoners besides inflicting severe losses on the enemy in killed and wounded. Repeated counter-attacks were made by the Germans in heavy force, but all were repelled.
Three days later the Americans distinguished themselves at Château-Thierry, a town that later was to become forever memorable because of the luster there shed on American arms. Units of the American Marine Corps, armed with machine guns, beat back an attack by heavy German forces on the town. They repulsed the Germans and took many prisoners, losing none of their own men as prisoners. Two more determined German attacks were beaten back a short time later on the Marne. On June 6 the Americans penetrated to a depth of two miles and took possession of the high ground N. W. of Château-Thierry. In a five-hour fight, they captured Bouresches and Torcy. Far bitterer was the fight that followed for the possession of Belleau Wood, where marines and regular soldiers won imperishable fame. The wood was densely forested, was defended by artillery and machine-gun nests and was held by the crack divisions of the German army. But the Americans pressed doggedly forward, gaining ground foot by foot, and at last in a headlong charge swept the remnants of the enemy forces from the wood, capturing hundreds of prisoners, while the ground was carpeted with German dead.
On July 1, at Vaux, the Americans, acting alone, captured the town in forty minutes, taking 500 prisoners. On July 4, in a great attack at Hamel, Independence Day was celebrated by the Americans in conjunction with the Australians by a victory that netted 1,500 prisoners. The resistance was determined, but the Americans advanced to the charge uttering the cry "Lusitania," and the fate of the day was decided.
The greatest action of the war so far for the Americans was that of July 15, when they stopped the thrust of the German Crown Prince toward Paris. The American forces were holding Jaulgonne and Dormans on the Marne. The Germans threw 25,000 of their best troops across the river. Under the shock of the great masses hurled against them the American line at first bent, but quickly rallied and threw the enemy back across the river. The Germans lost 10,000 men in killed and captured. Had the Germans broken through on that epic occasion, they would have had excellent chances of reaching the French capital.
On the following day, the Germans again attacked the American forces, only to be driven back with heavy loss. The Germans were wavering and confused. They had met with a sharp defeat, where they had confidently counted on victory. And at that critical juncture, Foch struck at them on a 28-mile front in the most magnificent counter-attack of the war. Americans in this attack were brigaded with the French troops under General Mangin and played a prominent part in the great advance to the Vesle and the Ourcq that followed the initial victory. South of Soissons, they pushed the Allied line farthest ahead. They took Fère-en-Tardenois in conjunction with the French. At Sergy, they drove the Germans beyond the Ourcq. On Aug. 1, after fighting of the severest kind, they stormed and captured Meunières Wood. At the Vesle American engineers, under fierce artillery fire, threw bridges over the stream, over which their comrades swarmed with a determination that would not be denied. In those weeks of continuous and bloody fighting the Americans were always at the front, and were everywhere victorious.
On Aug. 7, General Mangin issued the following order of the day:
Officers, Non-Commissioned Officers, and Soldiers of the American army:
Shoulder to shoulder with your French comrades, you threw yourselves into the counter-offensive begun July 18. You ran to it as if going to a feast. Your magnificent dash upset and surprised the enemy, and your indomitable tenacity stopped counter-attacks by his fresh divisions. You have shown yourselves to be worthy sons of your great country, and have gained the admiration of your brothers in arms.
Ninety-one cannon, 7,200 prisoners, immense booty and ten kilometers of reconquered territory are your share of the trophies of this victory. Besides this, you have acquired a feeling of your superiority over the barbarian enemy against whom the children of liberty are fighting. To attack him is to vanquish him.
American comrades, I am grateful to you for the blood you generously spilled on the soil of my country. I am proud of having commanded you during such splendid days, and to have fought with you for the deliverance of the world.

In the operations following this great victory the American forces took a brilliant part in smashing the Hindenburg line. Their steady drive against the Crown Prince's army compelled its retreat on a twenty-mile line on Sept. 4. At the battle of Juvigny, Aug. 29, the Americans captured the Juvigny plateau, one division conquering four of the best of the German divisions.
In the meantime, a great American attack was being prepared in the Lorraine sector entirely under the direction of General Pershing and his assistants. The plan and strategy were American throughout, as were the bulk of the forces employed, although some French troops co-operated under Pershing's command.
The St. Mihiel salient was a wedge that had been driven by the Germans into French territory in the vicinity of the village of that name, and had been held in force by them since the first invasion in 1914. It effectually prevented an Allied offensive in the direction of Metz. During four years the French had not been able to reduce it. The Americans undertook the task and for weeks the most careful and intensive preparations were made. More than 100,000 detail maps were issued showing the character of the terrain and the posts held by the enemy. Forty thousand photographs were distributed among the officers and men. Five thousand miles of wire were laid and 6,000 telephone instruments were connected with the wires. Nothing was left to chance, the result being one of the most signal and overwhelming victories of the war.
The position of the American army just preceding the battle is thus officially stated by General Pershing:
From Les Eparges around the nose of the salient at St. Mihiel to the Moselle river the line was, roughly, forty miles long and situated on commanding ground greatly strengthened by artificial defenses. Our 1st Corps (82d, 90th, 5th, and 2d Divisions), under command of Major-General Hunter Liggett, resting its right on Pont-à-Mousson, with its left joining our 4th Corps (the 89th, 42d, and 1st Divisions), under Major-General Joseph T. Dickman, in line to Xivray, was to swing toward Vigneulles on the pivot of the Moselle river for the initial assault. From Xivray to Mouilly the 2d Colonial French Corps was in line in the center, and our 5th Corps, under command of Major-General George H. Cameron, with our 26th Division and a French division at the western base of the salient, was to attack three difficult hills—Les Eparges, Combres, and Amaranthe. Our 1st Corps had in reserve the 78th Division, our 4th Corps the 3d Division, and our First Army the 35th and 91st Divisions, with the 80th and 33d available. It should be understood that our corps organizations are very elastic, and that we have at no time had permanent assignments of divisions to corps.
On Sept. 12, 1918, the assault was begun, and resulted in a sweeping victory. The tanks in advance broke down the enemy entanglements, and a tremendous artillery fire prepared the way for the dash of the infantry. The attack began at dawn, and within 27 hours after the beginning of the offensive, the Americans had recaptured 155 square miles of territory and had taken 433 guns and 16,000 prisoners, together with vast stores of munitions and supplies. The remainder of the enemy, numbering about 100,000, fled in hasty retreat. The victory freed Verdun from the menace of the German threat against its flank, put the dominating heights of the Meuse in American hands and cleared the way for an advance toward the Briey basin and the fortress of Metz.
While our soldiers abroad were thus demonstrating that as fighting men they had no superiors in the world, renewed endeavors had been made in this country to augment the size of American armies. A new draft law was enacted by Congress and signed by the President on Aug. 31, 1918, extending the American draft ages to all males between 18 and 45 inclusive. The number of men estimated to be affected by this law was about 13,000,000. The day of registration was set as Sept. 12, which by a coincidence chanced to be the date of the victory of St. Mihiel. It was believed that about 2,300,000 men could be obtained for military service under this registration. This would make it possible for America's total army in the field to be brought to 5,000,000 men, of whom it was believed that eighty divisions aggregating 4,000,000 could be in France by June 30, 1919. This would leave 18 divisions to be trained and held in readiness at home. It was a colossal program, and would doubtless have been carried out, had not the collapse of the Central Powers made it unnecessary.
The work of the American army at St. Mihiel was equalled by the operations of other units that were brigaded with the Allies. The 27th and 30th Divisions were brigaded with the British troops and fought in company with the Australians in the brilliant series of attacks that smashed the Hindenburg line in the vicinity of the St. Quentin canal, Sept. 29-Oct. 1. They reached all their objectives, despite the most bitter artillery and machine-gun resistance. In less than two weeks they had overrun the enemy's lines to a depth of thirteen miles and had captured 6,000 prisoners. Their casualties were heavy, but did not compare with those inflicted upon the enemy.
Two other divisions, the 2d and 36th, aided the French in driving the Germans from positions they had held for four years in the Rheims sector. In the week Oct. 2-9, they stormed and held the formidable position of Blanc Mont, and later captured the town of St. Etienne in bloody fighting. A little later, the 37th and 91st Divisions, which had been sent in haste to re-enforce the French troops operating in Belgium, took part in a brilliant advance that on Oct. 31 and Nov. 3 crushed the enemy's resistance, and drove his troops across the Escaut river, the American forces reaching the town of Audenarde. In Italy also American troops did gallant work in the last great Italian drive against Austria.
While these decisive operations were proceeding on the western front, United States troops were taking part also in military activities in Russia and Siberia. Russia had by this time not only withdrawn its help from the Allies, but under Bolshevist domination had adopted an attitude of sullen if not active hostility. Vast quantities of American stores that had accumulated at Vladivostok were imperiled, and it was determined that troops should be sent to protect them. This was the ostensible reason for the sending of the expedition, but there was another reason of much greater importance, based on political and military considerations. There was still a possibility that Russia might overthrow the hostile Bolshevist regime and establish a government that would once more range itself on the side of the Entente and rebuild the collapsed Russian front. Certain facts lent plausibility to this belief. A powerful body of Russian opinion was anxious to overturn the Lenine-Trotzky régime and form a constitutional government. In addition the Czecho-Slovaks had won decided military victories over the Bolshevist forces, and had revived hopes that that Government might be overthrown. The Czecho-Slovaks were prisoners who had been taken by the Russians in the early part of the war. They had been forced into the Austrian army, but they preferred captivity among the Russians to fighting under the hated flag of the Hapsburg monarchy. After the Russian debacle, these prisoners possessed themselves of arms, and fought their way across Russia and part of Siberia, with the intention of reaching Vladivostok and thence finding their way to the western front, to fight there in conjunction with the Allies (see Czecho-Slovakia). They offered the nucleus of an army that might by Allied re-enforcements be made formidable, and therefore the Entente, in co-operation with America, decided to intervene. Early in August, 1918, American forces, under General William S. Graves and numbering about 10,000 men, arrived in Vladivostok. Japanese troops were sent about the same time, together with small British and French contingents. Even earlier than this, on July 15, a comparatively small detachment of Americans had landed with Allied troops at Murmansk, in the north of Russia. Desultory fighting, in no case rising above the dignity of outpost actions, followed the arrival of the troops. The story of the intervention is told elsewhere in detail (See Russia in the World War). It is sufficient here to say that the expeditions had no practical results. The forces at Archangel were withdrawn in 1919 and those at Vladivostok in 1920. It was simply a military adventure that had no practical bearing on the fortunes of war. The logic of events had decreed that the issue should be settled on the western front, and the Russian situation had ceased to be a factor in the struggle. The total casualties of the Americans in Siberia were 105; on the Archangel front 553.
The greatest battles in which the Americans were engaged were those of the Meuse-Argonne, in the fall of 1918. This epic struggle with its victorious outcome will ever be a glorious page in American history. The Argonne forest was the most formidable position that any troops had been called on to take in the entire course of the war. So formidable was it that Napoleon himself had refused to attack the enemy there, deeming the forest impregnable. It was densely wooded, and in places almost impenetrable. To these natural obstacles the Germans had brought all the aids known to military science. Thousands of miles of wire were stretched from bush to bush and tree to tree. Every foot of ground had been ranged for their heavy artillery. Machine-gun nests by the thousands were hidden in shell holes and behind rocks and tree trunks. Even many of the Allied commanders doubted if success were possible. The Americans, however, undertook the task, and carried it through to a successful conclusion. On Sept. 26, after intensive artillery fire that cut lanes through some of the wire entanglements, the American troops launched a vigorous attack that on the first day mastered the first line defenses. In the next two days, fighting against terrific resistance, they penetrated to a depth varying from three to seven miles, captured a dozen towns and took 10,000 prisoners. In successive days they improved their position and continued their advance in the face of almost insuperable obstacles. In the words of General Pershing in reporting the battle, the American troops "should have been unable to accomplish any progress, according to previously accepted standards, but I had every confidence in our aggressive tactics and the courage of our troops." By the middle of October, the important town of Grandpré had been taken and the forest practically cleared.
After this, the fighting was easier, though much stiff battling remained to be done before the Americans reached their goal. The enemy's morale had weakened before the irresistible onslaught and the successive defeats inflicted on them. Huge naval guns had been brought up by the Americans—guns capable of carrying a half-ton projectile almost twenty miles—and with these a bombardment was begun that cut the Mézières-Sedan railway line, the chief German artery of supplies for their army. By the 6th of November, the Americans had reached a point on the Meuse opposite Sedan. From that moment the German cause was doomed. The enemy's line of communications had been cut, and only an armistice or abject surrender remained. In this gigantic offensive, the Americans had captured 468 guns and 26,059 prisoners.
Taking no more time than to give his soldiers a breathing spell, the American commander was preparing an advance toward Longwy and the Briey iron fields, and had already commenced the attack on the morning of Nov. 11 when the order came to suspend hostilities at 11 A. M. The armistice had been signed, and the greatest war in history came to an end.
The series of disasters to German arms and the impending collapse of their allies were reflected in the changed tone of German statesmen at the beginning of autumn. Hindenburg, the military idol of the German people, issued a manifesto on Sept. 6, in which he acknowledged the severity of the struggle and exhorted the army to be on its guard against enemy propaganda. The Kaiser, speaking to the municipality of Munich, a day earlier, had admitted the difficulty of the present struggle against an enemy "filled with jealousy, destruction and the will to conquer." A week later, his agitation and apprehension were clearly marked in a halting address that he made to the workmen at Essen. The German Crown Prince supplemented his father's efforts by declaring that Germany had never wanted war and was fighting simply for her existence, ringed in as she was by a circle of foes. Von Hertling, Burian, and Czernin, in the same month, made addresses that were palpable bids for peace. There was no longer any arrogant talk about annexations and indemnities. Panic fear was beginning to spread among the statesmen of the Central Powers as they read the "handwriting on the wall."
The first open peace proposal was made in a communication by the Austro-Hungarian Government to the governments of all neutral and belligerent powers, dated Sept. 15. While the note ostensibly came from Austria alone, it developed later that it had received the approval of Germany. The note was carefully worded, was devoid of bitterness or arrogance, and asked that a confidential and "non-binding" discussion be entered into, that might clear away misunderstandings and pave the way to peace.
But the note came too late. It was regarded on all sides as an attempt to escape an impending military defeat by causing a slackening of effort on the part of the Allies while the retreating armies of the Central Powers should have a chance to regain their poise and vanishing morale. The offer was met with rejection by all the Entente nations. The refusal of our own Government was despatched on the same day that the note was received. The President stated that the note required no extended answer for "the Government has repeatedly and with entire candor stated the terms on which the United States would consider peace, and can and will entertain no proposal for a conference upon a matter concerning which it has made its position and purpose so plain."
Answers of a similar tenor, though in some cases more extended, were made by the members of the Entente, and the overture came to naught. Its receipt, however, was probably the moving cause of a notable address made by the President in the Metropolitan Opera House, New York, on Sept. 27, 1918. In this the President set forth what he called a "practical program," the salient points of which were as follows:
First.—The impartial justice meted out must involve no discrimination between those to whom we wish to be just and those to whom we do not wish to be just. It must be a justice that plays no favorites and knows no standard but the equal rights of the several peoples concerned;
Second.—No separate or special interest of any single nation or any group of nations can be made the basis of any part of the settlement which is not consistent with the common interest of all;
Third.—There can be no league or alliances or special covenants and understandings within the general and common family of the League of Nations;
Fourth.—And more specifically, there can be no special selfish economic combinations within the league and no employment of any form of economic boycott or exclusion, except as the power of economic penalty by exclusion from the markets of the world may be vested in the League of Nations itself as a means of discipline and control;
Fifth.—All international agreements and treaties of every kind must be made known in their entirety to the rest of the world.
This program was promptly seized upon by Austria as the basis of a new appeal which was made on Oct. 7, not to all belligerents this time, but to the President alone. It offered to conclude an immediate armistice on the basis of the fourteen points enunciated in the President's address to Congress on Jan. 8, the four points emphasized in his Feb. 11 speech and the program stated in the address in New York, Sept. 27. The substance of these three notable utterances has been given in the preceding pages.
This proposition was again refused. Events in the interim between the setting forth of these several points of view had changed the situation so that one at least of the fourteen points was no longer applicable. This was the tenth point, which had demanded the fullest possible autonomy for the peoples of Austria-Hungary. But by this time the independence, not autonomy alone, of Czecho-Slovakia had been recognized. Jugoslavia's claim to a separate national existence had also been approved by this Government.
One more attempt was made by Austria, now frantic and distracted, to secure terms. She willingly admitted the right of Czecho-Slovakia and Jugoslavia to independence and urged that immediate negotiations be initiated. She asked further that this might be done, irrespective of any correspondence that might be proceeding with any other power, the reference of course being to Germany.
By this time the fate of Austria had been sealed by the arbitrament of arms on the Italian front. There was no need of further correspondence with the doomed nation. Her note was transmitted to the Inter-Allied Conference at Versailles, and Austria was instructed to deal directly with the commander of the Italian forces.
Much more important than the Austrian peace overtures were those begun by Germany. That empire had at last abandoned all hope of military success. Her Macedonian front had crumbled, by the Kaiser's own admission; Turkey was threatened with absolute overthrow by the whirlwind campaign of Allenby; Austria-Hungary alone of all her allies was left, and could not maintain her own line, let alone render help to the hard-pressed German forces. The end was at hand, and it only remained for Germany to save as much as she could from the wreck of her military fortunes.
On Sept. 12, Vice-Chancellor von Payer had expressed the willingness of his Government to give back Belgium. Two days following the delivery of President Wilson's address of Sept. 27, the German Government began to set its official house in order, so that it might more fully conform to the President's views on popular government. The more conservative and war-insistent members of the Government were dismissed, and men of a more liberal character took their places. Changes were also made in the direction of ballot reform, looking for a more general participation by the people in the Government. The Constitution itself was changed, and the Cabinet Ministers were given the right to demand to be heard by the Reichstag. But the greatest change was made in the Chancellorship. Von Hertling, who was supposed to be persona non grata to the Allies because of his previous committals on questions connected with the war, was replaced on Oct. 2 by Prince Maximilian of Baden, who had no antagonisms to overcome and who was reputed to be of Liberal tendencies.
The first act of the new Chancellor after assuming office was to send to President Wilson through the Swiss Government as intermediary the following note:
The German Government requests the President of the United States to take in hand the restoration of peace, acquaint all the belligerent states with this request and invite them to send plenipotentiaries for the purpose of opening negotiations.
It accepts the program set forth by the President of the United States in his message to Congress on January 8, and in his later pronouncements, especially his speech of Sept. 27, as a basis for peace negotiations.
With a view to avoiding further bloodshed, the German Government requests the immediate conclusion of an armistice on land and water and in the air.
On the same day, the Chancellor outlined in a speech to the Reichstag the changes that had been made in the German administration and constitution. This was done evidently to convince the American Government that in any dealings it might henceforth have with Germany it would be dealing with a government of the German people, instead of a militaristic clique. The speech hinted also that Germany might be willing to pay an indemnity, and promised the complete restoration and rehabilitation of Belgium.
The reply of the President was despatched Oct. 8. It neither accepted nor rejected the German offer, but rather deferred a positive statement pending the receipt of further information. The President declared that he would not feel at liberty to propose a cessation of arms to his associate powers until their soil had been evacuated by the German armies. He asked also whether the German note meant that the German Government actually accepted the terms that the President had set forth in his Jan. 8 address, and whether its object in entering into discussions would be only to agree upon the practical application of those terms. Furthermore, the President wanted to know whether the Chancellor was speaking merely as the mouthpiece of the constituted authorities of the empire who had hitherto conducted the war. The answer to these questions the President declared was vital.
The answer of the German Government was quick in coming. It was dated Oct. 12, and bore the signature of Dr. Solf, the former Colonial Secretary, but for the preceding six days the Imperial Foreign Secretary. The note accepted unequivocally the President's address of Jan. 8 and his subsequent utterances as the bases of peace. It pointed out the changes that had been made in the German Government to bring it closer to the masses, and declared that the Chancellor spoke in the name of the German Government and the German people. As to evacuation, readiness was expressed to agree to this, and it was proposed that a mixed commission be appointed to consider the details.
In the interval between the receipt of the first and second note, Germany, with an almost unbelievable blindness, in view of the fact that she was suing for peace and that her interests lay in conciliating rather than exasperating her enemies, had committed fresh atrocities during her retreat through Belgium and had horrified the Allied and neutral nations by a submarine sinking resembling somewhat the tragedy of the "Lusitania." The British mail steamer "Leinster" had been torpedoed during a storm in the Irish Sea on Oct. 10 and had gone down in fifteen minutes with a loss of 480 lives, of which 135 were those of women and children.
These devastations and massacres strongly influenced the wording and tenor of the President's second reply. After stating that the matter of arranging the process of evacuation and conditions of an armistice must be left to the judgment of the military advisers of the United States and the Allied Governments, and emphasizing that there must be absolutely satisfactory safeguards and guarantees of the maintenance of the "present military supremacy of the armies of the United States and the Allies in the field," he gave a solemn warning that no proposition for an armistice would be considered as long as Germany persisted in her illegal and inhuman practices.
"At the very time that the German Government approaches the Government of the United States with proposals of peace, its submarines engaged in sinking passenger ships at sea, and not the ships alone, but the very boats in which the passengers and crews seek to make their way to safety, and in their present enforced withdrawal from Flanders and France, the German armies are pursuing a course of wanton destruction, which has always been regarded as in direct violation of the rules and practices of civilized warfare. Cities and villages, if not destroyed, are being stripped of what they contained not only, but often of their very inhabitants. The nations associated against Germany cannot be expected to agree to a cessation of arms, while acts of inhumanity, spoliation and desolation are being continued, which they justly look upon with horror and with burning hearts."
The President also directed the attention of the German Government to a sentence that occurred in his address at Mt. Vernon on the preceding July 4, in which as a term of peace was declared necessary, "the destruction of every arbitrary power everywhere that can separately, secretly and of its single choice disturb the peace of the world; or, if it cannot be presently destroyed, at least its reduction to virtual impotency." This, the President declared, was the kind of power that had hitherto controlled the German people. It was within the power of the German people to alter it, and the whole possibility of securing peace rested upon this being done.
The answer was thoroughly gratifying to this nation. There had been some dissatisfaction with the first reply, but this second one received the heartiest indorsement from all quarters. The Allied nations also gave it their warmest approval.
Germany's third note was dated Oct. 20, and was received in this country on the 22d. It denied the charges of atrocities, or declared that if severities had occurred, they were due to military necessity. It reiterated that the form of the German Government had radically changed, and that the proposals put forth were supported by the approval of an overwhelming majority of the German people. As regards the armistice, it suggested that the "actual standard of power" on both sides in the field was to form the basis for arrangements safeguarding and guaranteeing that standard.
The President's reply to this third note was sent the day after it was officially received. He declared his willingness to submit the proposal for an armistice to the associated powers, but warned Germany that the only conditions he would feel justified in recommending would be such as would make the renewal of hostilities on the part of Germany impossible. If, moreover, the changes in the German Government were only nominal, not real, and if the United States must deal with the military masters of Germany now, or if it is likely to have to deal with them later, "it must demand not peace negotiations but surrender."
The reply received prolonged consideration by the German authorities. By that time the military situation was so hopeless that nothing remained but submission. Ludendorff resigned his command on the 26th. On the 27th a message to President Wilson virtually accepted the terms by declaring that Germany awaited the receipt of the armistice proposition from the Allied military staffs.
The ultimate compliance of Germany had been counted upon as absolutely foreshadowed by the progress of events, and terms had been drawn up while the correspondence was being interchanged. On Nov. 5, Secretary Lansing announced to the German Government that peace would be made on the terms prescribed by President Wilson in his public utterances. An important reservation was made, however, namely that liberty of action was reserved on the clause relating to the freedom of the seas, since it was liable to differing interpretations. It was also demanded that compensation be made by Germany "for all damage to the civilian population of the Allies by land, by sea, and from the air." The note closed with the statement that Marshal Foch had been authorized to receive the German delegates and acquaint them with the terms of armistice the Allied and Associated Powers were prepared to grant.
The Germans promptly requested the Marshal to appoint a time and place of meeting. The time was set as Nov. 7, and the place was the railroad car of Marshal Foch in the forest of Compiègne. The delegates proceeded there under a white flag. They were met at the French lines by guards, who conducted them to the place of meeting. There the armistice terms were read by the Generalissimo of the Allied armies. Their alleged severity aroused protests from the Germans, who were informed, however, that the Marshal's power to change them extended only to minor details. On Nov. 11, the German delegates affixed their signatures to the armistice terms, and the war was over. It is true that the treaty of peace remained to be signed, and this was not done until June 28, 1919, five years to a day from the date in 1914 when the assassination of the Austrian Archduke Ferdinand, at Sarajevo, had furnished the pretext for the World War. But the actual cessation of hostilities dates from Nov. 11, 1918. The terms of the armistice were such as to make it impossible for Germany to resume the war, even if she were so inclined. Those terms were drastic, but in the general judgment of the Allied world did not go beyond what justice and security from future aggression required.
In America, as in other Allied nations, the news that the armistice had been signed was received with joyful popular demonstrations. The relief from the strain of war was unspeakable, and with this was mingled pride at the part that America had played in bringing the war to a successful conclusion.
The terms of the armistice provided that three bridgeheads on the Rhine should be occupied by Allied forces. The Coblenz bridgehead was the one assigned to the American Army of Occupation. The march was begun almost at once, and on Dec. 12 the army reached Coblenz, their forces crossing the Rhine the following morning to occupy the bridgehead. Military administration was inaugurated at once, though the municipal authorities were allowed to function, under American supervision and control. The occupation continued until the signing of the Peace treaty, after which the American troops were gradually withdrawn and sent home, the places of some of them being taken by new units sent from America. In May, 1920, there remained about 13,000 American troops at the bridgehead under the command of Major-General Henry T. Allen. In the main, the occupation, beyond a little occasional friction, was marked by few untoward incidents, and order was well maintained.
The cessation of hostilities made it possible for the American people to become acquainted with the real extent of American participation in the conflict by various arms of the service. Previous to that time, many of the operations had been recorded in fragmentary form, or had been hidden under the veil of secrecy required by the censorship. The work of the land forces had been fairly well followed, but that of the navy and the air services had not been gauged at their full value. An official report of Secretary of the Navy Daniels, issued Dec. 3, 1918, gave interesting details of the navy's achievements.
On the day that war was declared the navy numbered 65,777 men. At the signing of the armistice, it had increased to 497,030. The ships in commission had increased in the same period from 197 to 2,003. Less than a month after war was declared, a division of United States destroyers was in European waters. By October, 1918, there were 338 ships of all classes serving abroad. Up to Nov. 1, 1918, of the total number of American troops in Europe, 924,578 had been carried over in United States naval convoys, escorted by American cruisers and destroyers. Not one eastbound American transport was torpedoed or damaged by enemy submarines, and only three were sunk on the return voyage. In 10 months, the transportation service grew from 10 ships to a fleet of 321 cargo-carrying vessels, with a dead-weight tonnage of 2,800,000.
A mine barrage had been laid in the North Sea for which 85,000 mines had been shipped abroad. The work of the destroyers in curbing the submarine menace was declared by the Secretary to have been without a precedent in Allied warfare and had received the most enthusiastic commendation of the Allied naval authorities. The work of the marines in the fighting at Château-Thierry, Belleau Wood, and many other battle fields has already been described in the foregoing pages.
Statistics of the main accomplishments of America in the two years between April 6, 1917, when war was declared, and April 6, 1919, are here subjoined:
April 6, 1917:
Regular Army 127,588
National Guard in Federal Service 80,466
Reserve Corps in service 4,000
Total of soldiers 212,034
Personnel of Navy 65,777
Marine Corps 15,627
Total armed forces 293,438
Nov. 11, 1918:
Army 3,764,000
Navy 497,030
Total armed forces 4,339,047
Soldiers transported overseas 2,053,347
American troops in action, Nov. 11, 1918 1,338,169
Soldiers in camps in the United States, Nov. 11, 1918 1,700,000
Casualties, Army and Marine Corps, A. E. F. 282,311
Death rate per thousand, A. E. F. .057
German prisoners taken 44,000
Americans decorated by British, French, Belgian, and Italian Armies, about 10,000
Number of men registered and classified under selective service law 23,700,000
Gas masks, extra canisters and horse masks 8,500,000
NAVY AND MERCHANT SHIPPING
Warships at beginning of war 197
Warships at end of war 2,003
Small boats built 800
Submarine chasers built 355
Merchant ships armed 2,500
Naval bases in European waters and the Azores 54
Shipbuilding yards (merchant marine) increased from 61 to more than 200.
Cost of 32 National Army cantonments and National Guard camps $179,629,497
Students enrolled in 500 S. A. T. C. camps 170,000
Officers commissioned from training camps (exclusive of universities, etc.) 80,000
Women engaged in Government war industries 2,000,000
BEHIND THE BATTLE LINES
Railway locomotives sent to France 967
Freight cars sent to France 13,174
Locomotives of foreign origin operated by A. E. F. 350
Cars of foreign origin operated by A. E. F. 973
Miles of standard gauge track laid in France 843
Warehouses, approximate area in square feet 23,000,000
Motor vehicles shipped to France 110,000
Persons employed in about 8,000 ordnance plants in the United States at signing of armistice 4,000,000
Shoulder rifles made during war 2,500,000
Rounds of small arms ammunition 2,879,148,000
Machine guns and automatic rifles 181,662
High explosive shells 4,250,000
Gas shells 500,000
Shrapnel 7,250,000
Shipbuilding ways increased from 235 to more than 1,000.
Ships delivered to Shipping Board by end of 1918 592
Deadweight tonnage of ships delivered 3,423,495
FINANCES OF THE WAR
Total cost, approximately $24,620,000,000
Credits to 11 nations 8,841,657,000
Raised by taxation in 1918 3,694,000,000
Raised by Liberty Loans 14,000,000,000
War Savings Stamps to November, 1918 834,253,000
War relief gifts, estimated 4,000,000,000
Political Happenings During and After the World War.
Congress passed, in 1915, another immigration bill with a literacy test. This was vetoed by the President. The Supreme Court, in June, 1915, decided against the Government in the dissolution suit against the steel trust and declared that the "grandfather clauses" of the Oklahoma and Maryland constitutions were void. The United States in September, 1915, undertook the supervision of the revenues of Haiti. A conference of South American diplomats was called in October of that year to consider the Mexican question. It was decided that Carranza should be recognized, and, on October 19, President Wilson recognized Carranza as heading the de facto Government of Mexico. In February, 1916, as a result of the failure to agree with the President's policy on national defense, Lindley M. Garrison, Secretary of War, resigned, and was succeeded by Newton D. Baker of Ohio. In January, 1916, Francisco Villa, the most powerful of the Mexican revolutionary leaders, killed several American miners, and on March 9, with 500 followers, invaded the town of Columbus, New Mexico, killing seven troopers and several citizens and destroying much property. He was pursued across the border by United States troops, and on the following day the President authorized a punitive expedition to pursue and capture him. An agreement was made with President Carranza permitting this force to cross the border. General Pershing pursued Villa in the mountain regions of Chihuahua. Several engagements were fought with the Mexicans. At Parral, American soldiers were attacked by natives, and Carranza demanded that the expedition be recalled. This was refused by President Wilson on the ground that the Mexican Government was not able to keep peace along the border. In spite of the protest of the Mexican authorities, the United States forces remained in Mexico, although Villa was not captured. The State Militia was called out to protect the border. On July 15, after an exchange of notes, the matter was settled temporarily by a joint commission.
There were indications early in 1916 of a reunion between the Republican and Progressive parties, and this was verified by the announcement that their conventions would meet at the same time in the same city. A committee of prominent men of both parties was appointed to reach a common ground of agreement on both candidates and platform. At the convention, Charles E. Hughes, of New York, was nominated without serious opposition. Charles W. Fairbanks, of Indiana, was nominated for vice-president. At the Democratic convention held in St. Louis, in the early part of July, President Wilson was renominated by acclamation and Thomas R. Marshall was again nominated for vice-president. An element of the Progressive party nominated Mr. Roosevelt for president, but as he declined the nomination, the National Committee of the party indorsed Justice Hughes. The campaign was bitterly fought. Colonel Roosevelt took an active part in the support of Justice Hughes, and attacked the President's policies in regard to the war and Mexico in unsparing terms. The election in November proved one of the closest in the history of the United States. The first returns made it evident that Hughes had carried all the industrial and commercial States of the North and East, with the exception of Ohio. On the day following the election, it was announced that Mr. Hughes had been elected. Later in the day, however, gains from the West indicated the possibility of the re-election of the President. Many States which had been regarded as safely Republican went Democratic. The turning point, however, was California. After a few days of suspense during the counting of the votes, the electoral vote of the State was announced for Wilson, and it was sufficient to re-elect him. The electoral vote stood 277 for Wilson and 254 for Hughes. The President's popular vote showed a gain of 2,000,000 over that of 1912, in spite of the fact that the Republicans gained in the House of Representatives and elected many of their State candidates. The legislation of 1917 was devoted chiefly to the successful conduct of the war. The measures passed included those providing for the Emergency Fleet Corporation, food control, Federal regulation of coal, the Trading with the Enemy Act, and like measures. On Oct. 24, 1918, President Wilson issued an appeal to the people of the United States to return a Democratic Congress in the coming fall election, declaring that the election of a Republican Congress would be taken abroad by Germany and the Allies alike as a repudiation of his leadership and policies. The Republicans resented this appeal, and the result of the election was a defeat for the administration. The Republicans secured a substantial majority in the House of Representatives and a narrow majority in the Senate.
On the conclusion of the war, the nation quickly returned to a peace basis, and preparations were at once begun for the return of American soldiers. The first shipload of these arrived on Dec. 2, 1918, and they were followed by an ever-increasing procession of vessels from Europe to the United States, bearing home the members of the American Expeditionary Forces. It was announced on Nov. 18, 1918, that the President would personally attend the Peace Conference in Paris. The President, in his message, explained that as the Allies and Germany had made his speeches the basis of their negotiations, it was due to the American people, no less than to the Entente nations, that he should be in close touch with the deliberations. Accordingly, he embarked on Dec. 4, 1918, for France, accompanied by the American delegates, with a large group of experts. The peace delegates named by the President were Robert Lansing, Secretary of State; Henry White, formerly Ambassador to France; Edward M. House; and Tasker Bliss. The President was received with the greatest warmth in Paris, as well as in Great Britain and Italy, which he visited previous to the formal meetings of the Conference.
The chief concern of President Wilson at the Peace Conference was avowedly the preparation of the covenant of the League of Nations, and to this object he devoted his chief efforts. The draft of the covenant was cabled to the United States, prior to the return of the President for a brief visit. He arrived in Boston on February 24. Opposition to the covenant had already developed, chiefly from a group of Republican Senators, who asserted that it violated the sovereignty of the United States and repudiated the Monroe Doctrine. The chief article singled out for attack was Article 10, which pledged the United States "to preserve, as against external aggressions, the territorial integrity of all the States in the League." President Wilson returned to Paris early in March, after having delivered several speeches in support of the Covenant. On the signing of the Treaty on June 28, 1919, the President at once embarked for the United States, arriving on July 9.
On July 1, 1919, the prohibition of the sale of intoxicating liquors went into effect in the United States, as a result of the ratification of the 18th amendment to the Constitution by three-fourths of the States. By its terms, however, the amendment was not effective until January, 1920.
The Peace Treaty was considered by the Committee on Foreign Relations in the Senate, and the committee's report of September 28 contained 38 amendments and four reservations. The amendments were rejected and the chief interest centered upon the reservations. These, in the main, covered the same ground as the amendments. The first provided that the United States should have the right to withdraw from the League of Nations "upon a notice provided in Article I of said Treaty of Peace with Germany." The second reservation absolved the United States from any obligation in Article 10 "to preserve the territorial integrity or political independence of any other country." The third reservation provided that the United States should have a right to decide what questions were within its domestic jurisdiction. In the fourth reservation, the United States declined to submit for arbitration or inquiry any questions depending on, or related to, the Monroe Doctrine. These reservations were debated in the Senate. The Treaty was defeated with the reservations, by a vote of 55 to 39, and without the reservations, by a vote of 55 to 38. No further consideration of the Treaty was given during the remainder of this session of Congress. On April 30, Senator Knox introduced a resolution providing for a declaration of peace with Germany. This resolution was adopted by both the House and the Senate on May 27, 1920, but was vetoed by the President.
Congress passed a comprehensive bill for the conduct and regulation of railroads. (See Railways.) Various measures were taken during December, 1919, and the following months for the suppression of anarchistic and communistic propaganda in the United States. Raids were made upon the headquarters of radical societies throughout the country, and many of the leaders were taken for the purpose of deportation. In December, about 300 anarchists, the most conspicuous of whom were Emma Goldman and Alexander Berkman, were deported to Russia.
Affairs in Mexico gave rise to an exchange of notes between that country and the United States during the latter part of 1919. During the same period there were serious labor troubles. A general strike in the bituminous coal fields was prevented only by the prompt action of the Government, which declared the strike illegal and issued an injunction preventing the leaders from ordering it. A commission was appointed by the President in December to reconcile the differences between the employees and the employers. The railroad workers also assumed a threatening attitude, but these difficulties were temporarily reconciled pending the passage of railroad legislation in Congress. Several radical measures, the most important of which was the so-called Plumb Bill, were advanced by railroad employees and their representatives. This measure would practically have placed the railroads of the country in the hands of the employees.
On Feb. 13, 1920, Robert Lansing resigned as Secretary of State, as a result of severe criticism on the part of President Wilson of his conduct in summoning the cabinet during the President's illness. He was succeeded by Bainbridge Colby.
The presidential campaign of 1920 had for its chief issue the League of Nations. President Wilson, during the consideration of the measure in the Senate, had made an extensive tour of the country, which was ended only by a physical breakdown that for many months prevented his active participation in the affairs of the Government, and left him practically an invalid. The leading Republican candidates, prior to the convention, were General Leonard Wood; Governor Lowden, of Illinois; Senator Hiram Johnson, of California; and Herbert Hoover. The most conspicuous Democratic candidates were William G. McAdoo and Governor James M. Cox of Ohio. Preferential primary elections were held in the various States during April. The Republican national convention opened at Chicago on June 8.
[Plate: Presidential inaugural, March 4, 1921, on the east front of the Capitol: Warren Gamaliel Harding taking the oath of office in the inaugural ceremonies]
[Plate: President Harding and his Cabinet at their first Cabinet meeting, March, 1921: Harding, Mellon, Daugherty, Denby, Wallace, Davis, Coolidge, Hoover, Fall, Hays, Weeks, Hughes]
[Plate: Counting the electoral vote for President and Vice President in the House of Representatives]
[Plate: The Supreme Court chamber in the Capitol]
[Plate: The Senate chamber in the Capitol, Washington, D. C.]
[Plate: The State, Army, and Navy Building, Washington, D. C.]
[Plate: The White House, Washington, D. C.]
Footnotes to the preceding tables:
1. Includes $30,000,000 received from United States Sugar Equalization Board (Inc.), as dividend on capital stock owned by the United States, and $60,724,742.27 received from Federal Reserve banks as franchise tax.
2. On the basis of estimates by the Government Actuary, the amount of fractional currency outstanding on Dec. 31, 1920, is carried at $2,000,000, a reduction of $4,842,066.45 on account of fractional currency estimated to have been irrevocably lost or destroyed in circulation.
3. A minus sign (−) denotes a decrease.
4. Appointed by the President and confirmed by the Senate, but did not act.
\begin{document}
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\halfspan}{|pa|+\frac{\theta}{\sin\theta}|aq|} \newcommand{\func}{\frac{\theta}{\sin\theta}} \newcommand{\fullspan}[1]{\left(1+\frac{\theta}{\sin\theta}
\right)#1} \newcommand{\idealspan}[1]{\left(1+\frac{2\pi}{3\sqrt{3}}
\right)#1} \newcommand{\ospan}{(1+\frac{2\pi}{3\sqrt{3}})} \newcommand{\tspan}{(1+\frac{\theta}{\sin\theta})}
\newcommand{\N}[3]{N_{#1}^{#2,#3}}
\title{Improved Spanning Ratio for Low Degree Plane Spanners\thanks{This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by the Ontario Graduate Scholarship (OGS).}} \begin{abstract}
We describe an algorithm that builds a plane spanner with a maximum degree of 8 and a spanning ratio of $\approx 4.414$ with respect to the complete graph. This is the best currently known spanning ratio for a plane spanner with a maximum degree of less than 14. \end{abstract}
\section{Introduction}
Let $P$ be a set of $n$ points in the plane. Let $G$ be a weighted geometric graph on vertex set $P$, where edges are straight line segments and are weighted according to the Euclidean distance between their endpoints. Let $\delta_{G}(p,q)$ be the sum of the weights of the edges on the shortest path from $p$ to $q$ in $G$. If, for graphs $G$ and $H$ on the point set $P$, where $G$ is a subgraph of $H$, for every pair of points $p$ and $q$ in $P$, $\delta_G(p,q) \leq t\cdot\delta_H(p,q)$ for some real number $t>1$, then $G$ is a $t$-spanner of $H$, and $t$ is called the \emph{spanning ratio}. $H$ is called the \emph{underlying} graph of $G$. In this paper the underlying graph is the Delaunay triangulation or the complete graph.
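As a concrete illustration (our own sketch, not part of the paper), the spanning ratio of a given geometric graph with respect to the complete Euclidean graph can be estimated on small instances by comparing shortest-path distances with straight-line distances over all pairs of points; all function names below are ours.
\begin{verbatim}
# Sketch: estimate the spanning ratio of a geometric graph on a point set
# with respect to the complete Euclidean graph (illustrative code only).
import heapq, itertools, math

def shortest_path_lengths(points, edges, src):
    # Dijkstra on the graph weighted by Euclidean edge lengths.
    adj = {i: [] for i in range(len(points))}
    for u, v in edges:
        w = math.dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {src: 0.0}
    queue = [(0.0, src)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

def spanning_ratio(points, edges):
    worst = 1.0
    for i, j in itertools.combinations(range(len(points)), 2):
        d = shortest_path_lengths(points, edges, i).get(j, math.inf)
        worst = max(worst, d / math.dist(points[i], points[j]))
    return worst

# Toy example: the boundary of a unit square has spanning ratio sqrt(2),
# realized by the two diagonal pairs.
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(spanning_ratio(pts, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # ~1.4142
\end{verbatim}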
The $L_1$-Delaunay triangulation was first proven to be a $\sqrt{10}$-spanner by Chew\cite{chew}. Dobkin \emph{et al.}\cite{dobkin} proved that the $L_2$-Delaunay triangulation is a $\frac{1+\sqrt{5}}{2}\pi$-spanner. This was improved by Keil and Gutwin\cite{keil} to $\frac{2\pi}{3\cos(\frac{\pi}{6})}$, and finally taken to its currently best known spanning ratio of $1.998$ by Xia\cite{xia}.
The Delaunay triangulation may have an unbounded degree. High degree nodes can be detrimental to real world applications of graphs. Thus there has been research into bounded degree plane spanners. We present a brief overview of some of the results in Table \ref{table-uno}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Paper & Degree & Stretch Factor \\
\hline
\hline
Bose \emph{et al.}\cite{bose27} & 27 & $(\pi+1)C_{DT} \approx 8.27$\\
Li \& Wang\cite{wang23} & 23 & $(1+\pi\sin(\frac{\pi}{4}))C_{DT} \approx 6.44$ \\
Bose \emph{et al.}\cite{bose17} & 17 & $(\frac{1+\sqrt{3}+3\pi}{2}+2\pi\sin(\pi/12))C_{DT} \approx 23.58$ \\
Kanj \emph{et al.}\cite{kanj} & 14 & $(1 + \frac{2\pi}{14\cos(\pi/14)})C_{DT} \approx 2.92$ \\
Bose \emph{et al.}\cite{bose-paz} & 7 & $(\frac{1}{1-2\tan(\pi/8)})C_{DT} \approx 11.65$ \\
Bose \emph{et al.}\cite{bose-paz} & 6 & $(\frac{1}{(1-\tan(\pi/7)(1+1/\cos(\pi/14)))}C_{DT} \approx 81.66$ \\
Bonichon \emph{et al.}\cite{bonichon} & 6 & 6 \\
Bonichon \emph{et al.}\cite{degree4} & 4 & $\sqrt{4+2\sqrt{2}}(19+29\sqrt{2})\approx 156.82$ \\
\hline \hline
This paper & 8 & $(1+\frac{2\pi}{6\cos(\pi/6)})C_{DT} \approx 4.41$\\
\hline
\end{tabular}\\
$C_{DT}$ is the spanning ratio of the Delaunay triangulation, currently $< 1.998$\cite{xia}
\caption{Known results for bounded degree plane spanners.}\label{table-uno}
\end{center}
\end{table} Bounded degree plane spanners are often obtained by taking a subset of edges of an existing plane spanner and ensuring that it has bounded degree, while maintaining spanning properties. We note from Table \ref{table-uno} that all of the results are subgraphs of some variant of the Delaunay triangulation.
Our contribution is an algorithm to construct a plane spanner of maximum degree 8 with a spanning ratio of $\approx 4.41$. This is the lowest known spanning ratio of any plane spanner with maximum degree less than 14.
The rest of the paper is organized as follows. In Section \ref{chap-buildingd8} we describe how to select a subset of the edges of the Delaunay triangulation $DT(P)$ to form the graph $D8(P)$. In Section \ref{chap-degree} we prove that $D8(P)$ has a maximum degree of 8. In Section \ref{chap-spanner} we bound the spanning ratio of $D8(P)$ with respect to $DT(P)$. Since $DT(P)$ is a spanner of the complete Euclidean graph, this makes $D8(P)$ a spanner of the complete Euclidean graph as well.
\section{Building D8(P)}\label{chap-buildingd8}
Given as input a set $P$ of $n$ points in the plane, we present an algorithm for building a bounded degree plane graph with maximum degree 8 and spanning ratio bounded by a constant, which we denote as $D8(P)$. The graph denoted $D8(P)$ is constructed by taking a subset of the edges of the Delaunay triangulation of $P$, denoted $DT(P)$.
We assume general position of $P$; i.e., no three points are on a line, no four points are on a circle, and no two points form a line with slope $0$, $\sqrt{3}$ or $-\sqrt{3}$.
The space around each vertex $p$ is partitioned by \emph{cones} consisting of 6 equally spaced rays from $p$. Thus each cone has an angle of $\pi/3$. See Figure \ref{fig-base-cones}. We number the cones starting with the topmost cone as $C_0$, then number in the clockwise direction. Cone arithmetic is modulo 6. By our general position assumption we note that no point of $P$ lies on the boundary of a cone.
We introduce a distance function known as the \emph{bisector distance}, which is the distance from $p$ to the orthogonal projection of $q$ onto the bisector of $C_i^p$, where $q\in C_i^p$. We denote this length $[pq]$. Any reference made to distance is to the bisector distance, unless otherwise stated.
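Concretely, if $q\in C_i^p$ and $\theta\leq\pi/6$ is the angle between the segment $pq$ and the bisector of $C_i^p$, then $[pq]=|pq|\cos\theta$; note that $[pq]=[qp]$, since the bisector of $C_{i+3}^q$ is opposite in direction to that of $C_i^p$. The small Python sketch below (our own helper names, following the convention above that $C_0$ is centred on the upward vertical and the numbering proceeds clockwise) computes the cone index and the bisector distance:
\begin{verbatim}
# Sketch of the two geometric primitives used by the construction
# (illustrative code; cone C_0 is centred on the upward vertical).
import math

def cone_index(p, q):
    # Angle of pq measured clockwise from the upward vertical, in [0, 2*pi).
    phi = math.atan2(q[0] - p[0], q[1] - p[1]) % (2 * math.pi)
    return int((phi + math.pi / 6) // (math.pi / 3)) % 6

def bisector_dist(p, q):
    # [pq]: |pq| * cos(angle between pq and the bisector of the cone of p
    # that contains q).
    phi = math.atan2(q[0] - p[0], q[1] - p[1]) % (2 * math.pi)
    i = int((phi + math.pi / 6) // (math.pi / 3)) % 6
    delta = abs((phi - i * math.pi / 3 + math.pi) % (2 * math.pi) - math.pi)
    return math.dist(p, q) * math.cos(delta)

# A point straight above p lies in cone 0 and its bisector distance equals
# its Euclidean distance.
print(cone_index((0, 0), (0, 1)), bisector_dist((0, 0), (0, 1)))  # 0 1.0
\end{verbatim}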
\begin{definition}
Let $\{q_0,q_1,...,q_{d-1}\}$ be the sequence of all neighbours of $p$ in $DT(P)$ in consecutive clockwise order. The neighbourhood $N_p$, with \emph{apex} $p$, is the graph with the vertex set $\{p,q_0,q_1,...,q_{d-1}\}$ and the edge set $\{(p,q_j)\}\cup \{(q_j,q_{j+1})\}, 0 \leq j \leq d-1$, with all values mod $d$. The edges $\{(q_j,q_{j+1})\}$ are called \emph{canonical edges}. $N_i^p$ is the subgraph of $N_p$ induced by all the vertices of $N_p$ in $C_i^p$, including $p$. This is called the \emph{\textbf{cone neighbourhood}} of $p$. See Figure \ref{fig-con-neighbour}. \end{definition}
\begin{figure}
\caption{Preliminaries.}
\end{figure}
The algorithm $ConstructD8(P)$ takes as input a point set $P$ and returns the bounded degree graph $D8(P)$, with vertex set $P$ and edge set $E$. The algorithm calls two subroutines. $AddIncident()$ selects a set of edges $E_A$. For each edge $(p,r)$ of $E_A$, we call $AddCanonical(p,r)$ and $AddCanonical(r,p)$, which add edges to the set $E_{CAN}$. Both $E_A$ and $E_{CAN}$ are subsets of the edges in $DT(P)$. The final graph $D8(P)$ consists of the vertex set $P$ and the union of edge sets $E_A$ and $E_{CAN}$.
\begin{wrapfigure}[16]{r}{4cm}
\begin{center}
\includegraphics[width = 4cm]{pics/canonical-edges.pdf}
\caption{The graph $Can_0^p$, based on $(p,r)\in E_A$, in red. Vertex $r$ is the anchor, $d$ and $b$ are end vertices, and $c$ and $r$ are inner vertices.}\label{fig-canonical-edges}
\end{center} \end{wrapfigure}
We present the algorithm here:\\
\begin{tabularx}{\textwidth}{l Y}
\textbf{Algorithm:} & \textbf{ConstructD8(P)}\\
\textbf{INPUT:} & Set $P$ of $n$ points in the plane.\\
\textbf{OUTPUT:} & $D8(P)$: spanning subgraph of $DT(P)$.\\
\end{tabularx} \begin{enumerate}[labelindent=*,
style=multiline,
leftmargin=*,label=Step \arabic*:, ref =Step \arabic*] \item Compute the Delaunay triangulation $DT(P)$ of the point set $P$.
\item Sort all the edges of $DT(P)$ by their bisector length, into a set $L$, in non-decreasing order.
\item Call the function $AddIncident(L)$ with $L$ as the argument. $AddIncident()$ selects and returns the subset $E_A$ of the edges of $L$.
\item For each edge $(p,r)$ in $E_A$ in sorted order call $AddCanonical(p,r)$ and $AddCanonical(r,p)$, which add edges to the set $E_{CAN}$.
\item Return $D8(P) = (P, E_A \cup E_{CAN})$. \end{enumerate}
\begin{tabularx}{\textwidth}{l Y} \textbf{Algorithm:} & \textbf{AddIncident(L)}\\ \textbf{INPUT:} & $L$: set of edges of $DT(P)$ sorted by bisector distance.\\ \textbf{OUTPUT:} & $E_A$: a subset of edges of $DT(P)$.\\ \end{tabularx}
\begin{enumerate}[labelindent=*,
style=multiline,
leftmargin=*,label=Step \arabic*:, ref =Step \arabic*]
\item Initialize the set $E_A = \emptyset$.
\item For each $(p,q) \in L$, in non-decreasing order, do:
\begin{enumerate}
\item \label{step2a} Let $i$ be the cone of $p$ containing $q$. If $E_A$ has no edges with endpoint $p$ in $N_i^p$, and if $E_A$ has no edges with endpoint $q$ in $N_{i+3}^q$, then we add $(p,q)$ to $E_A$.
\end{enumerate}
\item return $E_A$. \end{enumerate}
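To make Steps 1--3 of the construction concrete, the following Python sketch (again our own illustration, not the authors' implementation) extracts the Delaunay edges with \texttt{scipy}, sorts them by bisector length, and applies the $AddIncident()$ filter; compact versions of the cone and bisector helpers are repeated so that the sketch runs on its own.
\begin{verbatim}
# Sketch of Steps 1-3: Delaunay edges, sorted by bisector length, then the
# AddIncident filter (illustrative code only).
import math
from scipy.spatial import Delaunay

def cone_index(p, q):
    phi = math.atan2(q[0] - p[0], q[1] - p[1]) % (2 * math.pi)
    return int((phi + math.pi / 6) // (math.pi / 3)) % 6

def bisector_dist(p, q):
    phi = math.atan2(q[0] - p[0], q[1] - p[1]) % (2 * math.pi)
    i = int((phi + math.pi / 6) // (math.pi / 3)) % 6
    delta = abs((phi - i * math.pi / 3 + math.pi) % (2 * math.pi) - math.pi)
    return math.dist(p, q) * math.cos(delta)

def delaunay_edges(points):
    edges = set()
    for a, b, c in Delaunay(points).simplices:
        edges.update(tuple(sorted(e)) for e in ((a, b), (b, c), (a, c)))
    return edges

def add_incident(points, L):
    # occupied[(p, i)] records that E_A already has an edge with endpoint p
    # whose other endpoint lies in cone i of p.
    occupied, E_A = {}, []
    for p, q in L:
        i = cone_index(points[p], points[q])   # q lies in C_i^p ...
        j = cone_index(points[q], points[p])   # ... and p lies in C_{i+3}^q.
        if not occupied.get((p, i)) and not occupied.get((q, j)):
            E_A.append((p, q))
            occupied[(p, i)] = occupied[(q, j)] = True
    return E_A

pts = [(0.0, 0.0), (2.1, 0.3), (1.0, 1.9), (0.9, 0.7), (2.4, 2.2)]
L = sorted(delaunay_edges(pts), key=lambda e: bisector_dist(pts[e[0]], pts[e[1]]))
print(add_incident(pts, L))
\end{verbatim}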
\begin{comment} Note that this algorithm adds, for any vertex $p \in P$, at most one edge per cone of $p$. Thus the graph $G=(P,E_A)$ has maximum degree 6.
However, it is not clear that $G=(P,E_A)$ is a spanner. $AddCanonical()$ will add edges to the graph giving us a constant spanning ratio while slightly increasing the degree to 8. \end{comment} The next algorithm requires the following definition:
\begin{definition}\label{pointfive}
Let $Can_i^{(p,r)}$ be the subgraph of $DT(P)$ consisting of the ordered subsequence of canonical edges $(s,t)$ of $N_i^p$ in clockwise order around apex $p$ such that $[ps]\geq[pr] \text{ and }[pt]\geq[pr]$. We call $Can_i^{(p,r)}$ a \emph{canonical subgraph}. A vertex that is the first or last vertex of $Can_i^{(p,r)}$ is called an \emph{end vertex} of $Can_i^{(p,r)}$. A vertex that is not the first or last vertex in $Can_i^{(p,r)}$ is called an \emph{inner vertex} of $Can_i^{(p,r)}$. Vertex $r$ is called the \emph{anchor} of $Can_i^{(p,r)}$. See Fig. \ref{fig-canonical-edges}. \end{definition}
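For illustration, the canonical subgraph can be read off the Delaunay neighbours of $p$ as in the sketch below (our code, in the same hypothetical Python setting as the earlier sketches; it assumes the \texttt{cone\_index}, \texttt{bisector\_dist} and \texttt{delaunay\_edges} helpers defined there). Since a cone is angularly convex, neighbours of $p$ that are consecutive inside the cone are also consecutive in the full clockwise order, so it suffices to keep the consecutive in-cone pairs whose endpoints both have bisector length at least $[pr]$.
\begin{verbatim}
# Sketch: extract Can_i^{(p,r)} from the Delaunay neighbours of p
# (uses cone_index, bisector_dist and delaunay_edges from the sketches above).
def canonical_subgraph(points, dt_edges, p, r):
    i = cone_index(points[p], points[r])
    lo = bisector_dist(points[p], points[r])
    nbrs = {v for u, v in dt_edges if u == p} | {u for u, v in dt_edges if v == p}

    def cw_offset(q):
        # Signed clockwise angle of pq measured from the bisector of C_i^p.
        phi = math.atan2(points[q][0] - points[p][0], points[q][1] - points[p][1])
        return ((phi - i * math.pi / 3 + math.pi) % (2 * math.pi)) - math.pi

    in_cone = sorted((q for q in nbrs if cone_index(points[p], points[q]) == i),
                     key=cw_offset)
    long_enough = lambda q: bisector_dist(points[p], points[q]) >= lo - 1e-12
    return [(s, t) for s, t in zip(in_cone, in_cone[1:])
            if tuple(sorted((s, t))) in dt_edges      # keep Delaunay edges only
            and long_enough(s) and long_enough(t)]
\end{verbatim}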
\begin{tabularx}{\textwidth}{l Y} \textbf{Algorithm:}& \textbf{AddCanonical(p,r)}\\ \textbf{INPUT:}& $(p,r)$, an edge of $E_A$.\\ \textbf{OUTPUT:}& A set of edges that are a subset of the edges of $DT(P)$. All edges generated by calls to $AddCanonical()$ form the set $E_{CAN}$.\\ \end{tabularx}
\begin{enumerate}[labelindent=*,
style=multiline,
leftmargin=*,label=Step \arabic*:, ref =Step \arabic*]
\item Without loss of generality, let $r\in C_0^p$.
\item \label{unos} If there are at least three edges in $Can_0^{(p,r)}$, then for every canonical edge $(s,t)$ in $Can_0^{(p,r)}$ that is not the first or last edge in the ordered subsequence of canonical edges $Can_0^{(p,r)}$, we add $(s,t)$ to $E_{CAN}$.
\item \label{trois} If the anchor $r$ is the first or last vertex in $Can_0^{(p,r)}$, and there is more than one edge in $Can_0^{(p,r)}$, then add the edge of $Can_0^{(p,r)}$ with endpoint $r$ to $E_{CAN}$. See Fig. \ref{step3}.
\item\label{step-duos} Consider the first and last canonical edge in $Can_0^{(p,r)}$. Since the conditions for the first and last canonical edge are symmetric, we only describe how to process the last canonical edge $(y,z)$. There are three possibilities.
\begin{enumerate}
\item \label{n5} If $(y,z)\in N_5^z$ we add $(y,z)$ to $E_{CAN}$. See Fig. \ref{fig-handlelast}.
\item \label{aux-edge} If $(y,z)\in N_4^z$ and $N_4^z$ does not have an edge with endpoint $z$ in $E_A$, then we add $(y,z)$ to $E_{CAN}$. See Fig. \ref{fig-handlelast2}.
\item \label{aux-path} If $(y,z)\in N_4^z$ and there is an edge with endpoint $z$ in $E_A \cap N_4^z\backslash(y,z)$, then there is exactly one canonical edge of $z$ with endpoint $y$ in $N_4^z$. We label this edge $(w,y)$ and add it to $E_{CAN}$. See Fig. \ref{fig-handlelast4}.
\end{enumerate} \end{enumerate}
\begin{figure}
\caption{$AddIncident(L)$ selects edge $(p,r)$ for $E_A$.}
\label{fig-addincident}
\label{fig-addincident2}
\end{figure}
\begin{figure}
\caption{$AddCanonical(p,r)$}
\label{fig-addcanonical}
\label{step3}
\label{fig-handlelast}
\label{fig-handlelast2}
\label{fig-handlelast4}
\end{figure}
\section{D8(P) has Maximum Degree 8}\label{chap-degree}
To prove $D8(P)$ has a maximum degree of 8 we use a simple charging scheme. We charge each edge $(p,q)$ of $D8(P)$ once to $p$ and once to $q$. Thus the total charge on a vertex is equal to the degree of that vertex. To help track the number of charges on a vertex, each charge is associated with a specific cone, which may not be the cone containing the edge. We show that a cone can be charged at most twice, and that for any vertex $p$ of $P$, at most two cones of $p$ can be charged twice, while the remaining cones are charged at most once. Since each vertex has six cones, this gives a total charge of at most $2\cdot 2+4\cdot 1=8$, which yields our maximum degree of 8.
Sections \ref{cone-section}, \ref{neighbourhood-section}, \ref{shared-section}, and \ref{empty-section} identify the different types of cones and their properties. Sections \ref{charge-ea} and \ref{charge-eb} detail the charging scheme. Section \ref{deg-prove} proves the maximum degree of $D8(P)$.
\subsection{Cone Types}\label{cone-section}
\begin{definition}
For an arbitrary cone neighbourhood $N_i^p$ we define the \emph{region of $N_i^p$} as the polygonal region bounded by the canonical edges of $N_i^p$ and the first and last edge of $N_i^p$ with endpoint $p$. See Figure \ref{fig-region2}. \end{definition}
\begin{definition}
Let $(a,b)$ be the first edge and let $(y,z)$ be the last edge in a canonical subgraph $Can_i^{(p,r)}$ (Definition \ref{pointfive}). We define the \emph{region of $Can_i^{(p,r)}$} as the polygonal region bounded by the canonical edges of $N_i^p$ between $(a,b)$ and $(y,z)$ inclusive, and the edges $(p,a)$ and $(p,z)$. See Figure \ref{fig-conetypes}. \end{definition}
We provide the following definitions regarding the placement of cones in regions. Both of the following definitions also extend to regions of a cone neighbourhood.
\begin{figure}
\caption{Regions and cones.}
\label{fig-region2}
\label{fig-region}
\label{fig-conetypes}
\end{figure}
\begin{definition}
Let $B(s,\epsilon)$ be a ball with center $s$ and radius $\epsilon>0$. Consider a cone $C_j^s$ of a point $s$ in $Can_i^{(p,r)}$. If there exists an $\epsilon >0$ such that $B(s,\epsilon) \cap C_j^s$ is inside the region of $Can_i^{(p,r)}$, then we call this an \emph{internal cone} of $Can_i^{(p,r)}$. Alternatively we say $C_j^s$ is in $Can_i^{(p,r)}$. \end{definition}
\begin{definition}
Consider a cone $C_j^s$ of a point $s$ in $Can_i^{(p,r)}$. If for all $\epsilon >0$, $B(s,\epsilon) \cap C_j^s$ is partially but not entirely in the region of $Can_i^{(p,r)}$, then we call this a \emph{boundary cone} of $Can_i^{(p,r)}$. Alternatively we say $C_j^s$ is on the boundary of $Can_i^{(p,r)}$. \end{definition}
\begin{definition}
A cone with vertex $s$ as endpoint is \emph{empty} if no edge of $E_A$ or $E_{CAN}$ incident to $s$ is in the cone. \end{definition}
\subsection{Cones in Neighbourhoods}\label{neighbourhood-section}
When referring to an angle formed by three points, we refer to the smaller of the two angles (that is, the angle that is $< \pi$) unless otherwise stated. When referring to a circle through three points $p_1,p_2,$ and $p_3$, we use the notation $O_{p_1,p_2,p_3}$.
We consider the edge $(p,r)$ of $E_A$, where without loss of generality, $r$ is in $C_0^p$. In this section we show the location of cones in the region of $Can_0^{(p,r)}$, so we may charge edges of $E_{CAN}$ to them. To facilitate this we introduce another variation on the concept of the neighbourhood of a vertex:
\begin{definition}
Consider the cone neighbourhood $N_i^p$ with the vertex set $\{p,q_0,q_1,$ $... ,q_{m-1}\}$, where $\{q_0,q_1, ... ,q_{m-1}\}$ are listed in clockwise order around $p$. A \emph{restricted neighbourhood } $N_p^{(q_j,q_k)}$ is the subgraph of $N_i^p$ induced on the vertex set $\{p,q_j,q_{j+1} ... ,q_k\}, 0 \leq j \leq k \leq m-1$. \end{definition}
Now we illustrate some of the geometric properties of restricted neighbourhoods in $DT(P)$.
\begin{lemma}\label{lemma-inside-edge-in-disk}
Consider the arbitrary restricted neighbourhood $N_p^{(r,q)}$. Each vertex $x \in N_p^{(r,q)}\backslash\{p,r,q\}$ is in the circle $O_{p,r,q}$ through $p$, $r$, and $q$. \end{lemma}
\begin{figure}
\caption{Properties of convex quadrilaterals in $DT(P)$.}
\label{circ1}
\label{circ2}
\label{circ3}
\end{figure}
\begin{proof}
Since $(p,x)$ is an edge in $DT(P)$, we can draw a disk through $p$ and $x$ that is empty of points of $P$. In particular, neither $r$ nor $q$ is in this disk. Hence the sum of the angles $\angle(prx)$ and $\angle(pqx)$ which lie on opposite sides of the same chord is smaller than $\pi$, and the sum of the other two angles $\angle(rxq)$ and $\angle(rpq)$ in the quadrilateral $(prxq)$ is greater than $\pi$. That implies $x$ is inside $O_{p,r,q}$. \end{proof}
\begin{lemma}\label{lemma-wedge-opposite-angles}
Consider the restricted neighbourhood $N_p^{(r,q)}$ in cone $C_i^p$. Let $(p,x)$ be an edge in $N_p^{(r,q)}$ where $x \neq r$ and $x \neq q$. Then angle $\angle(qxr) \geq \pi - \angle(qpr)$. Since the cone angle is $\pi/3$, we have that $\angle(qxr)>2\pi/3$. \end{lemma}
\begin{proof}
We know by Lemma \ref{lemma-inside-edge-in-disk} that $x$ lies inside the circle through $p$, $r$ and $q$, which we label $O_{p,r,q}$. The angle $\angle(qxr)$ is minimized when $x$ is on $O_{p,r,q}$. When $x$ is on $O_{p,r,q}$, $\angle rxq = \pi - \angle(qpr)$, since the two angles lie on the same chord $(r,q)$. Therefore $\angle(rxq) \geq \pi - \angle(qpr)$. Since both $q$ and $r$ are in the same cone $C_i^p$, and the cone angle is $\pi/3$, we have $\angle(qxr)>2\pi/3$. \end{proof}
This leads to the following corollary:
\begin{corollary} \label{cor-empty-cone}
Let $s$ be an inner vertex of $Can_i^{(p,r)}$ that is not the anchor. Then there is at least one empty cone of $s$ in $Can_i^{(p,r)}$. \end{corollary}
\begin{proof}
Since $s$ is not the anchor, any internal cone of $Can_i^{(p,r)}$ on vertex $s$ is empty, and by Lemma \ref{lemma-wedge-opposite-angles}, there is at least one internal cone of $Can_i^{(p,r)}$ on vertex $s$. Therefore there is at least one empty internal cone on $s$ in the region of $Can_i^{(p,r)}$. See Fig. \ref{fig-can-charging}.
\qed \end{proof}
\begin{figure}
\caption{Locating empty cones.}
\label{circ}
\label{fig-can-charging}
\label{fig-r-edges}
\end{figure}
\begin{lemma}\label{r-edges}
Consider the edge $(p,r)$ in $E_A$, and without loss of generality let $r$ be in $C_0^p$. If $r$ is an inner anchor of $Can_0^{(p,r)}$, then cones $C_2^r$ and $C_4^r$ are empty and in the region of $Can_0^{(p,r)}$. If $r$ is an end vertex and not the only vertex in $Can_0^{(p,r)}$, then at least one of $C_2^r$ and $C_4^r$ is empty and in the region of $Can_0^{(p,r)}$. \end{lemma}
\begin{proof}
Since $(p,r)$ is in $C_0^p$, it must also be in $C_3^r$, and thus it is in neither $C_2^r$ nor $C_4^r$.
If $r$ is an inner vertex, assume that $q$, $r$ and $s$ are in consecutive order in $Can_0^{(p,r)}$. Thus $Can_0^{(p,r)}$ contains canonical edges $(q,r)$ and $(r,s)$.
Recall that for every vertex $x$ in $Can_0^{(p,r)}$, $[px]\geq[pr]$. Thus $[ps]\geq[pr]$ and $[pq]\geq[pr]$, which means that $s$ and $q$ are above the horizontal line through $r$ in $C_0^p$. Since $C_2^r$ and $C_4^r$ lie below the horizontal line through $r$, they cannot contain the edges $(q,r)$ and $(r,s)$. Since $\triangle(pqr)$ and $\triangle(prs)$ are triangles in $DT(P)$, $C_2^r$ and $C_4^r$ are empty and inside $Can_0^{(p,r)}$. See Fig. \ref{fig-r-edges}.
Otherwise $r$ is an end vertex. By the same argument as above, but applied to only one side of $(p,r)$, either $C_2^r$ or $C_4^r$ is empty.
\qed \end{proof}
\subsection{Cones in Shared Triangles}\label{shared-section}
We will show the location of uncharged cones in the special case of overlapping regions. A set of regions overlap when at least one triangle of $DT(P)$ is contained in the intersection of all the regions in the set.
\begin{lemma}\label{awesome_lemma}
Consider the triangle $\triangle(pp's)$ in $DT(P)$. Let $p'$ and $s$ be in $C_0^p$, and let $p$ and $s$ be in $C_3^{p'}$. Then $\angle(p'sp)>2\pi/3$. \end{lemma}
\begin{proof}
Without loss of generality, assume that $s$ is left of directed line segment $(p,p')$. Consider the parallelogram formed by $C_0^p\cap C_3^{p'}$. Let $a$ be the left intersection and $b$ be the right intersection of $C_0^p$ and $C_3^{p'}$. Thus $s$ is in $\triangle(app')$. Note that $\angle(pp'a) + \angle(app') = \pi/3$. Thus $\angle(pp's) + \angle(spp') < \pi/3$, which implies that $\angle(p'sp)>2\pi/3$. See Figure \ref{fig-sharedtriangle}. \end{proof}
\begin{lemma}\label{at-most-two}
Let $\triangle(pp's)$ be a triangle in $DT(P)$. Then $\triangle(pp's)$ can belong to cone neighbourhoods of at most two of $p$, $p'$ and $s$. \end{lemma}
\begin{proof}
Suppose, for the sake of contradiction, that triangle $\triangle(pp's)$ is in a cone neighbourhood of $p$, and a cone neighbourhood of $p'$, and a cone neighbourhood of $s$. Without loss of generality, let $p'$ and $s$ be in $C_0^p$. This means that $p$ and $s$ must be in $C_3^{p'}$. By Lemma \ref{awesome_lemma}, $\angle(psp')>2\pi/3$. Therefore $p$ and $p'$ cannot be in the same cone neighbourhood of $s$. \end{proof}
\begin{corollary}\label{cor-shared}
A triangle can be shared by at most 2 cone neighbourhoods. \end{corollary}
\begin{proof}
Follows from Lemma \ref{at-most-two}. \end{proof}
This leads to the following definition:
\begin{definition}
If $\triangle(pp's)$ occurs in exactly \emph{two} cone neighbourhoods of $p$, $p'$ and $s$, then we refer to it as a \emph{shared triangle}. If $\triangle(pp's)$ is in cone neighbourhoods of $p$ and $p'$, then $(p,p')$ is referred to as the \emph{base} of the shared triangle. \end{definition}
\begin{figure}
\caption{$q \in C_0^p \cap C_3^{p'}$ violates the empty circle property of Delaunay triangulations.}
\label{fig-sharedtriangle}
\end{figure}
\begin{corollary}\label{shared-empty-cones}
In a shared triangle $\triangle(pp's)$ with base $(p,p')$, $s$ has two empty cones internal to $\triangle(pp's)$. \end{corollary}
\begin{proof}
Without loss of generality, assume that $p' \in C_0^p$ and $p \in C_3^{p'}$. Then $p \in C_3^s$ and $p' \in C_0^s$. Then either $C_2^s$ and $C_1^s$, or $C_5^s$ and $C_4^s$ are internal to $\triangle(pp's)$ and thus cannot contain any edges with endpoint $s$. \end{proof}
We show two cone neighbourhoods can share at most one triangle:
\begin{lemma}\label{lemma-two-shared-triangles}
Each Delaunay edge is the base of at most 1 shared triangle. \end{lemma}
\begin{proof}
By contradiction.
Consider the shared triangle $\triangle(pp's)$ with base $(p,p')$. Without loss of generality, assume $p'$ is in $C_0^p$, and $p$ is in $C_3^{p'}$. Since $(p,p')$ is an edge in exactly two triangles, let $\triangle(pp'q)$ be the other triangle with edge $(p,p')$, and assume that $q$ is in both $C_0^p$ and $C_3^{p'}$.
By Lemma \ref{awesome_lemma} both angles $\angle(p'sp)$ and $\angle(pqp')$ are greater than $2\pi/3$. Thus their sum is greater than $4\pi/3$. But by the empty circle property of Delaunay triangulations, $\angle(p'sp) + \angle(pqp')$ must be less than $\pi$, which is a contradiction. \end{proof}
\begin{corollary}
Two cone neighbourhoods can share at most one triangle in $DT(P)$. \end{corollary}
\subsection{Empty Cones}\label{empty-section}
\begin{lemma}\label{lemma-empty-cone}
Consider the edge $(p,r)$ in $E_A$. Without loss of generality let $r$ be in $C_0^p$. Each inner vertex $s$ of $Can_0^{(p,r)}$ that is not the anchor has at least one unique empty cone in the region of $Can_0^{(p,r)}$. \end{lemma}
\begin{proof}
If $s$ is not part of a shared triangle, we know by Corollary \ref{cor-empty-cone} that $s$ has an empty cone internal to $Can_0^{(p,r)}$.
Consider the shared triangle $\triangle(pp's)$, and without loss of generality, let $p'$ be in $C_0^p$. Assume that there is an edge $(p,r)$ of $E_A$ in $C_0^p$, and an edge $(p',r')$ of $E_A$ in $C_3^{p'}$. Thus both sets $Can_0^{(p,r)}$ and $Can_3^{(p',r')}$ are well-defined.
By Corollary \ref{shared-empty-cones} there are two empty cones of $s$ internal to $\triangle(pp's)$. The empty cone adjacent to $(p',s)$ is the empty cone of $s$ in the region of $Can_0^{(p,r)}$, and the empty cone adjacent to $(p,s)$ is the empty cone of $s$ in the region of $Can_3^{(p',r')}$.
Thus any inner vertex $s$ of an arbitrary canonical subgraph $Can_0^{(p,r)}$ that is not the anchor, has a unique empty cone that is in the region of $Can_0^{(p,r)}$.\\ \end{proof}
\begin{lemma}\label{lemma-empty-cone-end}
Consider the edge $(p,r)$ in $E_A$. Without loss of generality let $r$ be in $C_0^p$. Let $z \neq r$ be an end vertex in $Can_0^{(p,r)}$. By symmetry, let $z$ be the last vertex. Let $y$ be the neighbour of $z$ in $Can_0^{(p,r)}$. If $y$ is in $C_5^z$, then $C_4^z$ is a unique empty cone internal to the region of $Can_0^{(p,r)}$. \end{lemma}
\begin{proof}
Triangle $\triangle(pyz)$ is a triangle in $DT(P)$. Since $(p,z)$ is in $C_3^z$ and $(y,z)$ is in $C_5^z$, $C_4^z$ will have no edges in $DT(P)$ with endpoint $z$. Since both $E_A$ and $E_{CAN}$ are subsets of the edges of $DT(P)$, $C_4^z$ will not contain any edges of $E_A$ and $E_{CAN}$ with endpoint $z$, and thus is empty.
We prove $C_4^z$ is unique to $Can_0^{(p,r)}$ by contradiction. Since $C_4^z$ is inside a triangle of $DT(P)$, it cannot be a boundary cone, thus it must be inside of shared triangle $\triangle(pyz)$. Corollary \ref{shared-empty-cones} states that $\triangle(pyz)$ must have two empty cones internal to $\triangle(pyz)$. However, since $(p,z)$ is in $C_3^z$, and $(y,z)$ is in $C_5^z$, only $C_4^z$ is an empty internal cone of $\triangle(pyz)$, which is a contradiction. \end{proof}
\begin{lemma}\label{lemma-empty-cone-anchor}
Consider the edge $(p,r)$ in $E_A$, and without loss of generality let $r$ be in $C_0^p$ (thus $r$ is the anchor of $Can_0^{(p,r)}$). The empty cones of $r$ internal to $Can_0^{(p,r)}$ are unique to $Can_0^{(p,r)}$. \end{lemma}
\begin{proof}
By Lemma \ref{r-edges}, $C_2^r$ and $C_4^r$ are (possibly) empty cones inside $Can_0^{(p,r)}$. Since the cases are symmetric, we consider $C_2^r$. Assume $s$ is the neighbour of $r$ in $Can_0^{(p,r)}$ such that $C_2^r$ is inside $\triangle(rsp)$.
If $(r,s)$ is in $C_1^r$, then $C_2^r$ is the only empty cone inside $\triangle(rsp)$. By Corollary \ref{shared-empty-cones} a shared triangle must have two empty cones, so $\triangle(rsp)$ cannot be a shared triangle, and thus $C_2^r$ must be unique to $Can_0^{(p,r)}$.
Otherwise, if $(r,s)$ is in $C_0^r$, then $\triangle(rsp)$ is a shared triangle. Since both $C_2^r$ and $C_1^r$ are empty, we designate $C_2^r$ as belonging to $Can_0^{(p,r)}$, and $C_1^r$ as belonging to $Can_3^{(s,\cdot)}$. Thus $C_2^r$ is unique to $Can_0^{(p,r)}$. \end{proof}
\subsection{Charging Edges in $E_A$}\label{charge-ea}
The charging scheme for the edges of $E_A$ is as follows. Consider an edge $(p,r)$ of $E_A$, where without loss of generality $r$ is in $C_0^p$ and $p$ is in $C_3^r$. An edge $(p,r)$ of $E_A$ charges $C_0^p$ once and $C_3^r$ once.
\begin{lemma}\label{lemma-deg6}
Each cone of an arbitrary vertex $p$ of the graph $D8(P)$ is charged at most once by an edge of $E_A$ (thus yielding a maximum degree for the graph $G=(P,E_A)$ of 6). \end{lemma}
\subsection{Charging Edges in $E_{CAN}$}\label{charge-eb}
Let $(p,r)$ be an edge of $E_A$, and without loss of generality let $r\in C_0^p$. Let $Can_0^{(p,r)}$ be the subgraph consisting of the ordered subsequence of canonical edges $(s,t)$ of $N_0^p$ in clockwise order around apex $p$ such that $[ps]\geq[pr] \text{ and }[pt]\geq[pr]$. We call $Can_0^{(p,r)}$ a canonical subgraph.
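To make this definition concrete, the following sketch (illustrative only, not part of the formal construction) extracts $Can_0^{(p,r)}$ from the clockwise-ordered Delaunay neighbours of $p$ inside $C_0^p$. The cone-indexing convention used here (cone $0$ centred on the positive $y$-axis, cones numbered clockwise) and all function names are assumptions made for the example and need not match the paper's conventions.
\begin{verbatim}
# Illustrative sketch only.  Assumes cone 0 of an apex is centred on the
# positive y-axis and that cones are numbered clockwise; the paper's own
# convention may differ.
import math

def cone_index(p, q, k=6):
    # Index of the cone of apex p that contains q, measured clockwise
    # from the positive y-axis, with k cones of angle 2*pi/k each.
    ang = math.atan2(q[0] - p[0], q[1] - p[1]) % (2 * math.pi)
    return int(ang // (2 * math.pi / k))

def canonical_subgraph(p, r, nbrs_cone0):
    # nbrs_cone0: the Delaunay neighbours of p lying in C_0^p, in
    # clockwise order around p.  Canonical edges join consecutive
    # neighbours; keep those whose endpoints are both at least as far
    # from p as the anchor r is.
    dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return [(s, t) for s, t in zip(nbrs_cone0, nbrs_cone0[1:])
            if dist(p, s) >= dist(p, r) and dist(p, t) >= dist(p, r)]
\end{verbatim}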
For edges in $E_{CAN}$ we consider an arbitrary canonical subgraph $Can_i^{(p,r)}$, and without loss of generality let $i=0$. We note that there are three types of vertices in $Can_0^{(p,r)}$: anchor, inner and end vertices. Thus any edge added to $E_{CAN}$ from $Can_0^{(p,r)}$ will be charged to an inner, end or anchor vertex (refer to Fig. \ref{fig-canonical-edges}). We outline the charging scheme below by referencing the steps of $AddCanonical(p,r)$ where edges were added to $E_{CAN}$; an illustrative sketch of the resulting bookkeeping follows the list.
\begin{enumerate}[labelindent=*,
style=multiline,
leftmargin=*,label=Step \arabic*:, ref = Step \arabic*]
\item Without loss of generality, let $r\in C_0^p$.
\item If the anchor $r$ is the first or last vertex in $Can_0^{(p,r)}$, and there is more than one edge in $Can_0^{(p,r)}$, then add the edge of $Can_0^{(p,r)}$ with endpoint $r$ to $E_{CAN}$. See Fig. \ref{step3}.
\item \label{unos-c} \emph{If there are at least three edges in $Can_0^{(p,r)}$, then for every canonical edge $(s,t)$ in $Can_0^{(p,r)}$ that is not the first or last edge in the ordered subsequence of canonical edges $Can_0^{(p,r)}$, we add $(s,t)$ to $E_{CAN}$.}
The edge $(s,t)$ is charged once to $s$ and once to $t$. Since the charging scheme is the same for both $s$ and $t$, without loss of generality we only describe how to charge $s$.
\textbf{Charge vertex $s$:} (Steps 1 and 2)
\begin{enumerate}
\item \label{charge1a} If $s$ is the anchor (thus $s=r$), then by Lemma \ref{r-edges}, $C_2^r$ and $C_4^r$ are empty cones inside $Can_0^{(p,r)}$. If $t$ is left of directed line segment $pr$, charge $(r,t)$ to $C_4^r$. If $t$ is right of $pr$, charge $(r,t)$ to $C_2^r$. See Fig. \ref{ttq4}.
\item \label{charge1b} If $s\neq r$ then by Lemma \ref{lemma-empty-cone}, $s$ has an empty cone $C_j^s$ inside $Can_0^{(p,r)}$. Charge $(s,t)$ once to $C_j^s$. See Figs. \ref{ttq1}, \ref{ttq2}, \ref{ttq3}.
\end{enumerate}
\item\label{step-duos-c} \emph{Consider the first and last canonical edge in $Can_0^{(p,r)}$. Since the conditions for the first and last canonical edge are symmetric, we only describe how to process the last canonical edge $(y,z)$. There are three possibilities.}
\begin{enumerate}[label = (\alph*), ref = (\alph*)]
\item \label{n5-c} \emph{If $(y,z)\in C_5^z$, add $(y,z)$ to $E_{CAN}$. See Fig. \ref{fig-handlelast}.}
\item \label{aux-edge-c}\emph{If $(y,z)\in N_4^z$ and $N_4^z$ does not have an edge with endpoint $z$ in $E_A$, then we add $(y,z)$ to $E_{CAN}$. See Fig. \ref{fig-handlelast2}}
\textbf{Charge vertex $y$:}
\setcounter{MyCounter}{1}
\begin{enumerate}[label={\roman{MyCounter}}]
\item \label{charge2a} If $y$ is the anchor, then $C_2^y$ is empty and inside $Can_0^{(p,r)}$ by Lemma \ref{r-edges}. Charge $(y,z)$ to $C_2^y$. Fig. \ref{ttq4}.
\addtocounter{MyCounter}{1}
\item \label{charge2b}Otherwise $y$ is not the first or last vertex in $Can_0^{(p,r)}$, and by Corollary \ref{cor-empty-cone} has an empty cone $C_j^y$ inside $Can_0^{(p,r)}$. Charge $(y,z)$ to $C_j^y$. Figs. \ref{ttq1}, \ref{ttq2}, \ref{ttq3}.
\end{enumerate}
\textbf{Charge vertex $z$:}
\begin{enumerate}[label={\roman{MyCounter}}]
\addtocounter{MyCounter}{1}
\item \label{charge2c}\ref{n5}: $(y,z)$ is in $C_5^z$. By Lemma \ref{lemma-empty-cone-end} $C_4^z$ is empty and inside $Can_0^{(p,r)}$. Charge $(y,z)$ to $C_4^z$. Fig. \ref{aux-edge3}.
\addtocounter{MyCounter}{1}
\item \label{charge2d}\ref{aux-edge}: $(y,z)$ is in $C_4^z$, and $C_4^z$ does not contain an edge of $E_A$ with endpoint $z$. Note $C_4^z$ is a boundary cone of $Can_0^{(p,r)}$. Charge $(y,z)$ to $C_4^z$. Fig. \ref{aux-edge2}.
\end{enumerate}
\item \label{aux-path-c} \emph{If $(y,z)\in C_4^z$ and there is an edge $(u,z), u \neq y$ of $E_A$ in $C_4^z$, then there is one canonical edge of $z$ with endpoint $y$ in $C_4^z$. Label the edge $(w,y)$ and add it to $E_{CAN}$. See Fig. \ref{fig-handlelast4}.}
\textbf{Charge vertex $y$:}
\setcounter{MyCounter}{1}
\begin{enumerate}[label={\roman{MyCounter}}]
\item \label{charge3a}If $y=r$, then $C_2^y$ is empty and inside $Can_0^{(p,r)}$ by Lemma \ref{r-edges}. Charge $(w,y)$ to $C_2^y$. Fig. \ref{ttq4}.
\addtocounter{MyCounter}{1}
\item \label{charge3b}Otherwise $y$ is not the first or last vertex in $Can_0^{(p,r)}$, and by Corollary \ref{cor-empty-cone} has an empty cone $C_j^y$ inside $Can_0^{(p,r)}$. Charge $(w,y)$ to $C_j^y$. Fig. \ref{can-edge2}.
\end{enumerate}
\textbf{Charge vertex $w$:}
\begin{enumerate}[label={\roman{MyCounter}}]
\addtocounter{MyCounter}{1}
\item \label{charge3c} If $w=u$ ($(z,u)$ in $E_A$), then $C_2^w$ is empty and inside $Can_4^{(z,u)}$ by Lemma \ref{empty-cone}. Charge $(w,y)$ to $C_2^w$. Fig. \ref{ttq4}.
\addtocounter{MyCounter}{1}
\item \label{charge3d}If $w \neq u$, then $w$ is not the first or last vertex in $Can_4^{(z,u)}$, and by Corollary \ref{cor-empty-cone} has an empty cone $C_j^w$ inside $Can_4^{(z,u)}$. Charge $(w,y)$ to $C_j^w$. Fig. \ref{can-edge3}.
\end{enumerate}
\end{enumerate}
\end{enumerate}
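Before turning to the supporting lemma, we note that the scheme above is pure bookkeeping: every edge of $D8(P)$ is charged exactly once at each endpoint, to a cone of that endpoint selected by the rules, so the degree of a vertex equals its total charge. The sketch below is illustrative only; the function that selects the charged cone is left abstract, since implementing it would require the empty-cone machinery developed above.
\begin{verbatim}
# Illustrative tally only.  `charged_cone(edge, v)` is assumed to return
# the index of the cone of endpoint v that the rules above charge for
# `edge`; it is not implemented here.
from collections import Counter

def tally_charges(edges, charged_cone):
    per_cone = Counter()    # (vertex, cone index) -> number of charges
    degree = Counter()      # vertex -> degree in D8(P)
    for e in edges:
        for v in e:         # each edge is charged once at each endpoint
            per_cone[(v, charged_cone(e, v))] += 1
            degree[v] += 1
    return per_cone, degree

def check_bounds(edges, charged_cone):
    per_cone, degree = tally_charges(edges, charged_cone)
    assert all(c <= 2 for c in per_cone.values())   # at most 2 per cone
    assert all(d <= 8 for d in degree.values())     # degree bound proved below
\end{verbatim}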
\ref{step-duos-c}\ref{aux-path-c}\ref{charge3c} makes use of the following lemma:\\
\begin{lemma}\label{empty-cone}
Assume that on a call to $AddCanonical(p,r)$, where $(p,r)$ is in $C_0^p$, we add edge $(w,y)$ to $E_{CAN}$ in \ref{aux-path}. Let $(y,z)$ be the last edge in $Can_0^{(p,r)}$, and assume that $(w,z)$ is in $E_A$ (that is, $w=u$ in the notation of \ref{aux-path}). Then $C_2^w$ is empty and inside $Can_4^{(z,w)}$. \end{lemma}
\begin{proof}
To prove this we shall establish that $[yz]\geq[wz]$. This, together with Lemma \ref{r-edges}, implies that $C_2^w$ is empty and inside $Can_4^{(z,w)}$.
We prove this by contradiction: assume that $[wz]>[yz]$. See Fig. \ref{wyincan}. This means that $AddIncident(L)$ examined $(y,z)$ before $(w,z)$, and thus $C_4^z \cap E_A$ was empty of edges with endpoint $z$ when $(y,z)$ was examined by $AddIncident(L)$. Since $(y,z)$ was not added to $E_A$, $y$ must have had an edge of $E_A$ in $C_1^y$ with endpoint $y$ that was shorter than $(y,z)$.
Since $\triangle(pyz)$ is a triangle in $DT(P)$, and $p \in C_3^y$, there cannot be an edge with endpoint $y$ in $C_1^y$ clockwise from $(y,z)$. In the counter-clockwise direction from $(y,z)$, we have $\triangle(ywz) \in DT(P)$. However, since $[wz]>[yz]$, $(w,y)$ cannot be in $C_1^y$. Thus $C_1^y$ contained no edge of $E_A$ with endpoint $y$ when $(y,z)$ was examined by $AddIncident(L)$. This means that if $[wz]>[yz]$, then $(y,z)$ would have been added to $E_A$ by $AddIncident(L)$. But we know $(w,z)$ is in $E_A$, therefore it must be that $[yz]\geq[wz]$, which implies that $C_2^w$ is empty and by Lemma \ref{r-edges} is inside $Can_4^{(z,w)}$.
\qed \end{proof}
\begin{comment} Additionally we note the following concerning the charging scheme:
\begin{lemma}\label{unique-cones} Let $Can_i^{(p,r)}$ be an arbitrary canonical subgraph referenced in the charging scheme. Let $C_j^s$ be the cone charged. Note that $C_j^s$ is designated as an internal or boundary cone to $Can_i^{(p,r)}$. Then, in the context of the charging scheme, $C_j^s$ is unique to $Can_i^{(p,r)}$. That is, $C_i^s$ will not be charged as an internal or boundary cone of $Can_k^{(p',r')} \neq Can_i^{(p,r)}$. \end{lemma}
\begin{proof} If $s$ is an inner vertex of $Can_i^{(p,r)}$, then Lemma \ref{lemma-empty-cone} $C_j^s$ is unique to $Can_i^{(p,r)}$.
Otherwise assume that $Can_k^{(p',r')}$ and $Can_i^{(p,r)}$, $Can_k^{(p',r')} \neq Can_i^{(p,r)}$ both reference $C_j^s$ in the charging scheme. That implies that either $Can_k^{(p',r')}$ and $Can_i^{(p,r)}$ overlap or $C_j^s$ is a boundary cone to both.
If $Can_k^{(p',r')}$ and $Can_i^{(p,r)}$ overlap, they must do so in a shared triangle. Then by Corollary \ref{shared-empty-cones} there are two empty cones in the shared triangle, one of which must be $Can_j^p$.
\end{proof} \end{comment} \begin{figure}
\caption{The case if $[wz]>[yz]$.}
\label{wyincan}
\end{figure}
\begin{figure}
\caption{Charging scheme for edges of $E_{CAN}$.}
\label{ttq1}
\label{ttq5}
\label{ttq2}
\label{ttq3}
\label{ttq4}
\label{aux-edge2}
\label{aux-edge3}
\label{can-edge2}
\label{can-edge3}
\end{figure}
\subsection{Proving the Degree of D8(P)}\label{deg-prove}
The charging argument of the previous section establishes where charges are made. In this section we show a limit to how many edges can be charged to the different cones.
All edges added to $E_{CAN}$ are charged to internal cones, with the exception of the edge that is added in $AddCanonical(p,\cdot)$ \ref{aux-edge} to $C_4^z$, which is charged to a boundary cone. Since $C_4^z$ is on the boundary of $Can_0^{(p,r)}$, it may also be the boundary cone of a different cone neighbourhood.
\begin{lemma}\label{no-internal}
The boundary cone $C_4^z$ of $Can_0^{(p,r)}$ charged in $AddCanonical(p,r)$ \ref{aux-edge} cannot be the internal cone of a different canonical subgraph. \end{lemma}
\begin{proof}
Suppose $C_4^z$ is inside a canonical subgraph; then $y$ must be the apex of that canonical subgraph. That implies that both shared neighbours of $z$ and $y$ must be in $C_1^y$. But the shared neighbour $p$ is in $C_3^y$, thus $C_4^z$ cannot be inside a canonical subgraph.
\qed \end{proof}
This implies that $C_4^z$ may only be shared with a different canonical subgraph as a boundary cone. Thus the only other edge of $E_{CAN}$ that can be charged to $C_4^z$ must be added in some call to $AddCanonical(\cdot,\cdot)$ \ref{aux-edge}. We prove here that this is impossible.
\begin{lemma}\label{lemma-double-canonical}
Consider the edge $(p,r)$ of $E_A$ in $C_i^p$, and without loss of generality, let $i=0$. Let $(y,z)$ be the last edge in $Can_0^{(p,r)}$, and let $z$ be the last vertex in $Can_0^{(p,r)}$. Assume that $(y,z)$ was added to $E_{CAN}$ in a call to $AddCanonical(p, r)$ \ref{aux-edge}, and thus by Charge \ref{charge2d} is charged to the cone $C_4^z$. Then $(y,z)$ is the only edge in $D8(P)$ charged to $C_4^z$. \end{lemma}
\begin{proof}
Edge $(y,z)$ is added to $E_{CAN}$ in a call to $AddCanonical(p, r)$ \ref{aux-edge} only if there is no edge of $E_A$ with endpoint $z$ in $C_4^z$. Thus $C_4^z$ is not charged by an edge of $E_A$.
Cone $C_4^z$ is a boundary cone of $Can_0^{(p,r)}$. By Lemma \ref{no-internal}, $C_4^z$ cannot be an internal cone of another canonical subgraph. Thus $C_4^z$ can only be a boundary cone of any canonical subgraph. Since $AddCanonical(\cdot, \cdot)$ \ref{aux-edge} is the only call that adds an edge to $E_{CAN}$ that is charged to a boundary cone, only another call to $AddCanonical(\cdot, \cdot)$ \ref{aux-edge} can charge an additional edge to $C_4^z$.
Assume we have edges $(y,z)$ and $(y',z)$ in $Can_i^{(p,r)}$ and $Can_j^{(p',r')}$ respectively. Without loss of generality, assume that $z$ is the first vertex in $Can_j^{(p',r')}$ and the last vertex in $Can_i^{(p,r)}$, and assume $(y,z)$ and $(y',z)$ occupy the same cone $C_k^z$. For both $(y,z)$ and $(y',z)$ to be added in $AddCanonical(\cdot, \cdot)$ \ref{aux-edge}, it must be that $k = i-2 = j+2$. Without loss of generality let $i=0$, $k=4$, and $j=2$, so that $z$ is in $C_0^p$ and $C_2^{p'}$.
We know that both $\triangle(pyz)$ and $\triangle(p'y'z)$ are triangles in $DT(P)$. Since $(y,z) \in C_1^y$, and $(y,p) \in C_3^y$, there is no edge in $C_1^y$ clockwise from $(y,z)$. Symmetrically, there is no edge in $C_1^{y'}$ counter-clockwise from $(y',z)$. Thus $y$ must have a neighbour closer than $z$ in $C_1^y$ counter-clockwise from $(y,z)$, and $y'$ must have a neighbour closer than $z$ in $C_1^{y'}$ clockwise from $(y',z)$. See Figure \ref{fig-one-canonical-1}.
\begin{figure}
\caption{Lemma \ref{lemma-double-canonical}}
\label{fig-one-canonical-1}
\label{fig-one-canonical-2}
\end{figure}
We consider the shorter of $(y,z)$ and $(y',z)$, ties broken arbitrarily. Without loss of generality we will assume $[yz]<[y'z]$. We therefore know that $y' \notin C_1^y$. So there must be a vertex $t$ in $C_1^y$ that is a neighbour of $y$ and closer to $y$ than $z$ counter-clockwise from $(y,z)$. Since $z \in C_1^y$ and $y'$ is not, the counter-clockwise cone boundary of $C_1^y$ must intersect $(y',z)$ at a point which we will call $x$. Therefore $t$ must be in triangle $\triangle(xyz)$. See Figure \ref{fig-one-canonical-2}.
Within $\triangle(xyz)$ we take the closest vertex to $z$ and call it $u$. $(u,z)$ must be an edge in $DT(P)$, and $C_1^u$ is bounded on one side by $(y,z)$, and bounded on the other side by $(y',z)$, and thus $z$ is the closest point to $u$ in $C_1^u$. This means that $(u,z)$ would have been added to $E_A$ in $AddIncident()$, and hence $C_4^z$ has an edge $(u,z)\in E_A$. Since there is an edge of $E_A$ in $C_4^z$, neither $(y,z)$ nor $(y',z)$ would have been added to $E_{CAN}$ in calls to $AddCanonical(\cdot, \cdot)$ \ref{aux-edge}, and neither would be charged to $C_4^z$.
\end{proof}
This leads to the following corollary:
\begin{corollary}
Assume an edge $(y,z)$ is added to $E_{CAN}$ in $AddCanonical(p,r)$ \ref{aux-edge}, and charged to a boundary cone $C_4^z$. Then of all the edges in $D8(P)$, only $(y,z)$ is charged to $C_4^z$. \end{corollary}
The shared triangle is the only scenario where the internal cones of two separate cone neighbourhoods are adjacent on the same vertex.
Consider a shared triangle $\triangle(pp's)$ with base $(p,p')$, and assume that $p'$ is in $C_0^p$, $(p,r)\in E_A$ is in $C_0^p$, and $(p',r')\in E_A$ is in $C_3^{p'}$. Vertex $s$ has adjacent cones inside $Can_0^{(p,r)}$ and $Can_3^{(p',r')}$. We prove a limit on the number of canonical edges of $p$ and $p'$ that were added to $E_{CAN}$ and charged to cones of $s$ inside $Can_0^{(p,r)}$ and $Can_3^{(p',r')}$.
\begin{lemma}\label{charging-shared-triangles2}
If $(s,p')$ was added to $E_{CAN}$ and charged to the empty cone of $s$ inside $Can_0^{(p,r)}$, then $(s,p)$ will not be charged to the empty cone of $s$ inside $Can_3^{(p',r')}$. \end{lemma}
\begin{proof}
Assume that $(p',s)$ was added to $E_{CAN}$ by a call to $AddCanonical(p,r)$. That implies that $(p',s)$ is not the first or last edge of $Can_0^{(p,r)}$. Thus we know by Lemma \ref{lemma-two-shared-triangles} that $(p,s)$ must be the last edge in $Can_3^{(p',r')}$, which implies that it is not added by $AddCanonical(p',r')$.
Otherwise assume that $(p,p')$ is a canonical edge of $q$, and $(p,s)$ was added to $E_{CAN}$ on a call to $AddCanonical(q,\cdot)$ in \ref{aux-path}. This implies that $(p,p')$ is in $C_5^q$. See Figure \ref{fig-sharedtriangle3}. There are two possible ways to add $(p',s)$ so that it is charged to the cone of $s$ inside $Can_3^{(p',r')}$. We show that neither occurs:
\begin{enumerate}
\item $AddCanonical(q,\cdot)$ adds $(p',s)$ to $E_{CAN}$ in \ref{aux-path}. This implies that $(p,p')$ is in $C_4^q$. See Figure \ref{fig-sharedtriangle3.1}. However, $(p,s)$ was added in \ref{aux-path}, which means that $(p,p')$ is in $C_5^q$, which is a contradiction. Thus both edges cannot be added by calls to $AddCanonical(q,\cdot)$, \ref{aux-path}.
\item $AddCanonical(p,r)$ adds $(p',s)$ to $E_{CAN}$. The shared neighbour $q$ of $p$ and $p'$ is not in $C_0^p$, and thus $(p',s)$ is the last canonical edge in $Can_0^{(p,r)}$. Thus $(p',s)$ is not added to $E_{CAN}$ by a call to $AddCanonical(p,r)$ (by omission, \ref{step-duos}).
\end{enumerate} \end{proof}
\begin{figure}
\caption{The limit on edges added to $E_{CAN}$ in a shared triangle.}
\label{fig-sharedtriangle3}
\label{fig-sharedtriangle3.1}
\end{figure}
\begin{corollary}\label{at-most-three}
If the empty cone of $s$ inside $Can_0^{(p,r)}$ is charged twice by edges of $Can_0^{(p,r)}$, then the empty cone of $s$ inside $Can_3^{(p',r')}$ is charged at most once by edges of $Can_3^{(p',r')}$. \end{corollary}
\begin{lemma} \label{unique-cones}
All cones charged in the charging scheme are unique to their referenced canonical subgraph. \end{lemma}
\begin{proof}
We note that all the edges added here are from a canonical subgraph, thus all the charges are to vertices of a canonical subgraph, and thus must be to an inner vertex, an anchor, or an end vertex. By Lemma \ref{lemma-empty-cone} each inner vertex has an empty cone unique to its canonical subgraph. By Lemma \ref{lemma-empty-cone-end}, if the end vertex has an empty cone it is unique to its canonical subgraph, and by Lemma \ref{lemma-empty-cone-anchor}, the two possible empty cones on an anchor are unique to its canonical subgraph.
If an edge is added to $E_{CAN}$ in $AddCanonical(p, r)$ \ref{aux-edge}, it is charged to the boundary cone that it occupies. By Lemma \ref{lemma-double-canonical} it is the only edge charged to that cone, thus we consider it unique to its canonical subgraph. \end{proof}
\begin{lemma} \label{single-charged}
Cones of an end vertex or anchor of a canonical subgraph are charged at most once by edges of $E_{CAN}$. \end{lemma}
\begin{proof}
Lemma \ref{unique-cones} proves that all cones charged in the charging scheme are unique (to the referenced canonical subgraph). Since cones of end vertices or anchors are charged at most once in the charging scheme, this implies the lemma. \end{proof}
\begin{lemma} \label{double-charged}
Cones on an inner vertex of a canonical subgraph are charged at most twice by edges of $E_{CAN}$. \end{lemma}
\begin{proof}
Lemma \ref{unique-cones} proves that all cones charged in the charging scheme are unique (to the referenced canonical subgraph). Since cones of inner vertices are charged at most twice in the charging scheme, this implies the lemma. \qed \end{proof}
\begin{lemma}\label{no-edge-in-cone}
The edges of $E_A$ and $E_{CAN}$ are never charged to the same cone. \end{lemma}
\begin{proof}
The edges of $E_A$ are charged directly to the cone they occupy on each endpoint. We know from the charging scheme that the edges of $E_{CAN}$ are charged to either empty cones, or to a cone that does not contain an edge of $E_A$. Thus the edges of $E_{CAN}$ and $E_A$ are never charged to the same cone. \qed \end{proof}
\begin{lemma}\label{bounding-cones}
Consider a cone $C_i^s$ of a vertex $s$ in $D8(P)$ that is charged twice by edges of $E_{CAN}$. Then the two neighbouring cones $C_{i-1}^s$ and $C_{i+1}^s$ are charged at most once by edges of $D8(P)$. \end{lemma}
\begin{proof}
Lemmas \ref{lemma-deg6}, \ref{single-charged}, \ref{double-charged}, and \ref{no-edge-in-cone} state that only a cone on an inner vertex may be double charged.
Each cone $C_{i-1}^s$ and $C_{i+1}^s$ is either an empty internal cone of $Can_i^{(p,r)}$, or a boundary cone containing a canonical edge of $Can_i^{(p,r)}$ with endpoint $s$. We will consider $C_{i+1}^s$ since the other cases are symmetric.
If $C_{i+1}^s$ is an empty internal cone of $Can_i^{(p,r)}$, then it is only charged for an edge if $s$ is on a shared triangle $\triangle(pp's)$ and $s$ is not on the base. In this case $C_{i+1}^s$ is charged for at most one edge of $E_{CAN}$ by Corollary \ref{at-most-three}.
Otherwise $C_{i+1}^s$ contains a canonical edge in $Can_i^{(p,r)}$. By our charging scheme and Corollary \ref{at-most-three} we know only empty cones are double charged, and by Lemma \ref{no-edge-in-cone} no cone is charged for both an edge of $E_A$ and an edge of $E_{CAN}$. Thus $C_{i+1}^s$ is either charged for an edge of $E_A$, an edge of $E_{CAN}$, or it is not charged. \qed \end{proof} \begin{theorem}\label{lemma-d8}
The maximum degree of $D8(P)$ is at most 8. \end{theorem}
\begin{proof}
Each edge $(p,r)$ of $E_A$ is charged once to the cone of $p$ containing $r$ and once to the cone of $r$ containing $p$. By Lemma \ref{lemma-deg6}, no cone is charged more than once by edges of $E_A$.
No edge of $E_{CAN}$ is charged to a cone that is charged by an edge of $E_A$ by Lemma \ref{no-edge-in-cone}.
By Lemma \ref{bounding-cones}, if a cone of a vertex $s$ of $D8(P)$ is charged twice, then its neighbouring cones are charged at most once. This implies that there are at most 3 double charged cones on any vertex $s$ in $D8(P)$.
Assume that we have a vertex $s$ with 3 cones that have been charged twice. A cone of $s$ that is charged twice is an internal cone of some cone neighbourhood $N_i^p$ by our charging argument. Thus $s$ is an endpoint of two canonical edges $(q,s)$ and $(s,t)$ in $N_i^p$. Note that $\angle(qst)>2\pi/3$ by Lemma \ref{lemma-wedge-opposite-angles}, and this angle contains the cone of $s$ that is charged twice. Thus to have 3 cones charged twice, the total angle around $s$ would need to be $>2\pi$, which is impossible. Thus there are at most two double charged cones on $s$, which gives us a maximum degree of 8. See Fig. \ref{constr} for an example of a degree 8 vertex.
\qed \end{proof}
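To restate the final count explicitly (this is only a summary of the argument above, not an additional claim): every edge of $D8(P)$ is charged exactly once at each of its endpoints, so the degree of a vertex $s$ equals its total charge over its six cones. At most two of these cones are charged twice and the rest at most once, giving
\begin{align*}
\deg_{D8(P)}(s) \;=\; \sum_{i=0}^{5}\mathrm{charge}(C_i^s) \;\leq\; 4\cdot 1 + 2\cdot 2 \;=\; 8.
\end{align*}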
\begin{wrapfigure}[18]{r}{6cm}
\centering
\includegraphics[width = 6cm]{pics/degree8.pdf}
\caption{A degree 8 vertex in $D8(P)$. The red edges belong to $E_{CAN}$, while the black edges belong to $E_A$.}\label{constr} \end{wrapfigure}
\section{D8(P) is a Spanner}\label{chap-spanner}
We will prove that $D8(P)$ is a spanner of $DT(P)$ with a spanning ratio of $\tspan = \ospan \approx 2.21$, thus making it a $\ospan\cdot C_{DT}$-spanner of the complete geometric graph, where $C_{DT}$ is the spanning ratio of the Delaunay triangulation. As of this writing, the current best bound of the spanning ratio of the Delaunay triangulation is 1.998\cite{xia}, which makes $D8(P)$ approximately a $4.42$-spanner of the complete graph.
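For concreteness, evaluating the constants just stated with $\theta = \pi/3$:
\begin{align*}
1+\frac{\theta}{\sin\theta} \;=\; 1+\frac{\pi/3}{\sqrt{3}/2} \;=\; 1+\frac{2\pi}{3\sqrt{3}} \;\approx\; 2.2092,
\end{align*}
and multiplying by the bound $C_{DT}\leq 1.998$ of \cite{xia} gives a spanning ratio of at most $2.2092\cdot 1.998 < 4.42$ with respect to the complete geometric graph.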
Suppose that $(p,q)$ is in $DT(P)$ but not in $D8(P)$. We will show the existence of a short path between $p$ and $q$ in $D8(P)$. If the short path from $p$ to $q$ consists of the ideal situation of an edge $(p,r)$ of $E_A$ in the same cone of $p$ as $q$, plus every canonical edge of $p$ from $r$ to $q$, then we have what we call the \emph{ideal path}. We give a spanning ratio of the ideal path with respect to the \emph{canonical triangle} $T_{pq}$, which, informally, is an equilateral triangle with vertex $p$ and height $[pq]$. Notice that in our construction, when adding canonical edges to $E_{CAN}$ on an edge $(p,r)$ of $E_A$, there are times when the first or last edges of $Can_i^{(p,r)}$ are not added to $E_{CAN}$. In these cases we prove the existence of alternate paths from $p$ to $q$ that still have the same spanning ratio. Finally we prove that the bound given in terms of the canonical triangle $T_{pq}$ is at most $(1+\theta/\sin\theta)|pq|$, where $\theta = \pi/3$ is the cone angle. A canonical triangle $T_{pq}$ is the equilateral triangle with $p$ at one corner, contained in the cone of $p$ that contains $q$, and has height $[pq]$.
\subsection{Ideal Paths}\label{ideal-paths}
We begin by defining the ideal path, and proving the spanning ratio of an ideal path with respect to the graph $DT(P)$.
\begin{definition}
Consider an edge $(p,r)$ in $C_i^p$ in $E_A$, and the graph $Can_i^{(p,r)}$. An \emph{ideal path} is a simple path from $p$ to any vertex in $Can_i^{(p,r)}$ using the edges of $(p,r) \cup Can_i^{(p,r)}$. \end{definition}
Consider an edge $(p,r)$ in $C_i^p$ in $E_A$, and the graph $Can_i^{(p,r)}$. We will prove that the length of the ideal path from $p$ to $q$ is not greater than $|pa|+\frac{\theta}{\sin\theta}|aq|$, where $a$ is the corner of the canonical triangle to the side of $(p,q)$ that has $r$, and $\theta = \pi/3$ is the cone angle.
We then use ideal paths to prove there exists a path with bounded spanning ratio between any two vertices $p$ and $q$ in $D8(P)$, where $(p,q)$ is an edge in $DT(P)$. We prove a bound on the length of the path from $p$ to $q$ of $\halfspan$.
We note that the distance $\halfspan$ is with respect to the canonical triangle $T_{pq}$ rather than the Euclidean distance $|pq|$. To finish the proof we show that $|pa|+\frac{\theta}{\sin\theta}|aq| \leq (1+\frac{\theta}{\sin\theta})|pq|$.
To bound the length of ideal paths, we first show that a canonical subgraph forms a path. Then we prove the bound.
We begin with a couple of well-known geometric lemmas. The first is an observation regarding the relative lengths of convex paths, when one resides inside the other.
\begin{lemma}\label{lemma-convex-length}
If a convex body $C$ is contained within another convex body $C'$, then the perimeter of $C'$ is longer than that of $C$ \cite[page~42]{euclidean}. \end{lemma}
The next lemma is a well known result traditionally called ``The Inscribed Angle Theorem''.
\begin{lemma}\label{lemma-inscribed-angle}
Consider 3 points $p,q,s$ on the boundary of a circle $O$ with center $o$, such that $\angle(pqs) = \alpha$. Let $A$ be the arc of $O$ from $p$ to $s$ that does not go through $q$, and let $\overline{A}$ be the arc of $O$ from $p$ to $s$ through $q$. Then the angle $\angle(pos)$ facing $A$ is equal to $2\alpha$. Further, the angle $\angle(pqs)$ facing $A$ is the same for any choice of the point $q$ on $\overline{A}$. \end{lemma}
That allows us to establish this result:
\begin{lemma}\label{lemma-arc-length}
Let $O$ be a circle through the points $p$, $q$, and $r$ in clockwise order, and let $\alpha$ denote the angle $\angle (qpr)$. Then the length of the arc from $q$ to $r$ on the boundary of $O$ that does not pass through $p$ is $$\frac{\alpha}{\sin\alpha}|qr|.$$ \end{lemma}
\begin{proof}
From the center point of $O$, the angle between $q$ and $r$ is $2\alpha$ by Lemma \ref{lemma-inscribed-angle}. Thus the arc length between $q$ and $r$ is $2\alpha R$, where $R$ is the radius of $O$. Also, $|qr| = 2 \sin\alpha R$, which means $R = \frac{|qr|}{2\sin\alpha}$. Thus the arc length between $q$ and $r$ is equal to:
\begin{align*}
2\alpha R &= \frac{2\alpha}{2\sin\alpha}|qr|\\
&= \frac{\alpha}{\sin\alpha}|qr|
\end{align*}
which completes the proof. See Figure \ref{arc-length}. \end{proof}
\begin{figure}
\caption{Relating arc length to angle.}
\label{arc-length}
\end{figure}
We require that a canonical subgraph is a path, which is proven here.
\begin{lemma}\label{canpath}
Let $(p,r)$ be an edge in $E_A$ in the cone $C_i^p$. Then $Can_i^{(p,r)}$ forms a path. \end{lemma}
\begin{proof}
We prove by contradiction. Note that $Can_i^{(p,r)}$ is a collection of paths. Assume that there are at least two paths in this collection. Without loss of generality, let $i=0$. Let $(a,b)$ and $(y,z)$ be the first and last edge respectively in $Can_0^{(p,r)}$. Thus of all the vertices in $N_0^p \backslash \{p\}$ between $a$ and $z$ there exists at least one consecutive subset $T$ where for each $t_j \in T, 0\leq j < |T|, [pt_j]<[pr]$. We consider the vertex $t_k\in T, [pt_k]\leq [pt_j]$, for all $t_j \in T, 0\leq j < |T|$. Since $[pt_k]<[pr]$, $AddIncident(L)$ examined $(p,t_k)$ before $(p,r)$. Thus when $(p,t_k)$ was examined, $C_i^p$ contained no edges of $E_A$ with endpoint $p$. Since $(p,t_k)$ was not added to $E_A$, there must have been an edge of $E_A$ with endpoint $t_k$ in $C_3^{t_k}$. However, we know $[pt_{k-1}]\geq[pt_k]$ and $[pt_{k+1}]\geq [pt_k]$ (whether or not $t_{k-1}$ and $t_{k+1}$ are in $T$). Thus neither $(t_k,t_{k-1})$ nor $(t_k,t_{k+1})$ can be in $C_3^{t_k}$. Since $\triangle(pt_kt_{k-1})$ and $\triangle(pt_kt_{k+1})$ are triangles in $DT(P)$, the only edge with endpoint $t_k$ in $C_3^{t_k}$ is $(p,t_k)$. This means that $(p,t_k)$ would have been added to $E_A$ instead of $(p,r)$, which is a contradiction.
\qed \end{proof}
\begin{lemma} \label{no-points}
Consider the restricted neighbourhood $N_p^{(r,q)}$ in $DT(P)$ in the cone $C_i^p$. Let $O_{p,r,q}$ be the circle through the points $p$, $q$, and $r$. Then there are no points of $P$ in $O_{p,r,q}$ to the side of $(p,r)$ that does not contain $q$. Likewise there are no points of $P$ in $O_{p,r,q}$ to the side of $(p,q)$ that does not contain $r$. \end{lemma}
\begin{proof}
Since the cases are symmetric, we prove that there are no points of $P$ in the region $R$ of $O_{p,r,q}$ to the side of $(p,r)$ that does not contain $q$. We prove by contradiction. Thus assume there is a point $t$ in $R$. Then the circle $O_{p,t,r}$ contains $q$ and the circle $O_{p,r,q}$ contains $t$, thus there is no circle through $p$ and $r$ that is empty of points of $P$. Thus $(p,r)$ cannot be a Delaunay edge, which is a contradiction to our definition of restricted neighbourhood. See Fig. \ref{new-keil2}.
\qed \end{proof}
\begin{lemma}\label{intermediate}
Consider the restricted neighbourhood $N_p^{(r,q)}$ in cone $C_i^p$. Let $rq$ be the directed line from $r$ to $q$, and assume there are no neighbours of $p$ in $N_p^{(r,q)}$ right of $rq$. If $(r,q)$ is not an edge in $N_p^{(r,q)}$, then there is a vertex $a\in N_p^{(r,q)}$ such that the circle $O_{r,a,q}$ is empty of vertices of $P$ left of $rq$. \end{lemma}
\begin{figure}
\caption{Locations of $a$.}
\label{new-keil2}
\label{intermediate1}
\label{intermediate2}
\end{figure}
\begin{proof}
We prove by contradiction, thus assume that we have found a vertex $a$ left of $rq$ such that $O_{r,a,q}$ is empty of vertices of $P$ left of $rq$, and $a$ is not in $N_p^{(r,q)}$. Note vertex $a$ must exist, otherwise $(r,q)$ is on the convex hull and thus in $N_p^{(r,q)}$. Since the region of $N_p^{(r,q)}$ is empty of vertices of $P$, $a$ must be outside of $N_p^{(r,q)}$.
We look at two cases:
\begin{enumerate}
\item $a$ is outside of $O_{p,r,q}$: Since $(r,q)$ is not in $DT(P)$, there is at least one vertex $u$ in $N_p^{(r,q)}\backslash\{p,r,q\}$. By Lemma \ref{lemma-inside-edge-in-disk} and our initial assumption that $N_p^{(r,q)}$ contains no neighbours of $p$ to the right of $rq$, $u$ must be in $O_{p,r,q}$ to the left of $rq$. Since $a$ is outside of $O_{p,r,q}$, the arc of $O_{r,a,q}$ to the left of $rq$ contains the arc of $O_{p,r,q}$ to the left of $rq$. Thus $u$ is in $O_{r,a,q}$ to the left of $rq$, which is a contradiction to our selection of vertex $a$. See Fig. \ref{intermediate1}.
\item $a$ is inside $O_{p,r,q}$: Since $\angle(rpq) <\pi/3$ (since it is in a cone), and $a$ is inside $O_{p,r,q}$, $a$ must be positioned radially between two consecutive edges with endpoint $p$ in $N_p^{(r,q)}$. Call these edges $(p,u)$, and $(p,v)$. Note that $\triangle(puv)$ is a triangle in $DT(P)$, and thus the circle $O_{p,u,v}$ does not contain $a$ by the empty circle property of the Delaunay triangulation. This implies that, since $p$,$u$,$a$, and $v$ form a convex quadrilateral with $p$ and $a$ across the diagonal, any circle through $p$ and $a$ must contain at least one of $u$ or $v$.
Since $a$ is inside $O_{p,r,q}$, $O_{r,a,q}$ contains $p$. Thus we can draw the circle $O_1$ through $a$ and $p$ tangent to $O_{r,a,q}$. The portion of $O_1$ to the left of $rq$ is contained in $O_{r,a,q}$, and thus does not contain any points of $P$. But any circle through $a$ and $p$ must contain at least one of $u$ and $v$, and $u$ and $v$ are to the left of $rq$, which is a contradiction. See Fig. \ref{intermediate2}
\end{enumerate}
Thus, if $(r,q)$ is not an edge in $N_p^{(r,q)}$, there is a neighbour $a$ of $p$ in $N_p^{(r,q)}$ such that $O_{r,a,q}$ is empty of vertices of $P$ left of $rq$. See Fig. \ref{intermediate2}.
\qed \end{proof}
We now turn to a lemma from the paper of Bose and Keil\cite{bosekeil} that tells us the length of a path between two points in the Delaunay triangulation of a set of vertices. We provide a slightly modified and truncated version that suits our needs. The lemma of Bose and Keil does not provide an explicit construction. We apply the lemma to a restricted neighbourhood, and are able to provide a construction of the path along with an upper bound on its length.
\begin{lemma}\label{rq-path}
Consider the restricted neighbourhood $N_p^{(r,q)}$ in $DT(P)$ in the cone $C_i^p$. Let $\alpha = \angle(rpq)< \pi/3$. If no point of $P$ lies in the triangle $\triangle(prq)$ then there is a path from $r$ to $q$ in $DT(P)$, using canonical edges of $p$, whose length satisfies:
$$\delta(r,q) \leq |rq|\frac{\alpha}{\sin \alpha}$$ \end{lemma}
\begin{proof}
Let $o$ be the center of $O_{p,r,q}$, and let $\beta = \angle(roq) = 2\alpha$.
Lemma \ref{no-points} and the assumption that no vertices of $P$ lie in the triangle $\triangle(prq)$ imply that there are no vertices of $P$ in $O_{p,r,q}$ to the right of directed line segment $rq$.
We proceed by induction on the number of vertices in $N_p^{(r,q)}$. If there are only 3 vertices in $N_p^{(r,q)}$, then $(r,q)$ is an edge in $DT(P)$, and the path from $r$ to $q$ has length $|rq|<|rq|\frac{\alpha}{\sin \alpha}$ and we are done.
Now assume that the inductive hypothesis holds for all restricted neighbourhoods with fewer vertices than $N_p^{(r,q)}$. Assume $N_p^{(r,q)}$ has more than 3 vertices, otherwise we are done by the same argument as above.
Lemma \ref{intermediate} tells us that there is a vertex $a$ in $N_p^{(r,q)}$ where $O_{r,a,q}$ is empty of vertices of $P$ left of $rq$.
Let $O_1$ be the circle through $r$ and $a$ with center $o_1$ on the line segment $(o,r)$. Let $O_2$ be the circle through $a$ and $q$ whose center $o_2$ lies on the line segment $(o,q)$. Let $\alpha_1 = \angle(ro_1a)$ and let $\alpha_2 = \angle(ao_2q)$. $N_p^{(r,a)}$ and $N_p^{(a,q)}$ have fewer vertices than $N_p^{(r,q)}$, and $O_1$ is empty of vertices of $P$ to the right of directed segment $ra$, and $O_2$ is empty of vertices of $P$ to the right of directed line segment $aq$. Thus by the inductive hypothesis:
\begin{align*}
\delta(r,q)&=\delta(r,a) +\delta(a,q)\\
&\leq |ra|\frac{\alpha_1}{\sin \alpha_1} +|aq|\frac{\alpha_2}{\sin \alpha_2}
\end{align*}
Let $r'\neq r$ be the intersection of $O_1$ and $rq$, and let $q'\neq q$ be the intersection of $O_2$ and $rq$. Since $\beta < \pi$, $O_1$ and $O_2$ overlap. Let $O_3$ be the circle through $q'$ and $r'$ with center $o_3$ on the intersection of the line segment between $o_1$ and $r'$ and the line segment between $o_2$ and $q'$. See Fig. \ref{keils-lemma}.
Triangles $\triangle(roq)$, $\triangle(ro_1r')$, $\triangle(q'o_2q)$, and $\triangle(q'o_3r')$ are all similar isosceles triangles. Thus by Lemmas \ref{lemma-inscribed-angle} and \ref{lemma-arc-length} the length of the arc of $O_1$ left of $rq$ is $|rr'|\frac{\alpha}{\sin \alpha}$, the length of the arc of $O_2$ left of $rq$ is $|q'q|\frac{\alpha}{\sin \alpha}$, and the length of the arc of $O_3$ left of $rq$ is $|q'r'|\frac{\alpha}{\sin \alpha}$.
Note that $O_3$ is completely contained in the intersection of $O_1$ and $O_2$. Let $A_1$ be the arc of $O_1$ left of $rq$ from $a$ to $r'$, and let $A_2$ be the arc of $O_2$ left of $rq$ from $a$ to $q'$. Note that $A_1\cup A_2$ forms a convex shape from $q'$ to $r'$ (through $a$) that encloses the arc of $O_3$ left of $rq$. Thus $|A_1\cup A_2|\geq|q'r'|\frac{\alpha}{\sin \alpha}$ by convexity (Lemma \ref{lemma-convex-length}).
We observe that:
\begin{align*}
\delta(r,q)&=\delta(r,a) +\delta(a,q)\\
&\leq |ra|\frac{\alpha_1}{\sin \alpha_1} +|aq|\frac{\alpha_2}{\sin \alpha_2}\\
&= |rr'|\frac{\alpha}{\sin \alpha} + |q'q|\frac{\alpha}{\sin \alpha} - |A_1\cup A_2|\\
&\leq |rr'|\frac{\alpha}{\sin \alpha} + |q'q|\frac{\alpha}{\sin \alpha} - |q'r'|\frac{\alpha}{\sin \alpha}\\
&=|rq|\frac{\alpha}{\sin \alpha}
\end{align*}
as required.
\qed \end{proof}
\begin{figure}
\caption{Lemma \ref{rq-path}.}
\label{keils-lemma}
\end{figure}
\begin{lemma}\label{path}
In the setting of Lemma \ref{rq-path}, let $r_q\neq p$ be the point where the line through $p$ and $r$ intersects the canonical triangle $T_{pq}$. Then $\delta(r,q)\leq |rr_q|+|r_qq|\frac{\theta}{\sin\theta}$. \end{lemma}
\begin{proof}
By Lemma \ref{rq-path}, $\delta(r,q)\leq|rq|\frac{\alpha}{\sin\alpha}$, where $\alpha=\angle(rpq)$. By convexity (Lemma \ref{lemma-convex-length}), $|rq|\frac{\alpha}{\sin\alpha}\leq|rr_q|+|r_qq|\frac{\alpha}{\sin\alpha}$, and since $\frac{\alpha}{\sin\alpha}$ is increasing in $\alpha$ with $\alpha\leq\theta$, this is at most $|rr_q|+|r_qq|\frac{\theta}{\sin\theta}$.
\qed \end{proof} Now we prove the following:
\begin{lemma} \label{inward-path3}
Consider the restricted neighbourhood $N_p^{(r,q)}$ and without loss of generality let $N_p^{(r,q)}$ be in $C_0^p$. Let $\alpha = \angle(rpq)$. Let $r_q\neq p$ be the point where the line through $p$ and $r$ intersects the canonical triangle $T_{pq}$. Let $q_r\neq p$ be the point where the edge $(p,q)$ intersects $T_{pr}$. If $[pr]$ is the shortest edge of all edges in $N_p^{(r,q)}$ with endpoint $p$, then the distance from $r$ to $q$ using the canonical edges of $p$ in $N_p^{(r,q)}$ is at most $\max\{|rr_q|,|q_rq|\}+|r_qq|\frac{\theta}{\sin \theta}$. \end{lemma}
\begin{proof}
Let $\delta(r,q)$ be the length of the path between $r$ and $q$ in $N_p^{(r,q)}$. We will prove by induction on the number of canonical edges of $p$ in $N_p^{(r,q)}$.
If there is only one canonical edge of $p$ in $N_p^{(r,q)}$, then $(r,q)$ is that edge and $\delta(r,q) = |rq|\leq \max\{|rr_q|,|q_rq|\}+|r_qq|\frac{\theta}{\sin \theta}$, and we are done.
Otherwise assume there is more than one canonical edge of $p$ in $N_p^{(r,q)}$. Consider the edge $(p,a)\in N_p^{(r,q)}$, such that $[pa]\leq [pt]$, for all $(p,t)\in N_p^{(r,q)}\backslash \{r,q\}$. We consider two cases:
\begin{enumerate}
\item If $[pa]>[pq]$, then $(p,r)$ and $(p,q)$ are the shortest edges in $N_p^{(r,q)}$, which implies that there are no points of $P$ in $\triangle(prq)$. Thus by Lemma \ref{rq-path}, the length of the path from $r$ to $q$ is at most $|rq|\frac{\alpha}{\sin \alpha}$. We have
\begin{align*}
\delta(r,q)&\leq |rq|\frac{\alpha}{\sin \alpha}\\
&\leq |rr_q|+|r_qq|\frac{\alpha}{\sin \alpha}
\end{align*}
by convexity (Lemma \ref{lemma-convex-length}). $\frac{\alpha}{\sin \alpha}$ is increasing in $\alpha$, thus $\frac{\alpha}{\sin \alpha}\leq \frac{\theta}{\sin \theta}$. Thus
\begin{align*}
\delta(r,q)&\leq|rr_q|+|r_qq|\frac{\alpha}{\sin \alpha}\\
&\leq \max \{|rr_q|, |q_rq|\}+|r_qq| \frac{\theta}{\sin \theta}
\end{align*}
which satisfies the inductive hypothesis. See Fig. \ref{fig-inward2}
\item $[pa]<[pq]$. Since $[pr]\leq[pa]$ we can apply the inductive hypothesis on $N_p^{(r,a)}$. Let $r_a$ be the point where the line through $p$ and $r$ intersects the horizontal line through $a$, and let $a_r$ be the point where the line through $p$ and $a$ intersects the horizontal line through $r$. See Fig. \ref{fig-inward4}. Then by the inductive hypothesis:
\begin{align*}
\delta(r,a) \leq \max\{|rr_a|,|a_ra|\}+ |r_aa|\frac{\theta}{\sin \theta}
\end{align*}
Since $[pa]\leq[pq]$ we can apply the inductive hypothesis on $N_p^{(a,q)}$. Let $a_q \neq p$ be the point where the line through $a$ and $p$ exits $T_{pq}$, and let $q_a\neq p$ be the point where $(p,q)$ intersects $T_{pa}$, and let $\alpha_2 = \angle(apq)$. See Fig. \ref{fig-inward5}. Then by the inductive hypothesis:
\begin{align*}
\delta(a,q) \leq \max\{|aa_q|,|q_aq|\}+ |a_qq|\frac{\theta}{\sin \theta}
\end{align*}
Note that $|pa_q| \leq \max\{|pr_q|,|pq|\}$. Thus:
\begin{align*}
\delta(r,q) &\leq \max\{|rr_a|,|a_ra|\}+\max\{|aa_q|,|q_aq|\}+ |r_aa|\frac{\theta}{\sin \theta}+|a_qq|\frac{\theta}{\sin \theta}\\
&\leq \max\{|rr_q|,|q_rq|\}+|r_aa|\frac{\theta}{\sin \theta}+|a_qq|\frac{\theta}{\sin \theta}\\
&\leq\max\{|rr_q|,|q_rq|\}+|r_qa_q|\frac{\theta}{\sin \theta}+|a_qq|\frac{\theta}{\sin \theta}\\
&\leq\max\{|rr_q|,|q_rq|\}+|r_qq|\frac{\theta}{\sin \theta}
\end{align*}
as required.
\end{enumerate}
See Fig. \ref{inductive-path}.
\qed \end{proof}
\begin{figure}
\caption{Inductive path.}
\label{fig-inward2}
\label{fig-inward4}
\label{fig-inward5}
\label{fig-inward6}
\label{fig-inward7}
\label{fig-inward9}
\label{inductive-path}
\end{figure}
Using Lemma \ref{inward-path3} we can prove the main lemma of this section: \begin{lemma}\label{lemma-path}
Consider the edge $(p,r)$ in $E_A$, located in $C_i^p$, and the associated canonical subgraph $Can_i^{(p,r)}$. Without loss of generality, assume that $i=0$. The length of the ideal path from $p$ to any vertex $q$ in $Can_0^{(p,r)}$ satisfies $\delta(p,q) \leq |pa|+\frac{\theta}{\sin\theta}|aq|$, where $a$ is the corner of $T_{pq}$ such that $r \in \triangle(pqa)$, and $\theta = \pi/3$ is the angle of the cones.
\end{lemma}
\begin{proof}
(Refer to Fig. \ref{fig-inward9}.)
By Lemma \ref{inward-path3} the path from $r$ to $q$ is no greater than $\max\{|rr_q|,|q_rq|\}+|r_qq|\frac{\theta}{\sin\theta}$.
Since $|pr|+ \max\{|rr_q|,|q_rq|\} \leq |pa| $ and $|aq|\geq|r_qq|$ we have
\begin{align*}
\delta(p,q) &\leq |pr|+\max\{|rr_q|,|q_rq|\}+|r_qq|\frac{\theta}{\sin\theta}\\
&\leq |pa|+|aq|\frac{\theta}{\sin\theta}.
\end{align*} \qed \end{proof}
\subsection{Paths in D8(P)}
A path in $D8(P)$ that approximates an edge $(p,q)$ of $DT(P)$ can take several forms. It may consist of the edge $(p,q)$ itself, an ideal path from $p$ to $q$, the concatenation of two ideal paths from $p$ to $q$, or some combination of the above. We prove that $\delta(p,q)$, the length of the path in $D8(P)$ that approximates edge $(p,q) \in DT(P)$, is not longer than $\max\{|pa|+\func|aq|, |pb|+\func|bq|\}$. Points $a$ and $b$ are the top left and right corners of canonical triangle $T_{pq}$ respectively.
At this point our spanning ratio is with respect to $T_{pq}$. We then prove that $D8(P)$ is a spanner with respect to the Euclidean distance $|pq|$.
We consider an edge $(p,q) \in DT(P)$. If $(p,q) \in D8(P)$ then the length of the path from $p$ to $q$ in $D8(P)$ is $|pq| \leq \fullspan{|pq|}$, as required.
Thus we assume $(p,q) \notin D8(P)$. Without loss of generality we assume $q$ is in $C_0^p$. Since $(p,q)\notin D8(P)$, there is an edge $(p,r)$ of $E_A$ in $C_0^p$ or an edge $(q,u)$ of $E_A$ in $C_3^q$ (or both), where $[pr]\leq[pq]$ and $[qu]\leq[pq]$; otherwise $(p,q)$ would have been added to $E_A$ in $AddIncident(L)$. Without loss of generality we shall assume there is an edge $(p,r) \in E_A$ with $[pr]\leq[pq]$, and that $(p,q)$ is clockwise from $(p,r)$ around $p$.
Let $s$ be the vertex such that $s$ is a neighbour of $q$ in $N_p^{(r,q)}$ and $s\neq p$ (but possibly $s=r$). Let $a$ be the upper left corner of $T_{pq}$, and $b$ be the upper right corner. Let $\alpha = \angle(rpq)$ and $\theta = \pi/3$ be the angle of the cones.
\begin{lemma}\label{lemma-can-path}
Recall that $(p,r)\in E_A$, where $r \in C_i^p$. Then there is an ideal path from $p$ to any vertex $q$ in $Can_i^{(p,r)}$, where $q$ is not an end vertex of $Can_i^{(p,r)}$. \end{lemma}
\begin{proof}
In the algorithm $AddCanonical(p,r)$, we add every canonical edge of $p$ in $Can_i^{(p,r)}$ that is not the first or last edge. By Lemma \ref{canpath}, the edges of $Can_i^{(p,r)}$ form a path. Thus there is the ideal path from $p$ to any vertex $q$ in $Can_i^{(p,r)}$ that is not the first or last vertex. \qed \end{proof}
The next lemmas prove that, for a vertex $z$ that is the first or the last vertex of $Can_i^{(p,r)}$, the edge in $Can_i^{(p,r)}$ with endpoint $z$ cannot be in $C_i^z$.
\begin{lemma}\label{lemma-circle-exit}
Let $r$ and $q$ be two consecutive neighbours of $p$, in an arbitrary cone $C_i^p$. Without loss of generality, let $(p,q)$ be clockwise from $(p,r)$ in the cone $C_i^p$. If $q$ is in $C_i^r$, then all edges with endpoint $p$ in $C_i^p$ that appear after $(p,q)$ in clockwise order are longer than $[pq]$. \end{lemma}
\begin{proof}
By Lemma \ref{lemma-wedge-opposite-angles}, any edge $(p,t)$ clockwise from $(p,q)$ in $C_i^p$ is such that the angle $\angle(rqt)>2\pi/3$. Since $(r,q)$ is in $C_i^r$, it is at an angle of at least $\pi/3$ from the positive $x$-axis. Since $\angle(rqt)>2\pi/3$, the edge $(r,t)$ must be at an angle $>0$ with respect to the positive $x$-axis. Thus $[pt]>[pq]$, for all $(p,t)$ clockwise from $(p,q)$ in $C_i^p$. See Figure \ref{no-z}. \end{proof}
\begin{lemma} \label{no-zero}
Let $z$ be the first or last vertex of $Can_i^{(p,r)}$, and assume that $(p,z)$ is not in $E_A$. Let $(y,z)$ be the last edge in $Can_i^{(p,r)}$. Then $(y,z)$ is not in $C_i^z$. \end{lemma}
\begin{figure}
\caption{The edge in $Can_i^p$ with endpoint $z$ cannot be in $C_i^z$.}
\label{no-z}
\label{no-z2}
\label{fig-handle-last}
\end{figure}
\begin{proof}
We assume that $(y,z) \in C_i^z$, and prove by contradiction. By Lemma \ref{lemma-circle-exit}, if $(y,z)$ is in $C_i^z$, then $(p,y)$ is the shortest of all edges in $C_i^p$ with endpoint $p$ counter-clockwise from $(p,y)$.
Let $(p,r)$ be an edge in $E_A$, where $r \in Can_i^{(p,r)}$. Then $(p,r)$ is at least as short as all edges in $DT(P)$ from $p$ to a vertex in $Can_i^{(p,r)}$. But that is a contradiction to $(p,y)$ (and by extension $(p,z)$) being the shortest. See Fig. \ref{no-z2}
\qed \end{proof}
Let $(p,r)$ be an edge in $E_A$ in the graph $D8(P)$. Without loss of generality, assume that $r$ is in $C_0^p$. By Lemma \ref{lemma-can-path}, there is an ideal path from $p$ to any vertex in $Can_0^{(p,r)}$ that is not the first or last vertex. We now turn our attention to the first or last vertex in $Can_0^{(p,r)}$. Because the cases are symmetric, we focus on the last vertex, which we designate $z$. If $z=r$, the path from $p$ to $z$ is trivial, thus we assume $z\neq r$. Let $y$ be the neighbour of $z$ in $Can_0^{(p,r)}$. By Lemma \ref{no-zero}, $(y,z)$ cannot be in $C_0^z$. Thus $(y,z)$ can be in $C_5^z$, $C_4^z$, or $C_3^z$.
\begin{enumerate}[labelindent=*,
style=multiline,
leftmargin=*,label=Case \arabic*:, ref =Case \arabic*]
\item \label{case3}Edge $(y,z)$ is in $C_5^z$. Then $(y,z)$ was added to $E_{CAN}$ in $AddCanonical(p,r)$, \ref{n5}, and there is an ideal path from $p$ to $z$.
\item \label{case4}Edge $(y,z)$ is in $C_4^z$. There are three possibilities.
\begin{enumerate}
\item \label{case6} If $(y,z)$ is an edge of $E_A$, then there is an ideal path from $p$ to $z$.
\item \label{case7}If there is no edge in $E_A$ with endpoint $z$ in $C_4^z$, then $(y,z)$ was added to $E_{CAN}$ in $AddCanonical(p,r)$, \ref{aux-edge}, and there is an ideal path from $p$ to $z$.
\item \label{case8} If there is an edge of $E_A$ in $C_4^z$ with endpoint $z$ that is not $(y,z)$, then we have added the canonical edge of $z$ in $C_4^z$ with endpoint $y$ to $E_{CAN}$ in $AddCanonical(p,r)$, \ref{aux-path}. Therefore by Lemma \ref{lemma-can-path} there is an ideal path from $z$ to $y$, and also an ideal path from $p$ to $y$. \end{enumerate}
\item \label{case5}Edge $(y,z)$ is in $C_3^z$. Then $(y,z)$ was not added to $E_{CAN}$.
\end{enumerate}
In \ref{case3}, \ref{case6}, and \ref{case7} there is an ideal path from $p$ to $q$. Thus Lemma \ref{lemma-path} tells us there is a path from $p$ to $q$ not longer than $|pa|+\frac{\theta}{\sin\theta}|aq|$.
In \ref{case8}, we have two ideal paths that meet at $y$. As in the case of a single ideal path, the sum of the lengths of these two paths is not more than $|pa|+\frac{\theta}{\sin\theta}|aq|$. The following lemma proves this claim:
\begin{lemma}\label{lemma-basecase-3}
Consider the edge $(p,r)$ in $E_A$ in the graph $D8(P)$, $r$ in $C_0^p$. Let $(y,z)$ be the last edge in $Can_0^{(p,r)}$, and let $(y,z)$ be in $C_4^z$. Let $(z,u)$ be an edge in $E_A$ in $C_4^z$. Assume there is an ideal path from $p$ to $y$ in $C_0^p$, and an ideal path from $z$ to $y$ in $C_4^z$. Let $a$ be the top left corner of $T_{pz}$. Then $\delta(p,z) \leq |pa| + \frac{\theta}{\sin\theta}|az|$.
\end{lemma}
\begin{proof}
Let $a_1$ be the top left corner of $T_{py}$, and let $b_2$ be the top right corner of $T_{zy}$ (as seen from apex $z$; note that $T_{zy}$ lies in $C_4^z$). Since $(y,z)$ is the \emph{last} edge in $Can_0^{(p,r)}$, we note that the ideal path from $p$ to $y$ is to the side of $(p,y)$ that contains $r$ and does not contain $z$. Similarly, the ideal path from $z$ to $y$ is to the side of $(y,z)$ that contains $u$ and does not contain $p$. See Figure \ref{fig-marked-edge-path}. By Lemmas \ref{lemma-can-path} and \ref{lemma-path}, the length of the path from $p$ to $z$ in $D8(P)$ satisfies:
\begin{align}
\delta_{D8(P)}(p,z) &\leq \delta_{D8(P)}(p,y) + \delta_{D8(P)}(z,y) \nonumber\\
&\leq |pa_1| + \func|a_1y| + |zb_2|+ \func|b_2y| \nonumber\\
&\leq |pa_1| + \func|a_1y| + |b_2y|+ \func|zb_2| \label{eqn} \\
&\leq (|pa_1|+|b_2y|)+\func(|a_1y|+|zb_2|) \label{eqn2}\\
& = |pa| + \func|az| \label{eqn3}
\end{align}
Inequality \ref{eqn} holds because $\func > 1$, and $|b_2y|\leq |zb_2|$, since $|zb_2|$ is the longest possible line segment in $T_{zy}$. \end{proof}
\begin{wrapfigure}[12]{r}{5cm}
\centering
\includegraphics[page=1, width = 4cm]{pics/fig-marked-edge-path.pdf}\caption{Concatenating ideal paths.} \label{fig-marked-edge-path} \end{wrapfigure}
In \ref{case5} there is no edge from $y$ to $z$. We prove the length of the path from $p$ to $z$ in \ref{case5} by induction, as part of the main lemma of this section:
\begin{lemma} \label{lemma-inductive-path}
Consider the edge $(p,r)$ in $E_A$ in the graph $D8(P)$. Without loss of generality, let $r$ be in $C_0^p$. Let $a$ and $b$ be the top left corner and top right corner respectively of $T_{pq}$. For any edge $(p,q) \in DT(P)$, there exists a path from $p$ to $q$ in $D8(P)$ that is not longer than $\max\{|pa|+\func|aq|,|pb|+\func|bq|\}$. \end{lemma}
\begin{proof}
Let $\delta(p,q)$ be the shortest path from $p$ to $q$ in $D8(P)$. We do a proof by induction on the size of the canonical triangle $T_{pq}$.
The base case is when $T_{pq}$ is the smallest canonical triangle. One instance of this occurs when there is an ideal path from $p$ to $q$, as in \ref{case3}, \ref{case6}, and \ref{case7}. Thus by Lemma \ref{lemma-path}: $$\delta(p,q) \leq \halfspan.$$
The other instance is \ref{case8}, where two ideal paths meet at a vertex. By Lemma \ref{lemma-basecase-3} we have: $$\delta(p,q) \leq \halfspan.$$
Since $|aq|\leq \max\{|aq|,|bq|\}$, the proof holds in all base cases.
In \ref{case5}, $q$ is the first or last vertex in $Can_0^{(p,r)}$. Since the cases are symmetric, consider when $q$ is the last vertex, and assume it has a neighbour $s$ in $Can_0^{(p,r)}$, such that the canonical edge $(s,q)$ in $N_0^p$ is in $C_0^s$. Thus $(s,q)$ was not added to $E_{CAN}$ on a call to $AddCanonical(p,r)$.
We break down $T_{pq}$ into canonical triangles $T_{ps}$ and $T_{sq}$. Call the upper left corner of $T_{pq}$ $a$, and the upper right corner $b$. Also the upper left corner of $T_{ps}$ is $a_1$, the upper left corner of $T_{sq}$ is $a_2$, the upper right corner of $T_{ps}$ is $b_1$, and the upper right corner of $T_{sq}$ is $b_2$. Since $(s,q)$ is in $C_0^s$, both $T_{ps}$ and $T_{sq}$ must be smaller than $T_{pq}$.
\begin{figure}
\caption{Dark green edges are the actual paths; light green demonstrates that the path is not longer than the red path.}
\label{fig-ind-path}
\label{fig-ind-path2}
\label{fig-ind-path3}
\label{fig-ind-path4}
\end{figure}
\begin{comment}
\begin{figure}
\caption{$|a_1s|\geq|sb_1|$. }
\label{fig-ind-path4}
\end{figure}
\end{comment}
We note the following facts:
\begin{enumerate}[label=Fact \arabic*:, ref =Fact \arabic*]
\item \label{fact-duo}$|pa| = |pa_1|+|sa_2|$ and likewise $|pb| = |pb_1|+|sb_2|$
\item $|ab| = |a_1b_1|+|a_2b_2|$
\item \label{fact_uno} $|aa_2| = |a_1s|$ and $|b_2b|=|sb_1|$
\item $q$ is on the line $(a_2,b_2)$
\end{enumerate}
Without loss of generality, assume the path from $p$ to $s$ is to the side of the line through $p$ and $s$ with $a_1$ (note that we are not assuming that $|a_1s|>|b_1s|$).
We extend the line $(p,s)$ until it intersects $(a_2,b_2)$ at a point we label $s'$. Since $q$ is the last vertex in $Can_0^{(p,r)}$, $q$ must be to the side of $s'$ closer to $b_2$.
Since $|pa|=|pb|$ and $|sa_2|=|sb_2|$, it is sufficient to prove:
\begin{align*}
|pa_1|+\func|a_1s| + |sa_2| + \func\max\{|a_2q|,|qb_2|\} &\leq |pa| + \func\max\{|aq|,|bq|\}\\
\end{align*}
By \ref{fact-duo} this is equivalent to:
\begin{align*}
\func|a_1s| + \func\max\{|a_2q|,|qb_2|\}&\leq\func\max\{|aq|,|bq|\}\\
|a_1s| + \max\{|a_2q|,|qb_2|\} &\leq \max\{|aq|,|qb|\}
\end{align*}
We consider two scenarios:
\begin{enumerate}
\item $|a_1s|\leq |sb_1|$: There are two sub-cases:
\begin{enumerate}
\item $|qb|\geq|aq|$: If $|a_2q|\geq|qb_2|$, then:
\begin{align*}
|a_1s|+|a_2q| &\leq |aq| \\
&\leq |qb|
\end{align*}
as required. Otherwise, $|qb_2|>|a_2q|$, thus:
\begin{align*}
|a_1s|+|qb_2|&\leq |sb_1|+|qb_2|\\
&=|qb|
\end{align*}
as required.
\item $|qb|<|aq|$: Together with $|a_1s|\leq |sb_1|$, this implies that $|a_2q|>|qb_2|$. See Figure \ref{fig-ind-path4}. Then $|a_1s|+|a_2q| = |aq|$, as required.
\end{enumerate}
\item $|a_1s| > |sb_1|$: Since $q$ is radially to the right of $(p,s)$, $|aq|>|qb|$. It is also true that $|a_2q|>|qb_2|$. Thus, using \ref{fact_uno}:
\begin{align*}
|a_1s| + |a_2q| & = |aa_2| + |a_2q|\\
&= |aq|
\end{align*}
as required. See Figure \ref{fig-ind-path3}.
\end{enumerate} \end{proof}
For an edge $(p,q)$ in $DT(P)$, we have a bound on the length of the path in $D8(P)$. However, this bound is in terms of the size of the canonical triangle $T_{pq}$, which is not the same as the Euclidean distance $|pq|$. In the following section we prove that $\max\{|pa|+\func|aq|,|pb|+\func|bq|\} \leq \fullspan{|pq|}$.
\subsection{The Spanning Ratio of D8(P)}\label{app:spanning}
\begin{lemma} \label{upper-bound}
\begin{align*}
\max\{|pa|+\func|aq|,|pb|+\func|bq|\} \leq \fullspan{|pq|}
\end{align*} \end{lemma}
\begin{proof}
Without loss of generality, we will assume that
\begin{align*}
\max\{|pa|+\func|aq|,|pb|+\func|bq|\}
=&|pa|+\func|aq|
\end{align*}
Let
\begin{align*}
\lambda = \left(\frac{\theta}{\sin\theta}-1\right)(|pq|-|aq|)
\end{align*}
We will show that:
\begin{align*}
|pa|+\func|aq| \leq |pa|+\func|aq| + \lambda \leq \fullspan{|pq|}
\end{align*}
Since $|pq|\geq|aq|$ (by the sine law), and $\frac{\theta}{\sin\theta}>1$, we get $\lambda \geq 0$. Thus
\begin{align*}
&|pa|+ \frac{\theta}{\sin\theta}|aq| \\
\leq &|pa|+ \frac{\theta}{\sin\theta}|aq| + \lambda\\
\end{align*}
It remains to be shown that:
\begin{align*}
|pa|+ \frac{\theta}{\sin\theta}|aq| + \lambda &\leq \fullspan{|pq|}\\
|pa|+ \frac{\theta}{\sin\theta}|aq| + \left(\frac{\theta}{\sin\theta}-1\right)(|pq|-|aq|)&\leq \fullspan{|pq|}\\
|pa|-|pq|+|aq| +\frac{\theta}{\sin\theta}(|aq|+|pq|-|aq|)&\leq \fullspan{|pq|}\\
|pa|-|pq|+|aq| +\frac{\theta}{\sin\theta}|pq|&\leq |pq| + \frac{\theta}{\sin\theta}|pq|\\
|pa|-|pq|+|aq| &\leq |pq| \\
|pa|+|aq| &\leq 2|pq|
\end{align*}
Thus we must show that $|pa|+|aq| \leq 2|pq|$ holds true for all values of $\alpha = \angle(apq)$.
Let $a'$ be the point to the side of $(p,q)$ that contains $a$, such that $\triangle(a'pq)$ is an equilateral triangle. Thus
$$|pa'|+|a'q|=2|pq|.$$ See Fig. \ref{fig:three}. We will prove that
$$|pa|+|aq|\leq |pa'|+|a'q|=2|pq|.$$
\begin{figure}
\caption{$|pa|+|aq|\leq |pa'|+|a'q|=2|pq|.$}
\label{fig-one}
\label{fig:two}
\label{fig:three}
\label{fig:four}
\end{figure}
Note that $\angle(paq)=\angle(pa'q)=\theta$. That implies that the circle $O_{pa'q}$ through $p$, $a'$ and $q$ also goes through $a$. See Fig. \ref{fig:three}.
To better analyze the problem, we rotate, translate, and scale $p$, $q$, $a$ and $a'$ such that $q = (-\sin\theta,0)$, $p= (\sin\theta,0)$, and $a'=(0,-1.5)$. Let $a$ be any point on $O_{pqa'}$ below the line through $p$ and $q$. Let $E(p,q,d)$, where $d=2|pq|$, represent an ellipse with focal points $p$ and $q$ such that for each point $b$ on the boundary of $E(p,q,d)$, $|pb|+|bq|=d = 2|pq|$. Note the center of $O_{pqa'}$ is $(0,-0.5)$, and $O_{pqa'}$ has a radius of $1$. See Fig. \ref{fig:four}. The equation for $O_{pqa'}$ is:
$$x^2 +(y+\frac{1}{2})^2 = 1$$.
The equation for $E(p,q,d)$ is:
\begin{align*}
\frac{x^2}{a^2}+\frac{y^2}{b^2} &= 1\\
\frac{x^2}{(2\sin\theta)^2}+\frac{y^2}{\left(\frac{3}{2}\right)^{2}} &= 1\\
\frac{x^2}{3}+\frac{4y^2}{9} &= 1
\end{align*}
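For completeness, a short justification of the semi-axes used in the second line of the display above (a routine computation, not a new claim): the foci of $E(p,q,d)$ are $p=(\sin\theta,0)$ and $q=(-\sin\theta,0)$, and the sum of focal distances is $d=2|pq|=4\sin\theta$, so the semi-major axis is $d/2=2\sin\theta$ and the distance from the centre to each focus is $\sin\theta$. The square of the semi-minor axis is therefore
\begin{align*}
(2\sin\theta)^2-\sin^2\theta \;=\; 3\sin^2\theta \;=\; \frac{9}{4},
\end{align*}
which gives the value $\frac{3}{2}$ appearing above.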
\begin{comment}
\begin{figure}\label{fig:five}
\end{figure} \end{comment}
Thus we find the intersection of $O_{pqa'}$ and $E(p,q,d)$ by solving the following system of equations:
\begin{align*}
x^2 +(y+\frac{1}{2})^2 &= 1\\
\frac{x^2}{3}+\frac{4y^2}{9}&=1\\
\end{align*}
This gives us a single solution at $(0,-1.5)$.
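The elimination behind this claim is routine: substituting $x^2 = 1-\left(y+\frac{1}{2}\right)^2$ from the first equation into the second (after multiplying the second equation by $9$) gives
\begin{align*}
3\left(1-\left(y+\tfrac{1}{2}\right)^2\right)+4y^2 &= 9\\
y^2-3y-\tfrac{27}{4} &= 0,
\end{align*}
whose roots are $y=\tfrac{9}{2}$ and $y=-\tfrac{3}{2}$. The root $y=\tfrac{9}{2}$ forces $x^2<0$, so the only real intersection is $(0,-\tfrac{3}{2})$.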
Note that, when $\angle(aqp) = \pi/2$, $|pa|=2$ and $|aq| = 2\cos\theta = 1$, so $|pa|+|aq|=3$, while $2|pq| = 2(2\sin\theta) = 2\sqrt{3} \approx 3.46$. Thus when $\angle(aqp) = \pi/2$ we have $|pa|+|aq|<2|pq|=|pa'|+|a'q|$, so this point $a$ is inside $E(p,q,d)$; since the two curves meet only at $(0,-1.5)$, all of $O_{pqa'}$ is inside $E(p,q,d)$, with the exception of $(0,-1.5)$. Thus for all points $a$ on $O_{pqa'}$, $$|pa|+|aq|\leq |pa'|+|a'q| = 2|pq|.$$ This implies that:
\begin{align*}
\delta(p,q)\leq|pa|+\frac{\theta}{\sin\theta}|aq|\leq (1+\frac{\theta}{\sin\theta})|pq|
\end{align*}
as required.
\qed \end{proof}
Using this inequality and Lemma \ref{lemma-inductive-path}, the main theorem now follows:
\begin{theorem}\label{theorem:mainspanner8}
For any edge $(p,q) \in DT(P)$, there is a path in $D8(P)$ from $p$ to $q$ with length at most $\fullspan{|pq|}$, where $\theta = \pi/3$ is the cone width. Thus $D8(P)$ is a $\tspan D_T$-spanner of the complete graph, where $D_T$ is the spanning ratio of the Delaunay triangulation (the best bound currently known being 1.998~\cite{xia}). \end{theorem}
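Numerically, with $\theta = \pi/3$ the factor of Lemma \ref{upper-bound} is $1+\frac{\theta}{\sin\theta} = 1+\frac{\pi/3}{\sqrt{3}/2} \approx 2.21$, so (assuming $\tspan$ denotes this factor) the resulting spanning ratio of $D8(P)$ is at most approximately $2.21 \cdot 1.998 \approx 4.42$.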
\end{document} | arXiv |
Write chromyl chloride test with equation
Chemistry, Previous Year Question Papers
Chloride radical is detected by the chromyl chloride test. In this test, chromyl chloride gas (orange red color) is produced. Equation Involved – 4NaCl + K2Cr2O7 + 6H2SO4 → 4NaHSO4 + 2KHSO4 + 3H2O +...
What do you understand by lanthanide contraction
The lanthanide contraction is the decrease in the atomic or ionic radii with increase in the atomic number of lanthanides
What are lanthanide elements?
Lanthanide elements resembles a lot in properties with lanthanum. Lanthanide is group of 14 elements from atomic number 58 to 71. In these elements on increasing atomic number electron enters into...
Explain oxidization properties of potassium permanganate in acidic medium.
2KMnO4 + 8H2SO4 + 10KI → 6K2SO4 + 8H2O + 5I2 2KMnO4 + 5SO2 + 2H2O → 2MnSO4 + 2H2SO4 + K2SO4 2KmO4 + 16HCl → 2KCl + 2MnCl2 + 8H2O + 5Cl2 5COOH – COOH + [5O] → 10CO2 + 5H2O
Show that the lines
CBSE, Class 12, Exercise 27D, Maths, RS Aggarwal, Straight Line in Space
Compute the shortest distance between the lines
Class 12, Rs agarwal class 12 chapter 27 Exercisw 27d
Find the shortest distance between the given lines.
CBSE, Class 12, Exercise 19.20, Exercise 27D, Maths, RS Aggarwal, Straight Line in Space
Give two differences between double salt and complex salt.
Answer: Double salt Complex salt A double salt is a combination of two salt compounds. A complex salt is a molecular structure that is composed of one or more complex ions. Double salts can give...
Give two differences between DNA and RNA.
Answer: DNA RNA DNA – Deoxyribo Nucleic Acid RNA – Ribo Nucleic acid DNA consists of adenine (A), cytosine (C), guanine (G), and thymine (T) RNA consists of adenine (A), cytosine (C), guanine (G),...
Exercise 27D, Maths, RS Aggarwal, Straight Line in Space
Answer Given equations: $\overline{\mathrm{r}}=(\hat{\imath}+\hat{\jmath})+\lambda(2 \hat{\imath}-\hat{\jmath}+\hat{\mathrm{k}})$...
Which of the following compounds has tetrahedral geometry? (a) (b) (c) (d)
SOL: Correct option is D. $\left[\mathrm{NiCl}_{4}\right]^{2-}$
Oxidation number of gold metal is (a)+1 (b) 0 (c) (d) all of these
Sol: Correct option is B. 0
If A(1, 2, 3), B(4, 5, 7), C(-4, 3, -6) and D(2, 9, 2) are four given points then find the angle between the lines AB and CD.
CBSE, Class 12, Exercise 27C, Exercise 27D, Maths, RS Aggarwal, Straight Line in Space
Answer Given - A = (1,2,3) B = (4,5,7) C = (-4,3,-6) D = (2,9,2) Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios of the line PQ is given by ((a'-a),(b'-b),(c'-c))...
Show that the lines x = – y = 2z and x + 2 = 2y – 1 = – z + 1 are perpendicular to each other.
CBSE, Class 12, Exercise 27C, Maths, RS Aggarwal, Straight Line in Space
To prove – The lines are perpendicular to each other Direction ratios of L1 = (2,-2,1) Direction ratios of L2 = (2,1,-2) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be...
prove that
To find – The value of λ Direction ratios of L1 = (-3,2λ,2) Direction ratios of L2 = (3λ,1,-5) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of the second, then...
To prove – The lines are perpendicular to each other Direction ratios of L1 = (2,-3,4) Direction ratios of L2 = (2,4,2) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be...
Find the angle between each of the following pairs of lines:
To find – Angle between the two pair of lines Direction ratios of L1 = (-3,-2,0) Direction ratios of L2 = (1,-3,2) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that...
To find – Angle between the two pair of lines Direction ratios of L1 = (1,0,-1) Direction ratios of L2 = (3,4,5) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of...
To find – Angle between the two pair of lines Direction ratios of L1 = (2,1,-3) Direction ratios of L2 = (3,2,-1) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of...
To find – Angle between the two pair of lines Direction ratios of L1 = (4,3,5) Direction ratios of L2 = (1,-1,1) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of...
Find the angle between each of the following pairs of lines
To find – Angle between the two pair of lines Direction ratios of L1 = (1,1,2) Direction ratios of L2 = (3,5,4) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of...
To find – Angle between the two pair of lines Direction ratios of L1 = (2,-2,1) Direction ratios of L2 = (1,2,-2) Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of...
Tip – If (a,b,c) be the direction ratios of the first line and (a',b',c') be that of the second, then the angle between these pair of lines is given by
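The formula itself did not survive in the excerpt above; as a minimal sketch, the standard relation is cos θ = |aa′ + bb′ + cc′| / (√(a² + b² + c²) · √(a′² + b′² + c′²)). The helper below is only illustrative, using a perpendicular pair of direction ratios from one of the earlier items as a check.

```python
# Sketch of the standard angle formula for two lines with direction ratios d1 and d2:
# cos(theta) = |d1 . d2| / (|d1| * |d2|)
import math

def angle_between(d1, d2):
    dot = sum(x * y for x, y in zip(d1, d2))
    norm = math.sqrt(sum(x * x for x in d1)) * math.sqrt(sum(y * y for y in d2))
    return math.degrees(math.acos(abs(dot) / norm))

print(angle_between((2, -2, 1), (2, 1, -2)))   # 90.0 -> the lines are perpendicular
```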
The position vectors of three points A, B and C are
CBSE, Class 12, Exercise 27B, Maths, RS Aggarwal, Straight Line in Space
It can thus be written as: A = (-4,2,-3) B = (1,3,-2) C = (-9,1,-4) To prove – A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios of the line...
Find the values of λ and μ so that the points A(-1, 4, -2), B(λ, μ 1) and C(0, 2, -1) are collinear.
Answer Given - A = (-1,4,-2) B = (λ,μ,1) C = (0,2,-1) To find – The value of λ and μ so that A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios...
Find the values of λ and μ so that the points A(3, 2, -4), B(9, 8, -10) and C(λ, μ -6) are collinear.
Answer Given - A = (3,2,-4) B = (9,8,-10) C = (λ,μ,-6) To find – The value of λ and μ so that A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction...
Find the value of λ for which the points A(2, 5, 1), B(1, 2, -1) and C(3, λ, 3) are collinear.
Answer Given - A = (2,5,1) B = (1,2,-1) C = (3,λ,3) To find – The value of λ so that A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios of the...
Show that the points A(2, 3, -4), B(1, -2, 3) and C(3, 8, -11) are collinear.
Answer Given - A = (2,3,-4) B = (1,-2,3) C = (3,8,-11) To prove – A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios of the line PQ is given by...
Show that the points A(2, 1, 3), B(5, 0, 5) and C(-4, 3, -1) are collinear.
Answer Given - A = (2,1,3) B = (5,0,5) C = (-4,3,-1) To prove – A, B and C are collinear Formula to be used – If P = (a,b,c) and Q = (a',b',c'),then the direction ratios of the line PQ is given by...
Find the image of the point (2, -1, 5) in the line
CBSE, Class 12, Exercise 27A, Maths, RS Aggarwal, Straight Line in Space
Find the image of the point $(2,-1,5)$ in the line $\overrightarrow{\mathrm{r}}=(11 \hat{i}-2 \hat{j}-8 \hat{\mathrm{k}})+\lambda(10 \hat{i}-4 \hat{j}-11 \hat{\mathrm{k}})$...
Find the image of the point (5, 9, 3) in the line
CBSE, Class 12
Find the image of the point $(5,9,3)$ in the line $\frac{\mathrm{x}-1}{2}=\frac{\mathrm{y}-2}{3}=\frac{\mathrm{z}-3}{4}$. Answer Given: Equation of line is...
Find the image of the point $(0,2,3)$ in the line $\frac{\mathrm{x}+3}{5}=\frac{\mathrm{y}-1}{2}=\frac{\mathrm{z}+4}{3}$. Answer Given: Equation of line is...
Find the coordinates of the foot of the perpendicular drawn from the point A(1, 8, 4) to the line joining the points B(0, -1, 3) and C(2, -3, -1).
Find the coordinates of the foot of the perpendicular drawn from the point $A(1,8,4)$ to the line joining the points $B(0,-1,3)$ and $C(2,-3,-1)$ Answer Given: perpendicular drawn from point...
Find the coordinates of the foot of the perpendicular drawn from the point A(1, 2, 1) to the line joining the points B(1, 4, 6) and C(5, 4, 4).
Find the coordinates of the foot of the perpendicular drawn from the point $A(1,2,1)$ to the line joining the points $B(1,4,6)$ and $C(5,4,4)$. Answer Given: perpendicular drawn from point...
Find the vector equation of a line passing through the point having the position vector
Find the vector equation of a line passing through the point having the position vector $(\hat{i}+2 \hat{j}-3 \hat{k})$ and parallel to the line joining the points with position vectors...
Find the vector equation of a line passing through the point A(3, -2, 1) and parallel to the line joining the points B(-2, 4, 2) and C(2, 3, 3). Also, find the Cartesian equations of the line.
Find the vector equation of a line passing through the point $A(3,-2,1)$ and parallel to the line joining the points $\mathrm{B}(-2,4,2)$ and $\mathrm{C}(2,3,3) .$ Also, find the Cartesian equations...
Find the vector and Cartesian equations of the line joining the points whose position vectors are
Find the vector and Cartesian equations of the line joining the points whose position vectors are $(\hat{i}-2 \hat{j}+\hat{k})$ and $(\hat{i}+3 \hat{j}-2 \hat{k})$ Answer Given: line passes through...
Find the vector and Cartesian equations of the line passing through the points A(2, -3, 0) and B(-2, 4, 3).
Find the vector and Cartesian equations of the line passing through the points $A(2,-3,0)$ and $B(-2,4,$, 3). Answer Given: line passes through the points $(2,-3,0)$ and $(-2,4,3)$ To find: equation...
Find the vector and Cartesian equations of the line passing through the points A(3, 4, -6) and B(5, -2, 7).
Find the equations of the line passing through the point $(1,-2,3)$ and parallel to the line $\frac{\mathrm{x}-6}{3}=\frac{\mathrm{y}-2}{-4}=\frac{\mathrm{Z}+7}{5}$. Also find the vector form of...
Find the coordinates of the foot of the perpendicular drawn from the point (1, 2, 3) to the line
Find the coordinates of the foot of the perpendicular drawn from the point $(1,2,3)$ to the line $\frac{\mathrm{x}-6}{3}=\frac{\mathrm{y}-7}{2}=\frac{\mathrm{z}-7}{-2}$. Also, find the length of the...
Show that the lines and do not intersect each other.
CBSE, Class 12, Exercise 27A, Maths, Maths, RS Aggarwal, RS Aggarwal, Straight Line in Space
Show that the lines $\frac{\mathrm{x}-1}{2}=\frac{\mathrm{y}+1}{3}=z$ and $\frac{\mathrm{x}+1}{5}=\frac{\mathrm{y}-2}{1}, Z=2$ do not intersect each other. Answer Given: The equations of the two...
Show that the lines and intersect each other. Also, find the point of their intersection.
Show that the lines $\frac{\mathrm{x}-1}{2}=\frac{\mathrm{y}-2}{3}=\frac{\mathrm{Z}-3}{4}$ and $\frac{\mathrm{x}-4}{5}=\frac{\mathrm{y}-1}{2}=\mathrm{Z}$ intersect each other. Also, find the point...
Prove that the lines
Prove that the lines $\frac{\mathrm{x}-4}{1}=\frac{\mathrm{y}+3}{4}=\frac{\mathrm{z}+1}{7}$ and $\frac{\mathrm{x}-1}{2}=\frac{\mathrm{y}+1}{-3}=\frac{\mathrm{z}+10}{8}$ intersect each other and find...
Find the Cartesian and vector equations of the line passing through the point (1, 2, -4) and
Find the Cartesian and vector equations of the line passing through the point $(1,2,-4)$ and perpendicular to each of the lines...
Find the equations of the line passing through the point (-1, 3, -2) and perpendicular to each of the lines and
Find the equations of the line passing through the point $(-1,3,-2)$ and perpendicular to each of the lines $\frac{\mathrm{X}}{1}=\frac{\mathrm{y}}{2}=\frac{\mathrm{Z}}{3}$ and...
Find the Cartesian and vector equations of a line which passes through the point (1, 2, 3) and is parallel to the line
Find the Cartesian and vector equations of a line which passes through the point $(1,2,3)$ and is parallel to the line $\frac{-\mathrm{X}-2}{1}=\frac{\mathrm{y}+3}{7}=\frac{2 Z-6}{3}$ Answer Given:...
Find the equations of the line passing through the point (1, -2, 3) and parallel to the line Also find the vector form of this equation so obtained.
Find the Cartesian equations of the line which passes through the point (1, 3, -2) and is parallel to the line Also, find the vector form of the equations so obtained.
Find the Cartesian equations of the line which passes through the point $(1,3,-2)$ and is parallel to the line given by $\frac{\mathrm{x}+1}{3}=\frac{\mathrm{y}-4}{5}=\frac{\mathrm{z}+3}{-6}$. Also,...
The Cartesian equations of a line are 3x + 1 = 6y – 2 = 1 – z. Find the fixed point through which it passes, its direction ratios and also its vector equation.
The Cartesian equations of a line are $3 x+1=6 y-2=1-z$. Find the fixed point through which it passes, its direction ratios and also its vector equation. Answer Given: Cartesian equation of line are...
The Cartesian equations of a line Find the vector equation of the line.
The Cartesian equations of a line are $\frac{\mathrm{x}-3}{2}=\frac{\mathrm{y}+2}{-5}=\frac{\mathrm{Z}-6}{4}$. Find the vector equation of the line. Answer Given: Cartesian equation of line...
A line is drawn in the direction and it passes through a point with position vector Find the equations of the line in the vector as well as Cartesian forms.
A line is drawn in the direction of $(\hat{\mathrm{i}}+\hat{\mathrm{j}}-2 \hat{\mathrm{k}})$ and it passes through a point with position vector $(2 \hat{i}-\hat{j}-4 \hat{k}) .$ Find the equations...
Find the vector equation of the line passing through the point with position vector and parallel to the vector Deduce the Cartesian equations of the line.
Find the vector equation of the line passing through the point with position vector $(2 \hat{\mathrm{i}}+\hat{\mathrm{j}}-5 \hat{\mathrm{k}})$ and parallel to the vector $(\hat{i}+3...
A line passes through the point (2, 1, -3) and is parallel to the vector Find the equations of the line in vector and Cartesian forms.
CBSE, Class 12, Exercise 27A, Maths, Maths, RS Aggarwal, Straight Line in Space
A line passes through the point $(2,1,-3)$ and is parallel to the vector $(\hat{\mathrm{i}}-2 \hat{\mathrm{j}}+3 \hat{\mathrm{k}})$. Find the equations of the line in vector and Cartesian forms....
A line passes through the point (3, 4, 5) and is parallel to the vector Find the equations of the line in the vector as well as Cartesian forms.
Answer Given: line passes through point $(3,4,5)$ and is parallel to $2 \hat{\imath}+2 \hat{\jmath}-3 \hat{k}$ To find: equation of line in vector and Cartesian forms Formula Used: Equation of a...
Evaluate:
CBSE, Class 11, Combinations, Excercise 9A, Maths, RS Aggarwal
90C88 Answer: We know that nCr = n!/[r!(n−r)!], so 90C88 = 90C2 = (90 × 89)/2 = 4005. Ans: 90C88 = 4005
16C13 Answer: We know that nCr = n!/[r!(n−r)!], so 16C13 = 16C3 = (16 × 15 × 14)/(3 × 2 × 1) = 560. Ans: 16C13 = 560
20C4 Answer: We know that nCr = n!/[r!(n−r)!], so 20C4 = (20 × 19 × 18 × 17)/(4 × 3 × 2 × 1) = 4845. Ans: 20C4 = 4845
Question consists of two statements, namely, Assertion (A) and Reason (R). For selecting the correct answer, use the following code:
CBSE, Class 10, Statistics
(a) Both Assertion (A) and reason (R) are true and Reason (R) is a correct explanation of Assertion (A). (b) Both Assertion (A) and reason (R) are true and Reason (R) is not a correct explanation of...
Match the following:
The mean of 2, 7, 6 and x is 15 and mean of 18, 1, 6, x and y is 10. What is the value of y? (a) 5 (b) 10 (c) -20 (d) 30
If the median of the data 4, 7, x-1, x-3, 16, 25, written in ascending order, is 13 then x is equal to
(a) 13 (b) 14 (c) 15 (d) 16 Answer: (c) 15 Sol: Median of 6 numbers is the average of the 3rd and 4th terms. ∴ 13 = [(x−1)+(x−3)]/2 ⇒ 26 = 2x − 4 ⇒ 2x = 30 ⇒ x = 15 Thus, x is equal to...
The mean of 20 numbers is 0. Of them, at the most, how many may be greater than zero?
(a) 0 (b) 1 (c) 10 (d) 19 Answer: (d) 19 Sol: It is given that mean of 20 numbers is zero. i.e., average of 20 numbers is zero. i.e., sum of 20 numbers is zero. Thus, at most, there can be 19...
The median of the first 8 prime numbers is
(a) 7 (b) 9 (c) 11 (d) 13 Answer: (b) 9 Sol: First 8 prime numbers are 2, 3, 5, 7, 11, 13, 17 and 19. Median of 8 numbers is average of 4th and 5th terms.
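A quick way to check answers like this one (a minimal illustrative sketch, not part of the original solution):

```python
# Checking the median of the first 8 prime numbers with the statistics module.
import statistics

primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(statistics.median(primes))   # 9.0, the average of the 4th and 5th terms
```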
Look at the cumulative frequency distribution table given below:
Answer: (c) 13 Sol: Converting the given data into a frequency table, we get: Hence, the number of families having an income range of Rs. 20,000 – Rs. 25,000 is 13. The correct option is...
For a symmetrical frequency distribution, we have:
(a) mean < mode < median (b) mean > mode > median (c) mean = mode = median (d) mode = (1/2)(mean + median) Answer: (c) mean = mode = median Sol: A symmetric distribution is one where the left...
The median and mode of a frequency distribution are 26 and 29 respectively. Then, the mean is
(a) 27.5 (b) 24.5 (c) 28.4 (d) 25.8 Answer: (b) 24.5 Sol: Mode = (3 × median) − (2 × mean) ⇒ (2 × mean) = (3 × median) − mode ⇒ (2 × mean) = 3 × 26 − 29 ⇒ (2 × mean) = 49 ⇒ Mean = 49/2 ∴ Mean = 24.5...
The mean and mode of a frequency distribution are 28 and 16 respectively. The median is
(a) 22 (b) 23.5 (c) 24 (d) 24.5 Answer: (c) 24 Sol: Mode = (3 × median) − (2 × mean) ⇒ (3 × median) = (mode + 2 mean) ⇒ (3 × median) = 16 + 56 ⇒ (3 × median) = 72 ⇒ Median = 72/3 ∴...
Consider the following table:
Look at the frequency distribution table given below:
If the mean and median of a set of numbers are 8.9 and 9 respectively, then the mode will be
(a) 7.2 (b) 8.2 (c) 9.2 (d) 10.2 Answer: (c) 9.2 Sol: It is given that the mean and median are 8.9 and 9, respectively, ∴ Mode = (3 × Median) – (2 × Mean) ⇒ Mode = (3 × 9) – (2 × 8.9) = 27 – 17.8 =...
Median =?
Mode = ?
Consider the following frequency distribution
Consider the frequency distribution of the heights of 60 students of a class
if the 'less than type' ogive and 'more than type' ogive intersect each other at (20.5, 15.5) then the median of the given data is
(a) 5.5 (b) 15.5 (c) 20.5 (d) 36.0 Answer: (c) 20.5 Sol: The x- coordinate represents the median of the given data. Thus, median of the given data is 20.5.
The relation between mean, mode and median is
(a) mode=(3 * mean) – (2 * median) (b) mode=(3 *median) – (2 *mean) (c) median=(3 * mean) – (2 * mode) (d) mean=(3 * median) – (2 *mode) Answer: (b) mode=(3 * median) – (2 *mean) Sol: mode=(3 *...
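A small illustrative sketch of how the empirical relation mode = 3 × median − 2 × mean ties together the answers above (the helper names are hypothetical):

```python
# The empirical relation mode = 3*median - 2*mean lets any one of the three
# measures be recovered from the other two.
def mean_from(median, mode):
    return (3 * median - mode) / 2

def median_from(mean, mode):
    return (mode + 2 * mean) / 3

def mode_from(mean, median):
    return 3 * median - 2 * mean

print(mean_from(median=26, mode=29))    # 24.5
print(median_from(mean=28, mode=16))    # 24.0
print(mode_from(mean=8.9, median=9))    # 9.2 (approximately)
```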
While computing the mean of the groue data, we assume that the frequencies are
(a) evenly distributed over the classes (b) centred at the class marks of the classes (c) centred at the lower limits of the classes (d) centred at the upper limits of the classes Answer: (b)...
In the formula for finding the mean of grouped data, the d_i's are the deviations from A of
(a) lower limits of the classes (b) upper limits of the classes (c) midpoints of the classes (d) none of these Answer: (c) midpoints of the classes Sol: The d_i's are the deviations from A...
For finding the mean by using the formula,
If x_i's are the midpoints of the class intervals of grouped data, f_i's are the corresponding frequencies and x̄ is the mean, then
The abscissa of the point of intersection of the Less Than Type and of the More Than Type cumulative frequency curves of a grouped data gives its
(a) Mean (b) Median (c) Mode (d)None of these Answer: (b) Median Sol: The abscissa of the point of intersection of the 'less than type' and that of the 'more than type' cumulative...
The cumulative frequency table is useful is determining the
(a) Mean (b) Median (c) Mode (d) all of these Answer: (b) Median Sol: The cumulative frequency table is useful in determining the median.
The medium of a frequency distribution is found graphically with the help of
(a) a histogram (b) a frequency curve (c) a frequency polygon (d) ogives Answer: (d) ogives Sol: This because median of a frequency distribution is found graphically...
The mode of frequency distribution is obtained graphically from
(a) a frequency curve (b) a frequency polygon (c) a histogram (d) an ogive Answer: (c) a histogram Sol: The mode of a frequency distribution can be obtained graphically from a histogram.
Which of the following measures of central tendency is influence by extreme values?
(a) Mean (b) Median (c) Mode (d) None of these Answer: (a) Mean Sol: Mean is influenced by extreme values.
Which of the following cannot be determined graphically?
(a) Mean (b) Median (c) Mode (d) None of these Answer: (a) Mean Sol: The mean cannot be determined graphically because the values cannot be summed.
Which of the following is not a measure of central tendency?
(a) Mean (b) Mode (c) Median (d) Standard Deviation Answer: (d) Standard Deviation Sol: The standard deviation is a measure of dispersion. It is the action or process of distributing...
Calculate the missing frequency form the following distribution, it being given that the median of the distribution is 24.
From the following table, construct the frequency distribution of the percentage of marks obtained by 2300 students in a competitive examination.
The following table gives the life-time (in days) of 100 electric bulbs of a certain brand.
The following frequency distribution gives the monthly consumption of electricity ofr 64 consumers of locality.
In the following data, find the values of p and q. Also, find the median class and modal class.
The following are the ages of 300 patients getting medical treatment in a hospital on a particular day:
Find the mode of the given data:
What is the cumulative frequency of the modal class of the following distribution?
If the median of x/5, x/4, x/2, x and x/3, where x > 0, is 8, find the value of x. Hint: Arranging the observations in ascending order, we have x/5, x/4, x/3, x/2, x. Median = x/3 = 8.
CBSE, Class 11, Exercise 27C, Limits, Maths, RS Aggarwal
Let . Show that does not exist.
Let Find
Let Show that does not exist.
Evaluate the following limits:
CBSE, Class 11, Exercise 27B, Limits, Maths, RS Aggarwal
Evaluate the following limits: \pi 6
CBSE, Class 11, Exercise 27A, Limits, Maths, RS Aggarwal
For the binary operation ×10 set S = {1, 3, 7, 9}, find the inverse of 3.
Binary Operations, CBSE, Class 12, Exercise 3.5, Maths, RD Sharma
Answer: 1 ×10 1 = remainder obtained by dividing 1 × 1 by 10 = 1 3 ×10 7 = remainder obtained by dividing 3 × 7 by 10 = 1 7 ×10 9 = remainder obtained by dividing 7 × 9 by 10 = 3 Composition table:...
Construct the composition table for ×5 on set Z5 = {0, 1, 2, 3, 4}
Answer: 1 ×5 1 = remainder obtained by dividing 1 × 1 by 5 = 1 3 ×5 4 = remainder obtained by dividing 3 × 4 by 5 = 2 4 ×5 4 = remainder obtained by dividing 4 × 4 by 5 = 1 Composition table: ×5 0 1...
Construct the composition table for ×6 on set S = {0, 1, 2, 3, 4, 5}.
Construct the composition table for +5 on set S = {0, 1, 2, 3, 4}
Answer: 1 +5 1 = remainder obtained by dividing 1 + 1 by 5 = 2 3 +5 4 = remainder obtained by dividing 3 + 4 by 5 = 2 4 +5 4 = remainder obtained by dividing 4 + 4 by 5 = 3 Composition Table: +5 0 1...
Construct the composition table for ×4 on set S = {0, 1, 2, 3}.
Answer: Given, ×4 on set S = {0, 1, 2, 3} 1 ×4 1 = remainder obtained by dividing 1 × 1 by 4 = 1 0 ×4 1 = remainder obtained by dividing 0 × 1 by 4 = 0 2 ×4 3 = remainder obtained by dividing 2 × 3...
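A minimal sketch for generating composition tables like the ones above (the helper name is only illustrative):

```python
# Composition tables for multiplication / addition modulo n on {0, 1, ..., n-1}.
def composition_table(n, op):
    return [[op(a, b) % n for b in range(n)] for a in range(n)]

table_x4 = composition_table(4, lambda a, b: a * b)   # x4 on S = {0, 1, 2, 3}
table_p5 = composition_table(5, lambda a, b: a + b)   # +5 on S = {0, 1, 2, 3, 4}
print(table_x4[2][3])   # 2, the remainder of 2*3 divided by 4
print(table_p5[3][4])   # 2, the remainder of 3+4 divided by 5
```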
Let * be a binary operation on Z defined by a * b = a + b – 4 for all a, b ∈ Z. (i) Show that * is both commutative and associative. (ii) Find the identity element in Z
Answers: (i) Consider, a, b ∈ Z a * b = a + b – 4 = b + a – 4 = b * a a * b = b * a, ∀ a, b ∈ Z Then, * is commutative on Z. a * (b * c) = a * (b + c – 4) = a + b + c -4 – 4 = a + b + c – 8 (a * b)...
Let * be a binary operation on Q0 (set of non-zero rational numbers) defined by a * b = (3ab/5) for all a, b ∈ Q0. Show that * is commutative as well as associative. Also, find its identity element, if it exists.
Answer: Consider, a, b ∈ Q0 a * b = (3ab/5) = (3ba/5) = b * a a * b = b * a, for all a, b ∈ Q0 a * (b * c) = a * (3bc/5) = [3a (3bc/5)] /5 = 9abc/25 (a * b) * c = (3 ab/5) * c = [(3 ab/5)...
Let * be a binary operation on Q – {-1} defined by a * b = a + b + ab for all a, b ∈ Q – {-1}. Then, (i) Show that * is both commutative and associative on Q – {-1} (ii) Find the identity element in Q – {-1}
Answers: (i) Consider, a, b ∈ Q – {-1} a * b = a + b + ab = b + a + ba = b * a a * b = b * a, ∀ a, b ∈ Q – {-1} a * (b * c) = a * (b + c + b c) = a + (b + c + b c) + a (b + c + b c) = a + b +...
Let * be a binary operation on Q – {-1} defined by a * b = a + b + ab for all a, b ∈ Q – {-1}. Then, Show that every element of Q – {-1} is invertible. Also, find inverse of an arbitrary element.
Answer: Consider a ∈ Q – {-1} and let b ∈ Q – {-1} be the inverse of a. a * b = e = b * a a * b = e and b * a = e a + b + ab = 0 and b + a + ba = 0 b (1 + a) = –a ⇒ b = –a/(1 + a) ∈ Q – {-1}...
Let A = R0 × R, where R0 denote the set of all non-zero real numbers. A binary operation 'O' is defined on A as follows: (a, b) O (c, d) = (ac, bc + d) for all (a, b), (c, d) ∈ R0 × R. (i) Show that 'O' is commutative and associative on A (ii) Find the identity element in A
Answers: (i) Consider, X = (a, b) Y = (c, d) ∈ A, ∀ a, c ∈ R0 b, d ∈ R X O Y = (ac, bc + d) Y O X = (ca, da + b) X O Y ≠ Y O X in general, so O is not commutative on A. X = (a, b), Y = (c, d) and Z = (e,...
Let A = R0 × R, where R0 denote the set of all non-zero real numbers. A binary operation 'O' is defined on A as follows: (a, b) O (c, d) = (ac, bc + d) for all (a, b), (c, d) ∈ R0 × R. Find the invertible element in A.
Answer: Consider, F = (m, n) be the inverse in A ∀ m ∈ R0 and n ∈ R X O F = E F O X = E (am, bm + n) = (1, 0) and (ma, na + b) = (1, 0) Considering (am, bm + n) = (1, 0) am = 1 m = 1/a And bm + n =...
Let * be a binary operation on Z defined by a * b = a + b – 4 for all a, b ∈ Z. Find the invertible element in Z.
Answer: Consider, a ∈ Z and b ∈ Z be the inverse of a. a * b = e = b * a a * b = e and b * a = e a + b – 4 = 4 and b + a – 4 = 4 b = 8 – a ∈ Z Hence, 8 – a is the inverse of a ∈...
Find the identity element in the set of all rational numbers except – 1 with respect to * defined by a * b = a + b + ab
Answer: Consider, e be the identity element in I+ with respect to * such that a * e = a = e * a, ∀ a ∈ Q – {-1} a * e = a and e * a = a, ∀ a ∈ Q – {-1} a + e + ae = a and e + a + ea = a, ∀ a ∈ Q –...
Find the identity element in the set I+ of all positive integers defined by a * b = a + b for all a, b ∈ I+.
Answer: Consider, e be the identity element in I+ with respect to * a * e = a = e * a, ∀ a ∈ I+ a * e = a and e * a = a, ∀ a ∈ I+ a + e = a and e + a = a, ∀ a ∈ I+ e = 0, ∀ a ∈ I+ Hence, 0 is the...
The median of 19 observations is 30. Two more observation are made and the values of these are 8 and 32. Find the median of the 21 observations taken together. Hint Since 8 is less than 30 and 32 is more than 30, so the value of median (middle value) remains unchanged.
Sol: Since, 8 is less than 30 and 32 is more than 30, so the middle value remains unchanged Thus, the median of 21 observations taken together is 30.
The observations 29, 32, 48, 50, x, x+2, 72, 78, 84, 95 are arranged in ascending order. What is the value of x if the median of the data is 63?
In a frequency distribution table with 12 classes, the class-width is 2.5 and the lowest class boundary is 8.1, then what is the upper class boundary of the highest class?
Sol: Upper class boundary = Lowest class boundary + width × number of classes = 8.1 + 2.5×12 = 8.1 + 30 = 38.1 Thus, upper class boundary of the highest class is 38.1.
The distribution X and Y with total number of observations 36 and 64, and mean 4 and 3 respectively are combined. What is the mean of the resulting distribution X + Y?
While calculating the mean of a given data by the assumed-mean method, the following values were obtained.
Find the class marks of classes 10 -25 and 35 – 55.
In a class test, 50 students obtained marks as follows:
For a certain distribution, mode and median were found to be 1000 and 1250 respectively. Find mean for this distribution using an empirical relation.
A data has 25 observations arranged in a descending order. Which observation represents the median?
What is the lower limit of the modal class of the following frequency distribution?
Write the median class of the following distribution:
From the following data, draw the two types of cumulative frequency curves and determine the median:
The marks obtained by 100 students of a class in an examination are given below:
From the following frequency, prepare the 'more than' ogive.
The table given below shows the weekly expenditures on food of some households in a locality
The following table gives the production yield per hectare of wheat of 100 farms of a village.
The monthly consumption of electricity (in units) of some families of a locality is given in the following frequency distribution:
The heights of 50 girls of Class X of a school are recorded as follows:
Draw a 'more than' ogive for the data given below which gives the marks of 100 students.
The given distribution shows the number of wickets taken by the bowlers in one-day international cricket matches:
Find the median of the following data by making a 'less than ogive'.
The table below shows the daily expenditure on food of 30 households in a locality:
The following table gives the daily income of 50 workers of a factory:
A survey regarding the heights (in cm) of 50 girls of a class was conducted and the following data was obtained:
Find the mean, median and mode of the following data:
The agewise participation of students in the annual function of a school is shown in the following distribution.
Compute the mode from the following data:
Compute the mode from the following series:
Calculate the mode from the following data:
Given below is the distribution of total household expenditure of 200 manual workers in a city:
Find the mode of the following distribution:
| CommonCrawl |
Antenna surface plasmon emission by inelastic tunneling
Cheng Zhang1,
Jean-Paul Hugonin1,
Anne-Lise Coutrot1,
Christophe Sauvan ORCID: orcid.org/0000-0002-8360-92541,
François Marquier ORCID: orcid.org/0000-0003-3118-11502 &
Jean-Jacques Greffet ORCID: orcid.org/0000-0002-4048-21501
Nature Communications volume 10, Article number: 4949 (2019)
Nanophotonics and plasmonics
Nanoscale devices
Photonic devices
Sub-wavelength optics
Surface plasmon polaritons are mixed electronic and electromagnetic waves. They have become a workhorse of nanophotonics because plasmonic modes can be confined in space at the nanometer scale and in time at the 10 fs scale. However, in practice, plasmonic modes are often excited using diffraction-limited beams. In order to take full advantage of their potential for sensing and information technology, it is necessary to develop a microscale ultrafast electrical source of surface plasmons. Here, we report the design, fabrication and characterization of nanoantennas to emit surface plasmons by inelastic electron tunneling. The antenna controls the emission spectrum, the emission polarization, and enhances the emission efficiency by more than three orders of magnitude. We introduce a theoretical model of the antenna in good agreement with the results.
Surface plasmon polaritons (SPPs) are mixed electronic and electromagnetic modes propagating along the interface between a metal and a dielectric1. The corresponding electromagnetic field can be confined to nanometer scale and has a decay time on the order of 10 fs so that a new generation of devices can be designed2,3,4,5. SPPs are widely used for sensing and also for information technology6,7,8,9,10,11. An important limitation of surface plasmons is that in most cases, they are optically excited by a diffraction-limited beam so that the device size is larger than the wavelength. In order to fully benefit from the field confinement, it is necessary to generate SPPs in the near field with a subwavelength device. Electrical generation of surface plasmons has been demonstrated using LED-based platforms12,13,14,15,16 with time scales on the order of 1 ns. Ultrafast emission can be achieved using light emission by inelastic tunneling (LEIT) through a tunnel junction.
LEIT through a planar tunnel junction was discovered by Lambe and McCarthy17. This process is rather inefficient with a typical efficiency on the order of one photon per a million electrons. However, LEIT has important advantages. Firstly, it is intrinsically fast. The fundamental limit is given by the tunneling time18,19, which is on the order of h/eV where e is the electron charge, V the applied voltage on the order of 1 V and h is Planck's constant yielding a time limit on the order of 4 fs. In practice, the limit is given by the circuit time constant RC. Photon sources based on LEIT have been operated at 1 GHz20. Secondly, LEIT can be a highly localized source. The source current density can be confined in a nanoantenna or in a metallic tip enabling electromagnetic excitation localized at the nanometer scale. Both photon21 and plasmon22,23,24 emission have been reported using scanning tunneling microscope (STM) tips. More recently, surface plasmon emission has been reported with metallic microstructures25 and molecular junctions26. Theoretical models are available for LEIT18,27,28,29,30,31, and for the role of the gap plasmon mode32. Hence, antenna surface plasmon emission by inelastic tunneling (ASPEIT) appears to be a suitable candidate for ultrafast and highly localized plasmon emission33.
However, several issues need to be solved to use inelastic tunneling as a light source. It is required to control the emission spectrum, the angular emission pattern and to increase the emission efficiency. In order to tackle these issues, resonant nanoantennas can be used. The rationale for using an antenna is based on different properties. Firstly, a resonant antenna can select the frequency emission34,35,36. Secondly, a resonant antenna can be designed in order to control both the polarization and the angular emission pattern37,38. Thirdly, a resonant plasmonic antenna contributes to the local density of states in the junction. If the contribution of the antenna mode is larger than the contribution of the non-radiative modes, it becomes possible to avoid quenching and therefore increase the radiative efficiency35,39,40 in a controlled and deterministic way. It is important to stress that there is no fundamental limit to the LEIT efficiency. We note in particular that in the microwave regime, the non-radiative modes can be suppressed using dissipation-less superconductors. Furthermore, impedance matching between the junction and a 50-Ω line can be achieved by microwave engineering so that photon emission in a 50-Ω line can reach very high efficiencies28. Photon emission in the visible by electrically driven optical antennas has been reported demonstrating a spectral control of the emission41,42 and also a directional control43. Spectral control of LEIT using resonant nanocubes has been demonstrated recently44 although only a small fraction of the tunneling current was affected by the antenna. These results demonstrate the potential of plasmonic nanoantennas to tailor LEIT. All these demonstrations deal with light emission. No antennas have been reported to emit surface plasmons so far. Reported surface plasmon sources are based on the use of a metallic tip22,23,24 or a planar junction25,26. While it is known that a localized gap plasmon takes place between the tip and the surface, no control of this mode using antennas has been reported.
There are still many issues to be solved before developing electrical surface plasmon sources using inelastic tunneling assisted by antennas: (i) the tunnel barrier width, which plays a critical role in the success of the emission process, is not deterministically controlled in the reported experiments, (ii) the lifetime of the antenna is still an outstanding issue, (iii) LEIT is limited to very small power on the order of 1 fW for STM junctions or on the order of 10 fW for planar junctions while pW are needed for practical applications, (iv) the coupling between the antenna mode and the propagating plasmon mode has to be optimized, (v) a theoretical analysis of the emitted power as a function of the current fluctuations in the presence of an antenna is needed in order to analyze and optimize the emission.
Here, we report the design, fabrication and characterization of antennas to emit surface plasmons propagating along an aluminum/air interface. We also report an original theoretical model of the antenna, showing that the relevant figure of merit is not the Purcell factor but the spatially averaged enhancement factor in the junction barrier. The antenna produces a narrow emission spectrum and provides an efficiency enhancement of more than three orders of magnitude compared to a planar junction. The emitted power is on the order of 10 pW, four orders of magnitude larger than that of an STM tip.
ASPEIT junction and electrical characterization
The electrical surface plasmon source consists in a gold nanopatch antenna deposited on an Al/AlOx interface (see Fig. 1a). A plasmon is emitted when an electron tunnels inelastically from gold to aluminum through a 3-nm-thick AlOx tunnel barrier as shown in Fig. 1b. The gap between gold and aluminum supports gap plasmons, which are reflected at both edges of the antenna separated by a width D = 128 nm forming a Fabry–Perot cavity45 (Fig. 1a). Due to partial transmission at the edges, plasmons are emitted by the antenna and propagate along the Al/air interface. In order to observe them, the aluminum thickness was chosen to be 25 nm so that plasmons can leak in the glass substrate. The geometry of the antenna needs to be optimized so that the field of the antenna mode is efficiently coupled to both the current density and the radiated plasmonic field34. The thin gap and the resonant behavior of the Fabry–Perot cavity serves to enhance the field in the plasmonic cavity in order to couple to the current density. The radiation of propagating surface plasmons depends on the height of the antenna. Both effects are captured by the theoretical form of the signal derived in the Supplementary Note 5 where we introduce a figure of merit, which is the spatially averaged enhancement factor. In order to increase the emitted optical power, we have fabricated a finite size array of 25 µm width where the 60-µm-long and 128-nm-wide antennas are periodically arranged with a period of 400 nm (Fig. 1c). We checked that the device is operating in the tunnel regime by measuring the current density (J) as a function of the applied voltage. The characteristic J(V) curve is depicted in Fig. 2 in linear plot (Fig. 2a) and semi-log plot (Fig. 2b) displaying the typical exponential dependence. A fit of the data with Simmon's model46 allows the estimation of the barrier thickness (3 nm) and the mean barrier height (2.01 ± 0.03 eV) using an effective mass of the electron of 0.23me47. As an additional check of the tunnel regime, we show in the Supplementary Note 4 that the current is proportional to the area of the device for four different devices and we also show that the J(V) curve does not depend on temperature.
Principle of antenna surface plasmon emission by inelastic tunneling (ASPEIT). a Schematic of the device consisting of a gold patch antenna (thickness 50 nm and width D) on an aluminum film (thickness 25 nm) with a 3-nm-thick tunnel barrier of AlOx in between. Gap plasmons propagating along the x-axis under the gold patch are reflected at the edges forming a Fabry–Perot cavity. The surface plasmons transmitted at the edges propagate along the Al/air interface. They are observed by leakage radiation microscopy through a finite thickness of Al. When applying a voltage bias V, SPPs can be excited by inelastic tunneling. b Energy level diagram for the metal–insulator–metal (MIM) tunnel junction with a voltage bias V. The black arrow shows the maximum electron energy loss, which is the maximum plasmon energy. c Optical microscope image showing the periodical antenna junction (inside the white square). A SEM image shows a zoom of the array of linear antennas (width D = 128 nm, period 400 nm)
J(V) characteristic of ASPEIT junction. a Red dots: J(V) experimental data, blue line: fitted curve based on the Simmons model (fitting parameters: barrier thickness of 3 nm and mean barrier height of 2.01 ± 0.03 eV, effective electron mass 0.23me). b Semi-log plot of J(V) showing the exponential behavior
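A minimal sketch of a Simmons-type fit of the kind shown in Fig. 2 (this is illustrative only, not the authors' code; the intermediate-bias form of the Simmons model and the parameter names are assumptions, while the effective mass 0.23 m_e follows the value quoted in the text):

```python
# Hedged sketch: fitting J(V) data with an intermediate-bias Simmons tunneling model.
# Fit parameters: barrier thickness d (nm) and mean barrier height phi (eV).
import numpy as np
from scipy.optimize import curve_fit

e = 1.602e-19                # elementary charge (C)
hbar = 1.055e-34             # reduced Planck constant (J*s)
m_eff = 0.23 * 9.109e-31     # effective electron mass (kg)

def simmons_J(V, d_nm, phi_eV):
    """Tunneling current density (A/m^2) for a symmetric barrier at bias V (V)."""
    d, phi = d_nm * 1e-9, phi_eV * e
    pref = e / (4 * np.pi**2 * hbar * d**2)
    def term(E):
        # contribution of an effective barrier height E (in joules)
        return E * np.exp(-(2 * d / hbar) * np.sqrt(2 * m_eff * E))
    return pref * (term(phi - e * V / 2) - term(phi + e * V / 2))

# V_data (V) and J_data (A/m^2) would be the measured characteristic:
# popt, _ = curve_fit(simmons_J, V_data, J_data, p0=[3.0, 2.0])
# print("d = %.2f nm, phi_bar = %.2f eV" % tuple(popt))
```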
Antenna surface plasmon emission
We now turn to the optical characterization of the emitted surface plasmons. The optical observation of the SPPs emitted by the Al/AlOx/Au junction is based on leakage radiation microscopy (LRM) using an inverted optical microscope. We use an oil objective (NA = 1.3) to collect the leakage of the emitted SPPs through the glass substrate when the junctions are biased. We show in Fig. 3a an optical microscope image of the device taken through the glass substrate. The bright area in the center corresponds to the array of antennas with a width of 25 µm. The direction of the antennas is along the y-axis. It is seen that on both sides of the array of antennas, light is also collected. This is due to radiative leakage of surface plasmons propagating away from the antennas. To prove the plasmonic character of the emitted light, we averaged the intensity along y in the area indicated by the dashed rectangle shown in Fig. 3a. The result is plotted as a function of x in Fig. 3b in a semi-log scale showing clearly an exponential decay with a decay length of 4.9 µm, which is smaller than theoretically predicted (8.5 µm at 850 nm) when using the data of ref. 48. This value is known to be highly dependent on the deposition procedure48. To further check the plasmonic character of the emitted light, we recorded the far-field angular spectrum in the back focal plane as displayed in Fig. 3c. It is clearly seen that light is predominantly emitted for wavevectors slightly larger than the vacuum wavevector k0 = ω/c, a clear signature of surface plasmons propagating along an Al/air interface. An interesting non-uniform intensity pattern due to the antenna periodicity is observed as a function of the angle. To understand the data, we have developed a theoretical model of ASPEIT. A detailed derivation is given in Supplementary Note 5. The simulated emission pattern at 850 nm (1.46 eV) shown in Fig. 3d is in good agreement with the experimental data, which have a broader spectrum.
Plasmon emission. a Image of the source plane showing the electrical excitation of SPPs (under a bias voltage of 1.6 V). The white rectangle indicates the area of the array of antennas (the scale bar is 25 μm). b Cross-section profile along the x-axis in a semi-log plot, averaged over the y-axis in the yellow dashed line rectangle (red dots are experimental data, the blue smoothed curve is a guide to the eye to see the SPP profile; the propagation length of the SPP is 4.9 ± 0.3 μm). c Experimental back focal plane image, showing the plasmon emission between k/k0 = 1.0 (light cone in air) and k/k0 = 1.3 (immersion objective numerical aperture), where k0 = ω/c and k2 = kx2 + ky2, kx = n k0 sinθ cosϕ, where n = 1.5 is the refractive index of the substrate. d Simulated back focal plane image (at a wavelength of 850 nm and a patch width of 132 nm)
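A minimal sketch of how a propagation length like the 4.9 µm value of Fig. 3b can be extracted from the y-averaged profile (illustrative only; the background term, the synthetic data and the variable names are assumptions, not the authors' procedure):

```python
# Fitting an exponential decay I(x) = A*exp(-x/L_spp) + background to a leakage profile.
import numpy as np
from scipy.optimize import curve_fit

def decay(x, A, L_spp, bg):
    return A * np.exp(-x / L_spp) + bg

x_um = np.linspace(0.0, 15.0, 60)            # distance from the antenna array edge (um)
I_x = 1.0 * np.exp(-x_um / 4.9) + 0.02       # synthetic profile used only for illustration
popt, _ = curve_fit(decay, x_um, I_x, p0=[1.0, 5.0, 0.0])
print("fitted SPP propagation length: %.1f um" % popt[1])
```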
We now study the spectrum of the emitted field. To proceed, we collect the signal emitted toward the substrate and send it to a spectrometer (Andor Shamrock 750i). The data are shown in Fig. 4a–c and the theoretical simulations are shown in Fig. 4 (d–f) for three different bias voltages 1.4, 1.5 and 1.6 V. Note that the calculations account for the detector responsivity, responsible for the decay of the signal at low energy. We observe two spectral peaks at 1.2 and 1.5 eV that are well recovered by the model. Here, we see the ability of the antenna to control the light emission spectrum. We note that to recover the spectrum shape and width, we had to average over different antenna widths in order to account for width fluctuations of the antenna (see Supplementary Fig. 2). The two peaks correspond to the two modes of the Fabry–Perot cavity shown in Fig. 4g–h. We report in Supplementary Fig. 10 emission spectra of antennas with different widths showing different emission frequencies. A cutoff of light emission is expected around eV/h (indicated by a vertical dashed line in Fig. 4a–f for inelastic light emission) but not observed. The spectrum is significantly broadened and shifted from the theoretical eV/h value by the finite temperature (300 K) and by the antenna resonance. When repeating the simulation at 0 K as shown in Supplementary Fig. 9, a clear cutoff is observed. For the sake of comparison, we plot in Fig. 4d–f the theoretical emission spectrum of a planar junction at three different voltages multiplied by a factor of 500. This shows the two roles of the antenna: spectral control and efficiency enhancement.
Controlling the emission spectrum. a–c Experimental SPP emission spectra at three different biases (1.4–1.6 V, width of patch is around 128 nm, period of patch is 400 nm, spectra intensity have been normalized by the maximum power with the bias of 1.6 V); d–f theoretical normalized SPP emission spectra with same bias voltage as the figures (a–c). The emission spectrum shows two spectral peaks due to the excitation of two resonant modes of the antenna. The peaks are broadened due to the inhomogeneity of the antenna width. The simulations result from an average over seven different widths varying from 116 to 140 nm to account for width fluctuations along the wires in the experiments. The dotted line in each figure shows the corresponding normalized spectrum from a planar junction with the same voltage bias (intensity of the spectra has been multiplied by a factor of 500). All the theoretical emission spectra are corrected by the quantum efficiency of the spectrometer responsible for the decay at low energies. g, h Theoretical magnetic field distribution54 (Hy in the X–Z plane) of the two gap modes of a 132-nm-wide antenna corresponding to the lower photon energy at 1.2 eV (g) and the higher photon energy at 1.5 eV (h)
Polarization emission of the ASPEIT junction
We now move to the study of the polarization emission properties of the antenna. Figure 5 shows the far-field emission of the device when using an analyzer aligned perpendicular to the antennas (0°) or parallel to the antennas (90°). It is seen that the signal drops significantly at the emission peak showing that light emission by the antennas is polarized. The far-field emission pattern confirms that this polarization behavior is due to the plasmons. Figure 5b, c shows the back focal plane image for two orientations of the analyzer.
Polarization-dependence of the ASPEIT junction. a SPP emission spectra with two different polarized directions (0° stands for the direction, which is perpendicular to the long wire patch antenna, while 90° stands for the direction that is parallel to the wire.) b, c Back focal plane image, showing the plasmon emission between k/k0 = 1.0 and k/k0 = 1.3, with two different polarization direction of 0° and 90° (the white arrows indicate the polarization direction). d, e Simulated back focal plane images (at 1.45 eV and D = 132 nm) associated with the figure (b, c)
Emitted power and efficiency
An important figure of merit of an electrical source of surface plasmons is the emitted power. We detected 3.6 × 106 photons per second by using ASPEIT (with a voltage bias of 1.6 V). Correcting for the collection efficiency, this corresponds to 1 × 108 photons emitted per second (~23 pW, using a photon energy of 1.45 eV). It is interesting to compare with an STM tip22 (104 photons per second, 3 fW) and with a planar junction25 (4.3 × 105 photons per second, 100 fW). The increased emitted power is both due to the larger size of our source (25 µm × 45 µm) and to the electron to photon conversion efficiency enhancement.
As already mentioned, the low efficiency of the inelastic tunneling is due to the fact that the photonic local density of states in a tunnel junction is dominated by non-radiative modes. The key to increase the efficiency is therefore to design a plasmonic antenna with a resonant plasmon mode fulfilling two conditions35: (i) it provides a contribution to the local density of states larger than the non-radiative modes, (ii) its losses are dominated by radiative losses. We designed our antennas to fulfill these criteria. In order to measure the efficiency enhancement, we have compared the emission by the array of antennas with the emission of the same sample at a different position with no antennas (see Supplementary Note 9). We found an averaged efficiency of 6.0 × 10−10 photons per electron. This low efficiency is due to the low transmission of the 25 nm thick Al layer in the near infrared and also to the good quality of the surface, whose roughness is lower than 1 nm. Such a low roughness is required in order to fabricate a good plasmonic cavity. We now compare with the efficiency of light emitted by the area covered with antennas shown in Fig. 3a. The optical power is integrated over the array of antennas (dashed rectangular region in Fig. 3a) and normalized by the corresponding intensity assuming that the current density is uniform. We found 1.6 × 10−6 photons per electron indicating a 2700 efficiency enhancement compared to the planar junction. Taking into account the plasmon propagation loss, it corresponds to an electron-to-plasmon efficiency on the order of 10−5.
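A two-line consistency check of the figures quoted above (all numbers are taken directly from the text):

```python
# Photon rate to optical power, and antenna/planar efficiency ratio.
E_photon = 1.45 * 1.602e-19                          # photon energy (J)
print("power: %.0f pW" % (1e8 * E_photon * 1e12))    # ~23 pW for 1e8 photons per second
print("enhancement: %.0f" % (1.6e-6 / 6.0e-10))      # ~2700 relative to the planar junction
```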
To analyze what are the key factors playing a role in this enhancement, we have derived a theoretical model of light emission by an antenna (see Supplementary Note 5). We compute the fields emitted by a fluctuating current density in the tunneling gap. The power spectral density of the current fluctuations is given by29:
$$\left\langle {I^2} \right\rangle (\omega ){\mathrm{ = }}\frac{1}{{1 - \exp \left( { - \frac{{eV}}{{k_{\mathrm{{B}}}T}}\left( {1 - \frac{{\hbar \omega }}{{eV}}} \right)} \right)}}eI_0\left( {1 - \frac{{\hbar \omega }}{{eV}}} \right),$$
where I0 is the tunneling current, e the electron charge, ω the frequency, ћ the reduced Planck's constant, kB is Boltzmann's constant and T is temperature. To compute the emitted field, we use the reciprocity theorem32. The efficiency defined as the number of emitted photons due to plasmon leakage per electron can be cast in the form (see Supplementary Note 5):
$$\eta _{e - p}(\omega _0) = \left[ {\frac{{Z_0}}{{R_k}}} \right]\frac{{\Delta \omega }}{{\omega _0}}\frac{n}{4}\left( {\frac{t}{{\lambda _0}}} \right)^{\!\!\!2}\ \overline {\left| {K^l({\mathbf{u}},\omega _0)} \right|^2} \frac{{\left\langle {I^2} \right\rangle (\omega _0)}}{{eI_0}}\Delta \Omega ,$$
where Rk = h/e2 is the quantum of resistance, Z0 = μ0c the vacuum impedance, c light speed, t the barrier thickness, λ0 and ω0 are the central wavelength and central frequency of a resonant antenna, respectively, ΔΩ is the related solid angle, Δω is the spectral width of the emitted peak, and \(\overline {\left| {K^l({\mathbf{u}},\omega _0)} \right|}\)is the field enhancement factor (with corresponding polarization and emission direction) spatially averaged over the whole junction barrier. It is defined as the ratio of the electric field in the junction and the electric field of an incident plane wave coming from the substrate in the emission direction. The quantity \(nZ_0(t/\lambda )^2\left\langle {I^2} \right\rangle (\omega )\) is essentially the power radiated by a dipole with length t in a homogeneous medium with refractive index n. The role of the antenna appears in the spectrally dependent factor \(\overline {\left| {K^l({\mathbf{u}},\omega _0)} \right|^2}\). For an angle, polarization and frequency corresponding to the resonant excitation of a plasmon, it takes the value 5 × 10−3 for a planar junction due to the low transmission through the 25-nm-thick aluminum layer. However, it is on the order of 70 in the presence of the resonant antenna, which thus provides an enhancement of four orders of magnitude. It is seen that this factor is the key to improve the efficiency by increasing the field in the junction using the antenna.
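A back-of-the-envelope evaluation of the efficiency formula above (the relative spectral width, the collection solid angle and the averaged enhancement factor below are rough assumed values, chosen only to show the order of magnitude, not values given by the authors):

```python
# Order-of-magnitude evaluation of the electron-to-photon efficiency formula.
import numpy as np

Z0, Rk = 376.73, 25812.8        # vacuum impedance and quantum of resistance (ohm)
n, t, lam0 = 1.5, 3e-9, 850e-9  # substrate index, barrier thickness (m), wavelength (m)
K2_avg = 70.0                   # spatially averaged |K|^2 with the antenna (order of magnitude from the text)
hw_over_eV = 1.45 / 1.6         # photon energy / bias energy
kT_over_eV = 0.0259 / 1.6       # thermal energy / bias energy at 300 K

# current-fluctuation factor <I^2>/(e*I0) from the spectral density above
current_factor = (1 - hw_over_eV) / (1 - np.exp(-(1 - hw_over_eV) / kT_over_eV))

dw_over_w0 = 0.15               # assumed relative width of the antenna resonance
dOmega = 1.0                    # assumed collection solid angle (sr)

eta = (Z0 / Rk) * dw_over_w0 * (n / 4) * (t / lam0)**2 * K2_avg * current_factor * dOmega
print("eta ~ %.1e photons per electron" % eta)   # of order 1e-7, comparable to the quoted model value
```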
As shown before, the theoretical model correctly predicts the emission spectrum, the angular emission pattern and the polarization. However, the photon per electron efficiency is only qualitatively predicted. The theoretical model predicts an antenna efficiency of 1.1 × 10−7 (under a voltage of 1.6 V), which is one order of magnitude lower than the experimental value 1.6 × 10−6. We also find a difference for the antenna junction efficiency normalized by the planar junction efficiency. The experimental value is 2700 whereas the model predicts 400. We attribute this difference to the simplified model of the current fluctuation correlation function used. It has already been reported in the literature49,50 that this model underestimates the efficiency and it has been suggested that light emission could be due to hot electrons in the metal. We have computed this contribution and we found a marginal change. We thus conclude that the origin of the discrepancy is due to the simplified correlation function of the current density. In summary, the antenna appears to be a powerful tool to enhance the efficiency of LEIT, which has been known for decades to be an inefficient light emission process. Nonetheless, the experimental electron to plasmon conversion efficiency (~10−5) is still low. We now discuss how it could be improved. As shown in Supplementary Fig. 13, replacing aluminum and gold by silver allows gaining one order of magnitude in the electron to photon efficiency (1.4 × 10−5) so that the electron to plasmon efficiency could be on the order of 10−4.
In conclusion, we have reported ASPEIT. We have shown that the emission spectrum can be tuned by varying the width of the antenna. An important feature of the antenna is its ability to enhance the efficiency of the coupling between electrons and plasmons. We have demonstrated a more than three orders of magnitude enhancement of the electron to photon conversion efficiency when comparing with a planar junction. The ASPEIT operates in the 10 pW regime, four orders of magnitude above the fW regime of an STM-based junction. We have introduced a theoretical model of ASPEIT that accounts for the observed spectrum and polarization structure and is able to predict the efficiency enhancement. This model provides guidance to further improve the efficiency. Using the model, we find that a silver/alumina/silver junction could reach an efficiency of 10−4 plasmons per electron. This raises the prospect of an efficient, ultrafast and highly localized electrical surface plasmon source.
Devices fabrication
The ASPEIT tunnel junctions are fabricated on a standard coverslip (VWR, 0.15 mm thick). The bottom Al electrode is fabricated by UV photolithography via a mask aligner (MJB4). An Ebeam evaporator (MEB 550, Plassys) is employed to do the Al deposition. After the lift-off, a dry thermal oxidation process in a furnace is used to generate an ultrathin AlOx layer. The Au top electrode is fabricated by using an electron-beam lithography (Nanobeam NB4) system. All the detailed fabrication processes can be found in Supplementary Note 1.
Electrical characterization
The electrical characterization of the junctions is performed with a source meter (Agilent B2902A). The I–V measurements are carried out with the Quick IV Measurement software available from the Keysight Technologies website. In all experiments, the Au electrodes are grounded and the Al electrodes are biased. To avoid dielectric breakdown, each I–V sweep is limited to ±1.8 V. To avoid heating the junction, a dwell time of 1.5 ms is used for every voltage step. For the electroluminescence (EL) measurements, we apply a lower voltage (<1.6 V) to ensure a long measurement lifetime (longer than 10 h).
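The sweeps in this work are driven by Keysight's own software; purely as an illustration of the same protocol (±1.8 V window, 1.5 ms per step, Au grounded and Al biased), a minimal remote-control script could look as follows. The VISA address, compliance value and exact SCPI commands are assumptions and should be checked against the B2902A programming manual.

# Minimal I-V sweep sketch for a source meter (hypothetical VISA address and
# SCPI commands; verify against the Keysight B2902A programming guide).
import time
import numpy as np
import pyvisa

V_MAX = 1.8      # V; stay within +/-1.8 V to avoid dielectric breakdown
DWELL = 1.5e-3   # s; dwell time per voltage step to limit Joule heating

rm = pyvisa.ResourceManager()
smu = rm.open_resource("USB0::0x0957::0x8B18::MY00000000::INSTR")  # hypothetical address

smu.write("*RST")
smu.write(":SOUR1:FUNC:MODE VOLT")   # source a voltage on channel 1
smu.write(":SENS1:CURR:PROT 1E-3")   # 1 mA current compliance (illustrative value)
smu.write(":OUTP1 ON")

voltages = np.arange(-V_MAX, V_MAX + 0.01, 0.01)
currents = []
for v in voltages:
    smu.write(f":SOUR1:VOLT {v:.3f}")
    time.sleep(DWELL)
    currents.append(float(smu.query(":MEAS:CURR? (@1)")))

smu.write(":OUTP1 OFF")
np.savetxt("iv_curve.txt", np.column_stack([voltages, currents]))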
Optical characterization
The optical characterization of our devices is based on an inverted optical microscope (Olympus X71). An EMCCD (iXon 885) and a spectrometer (Andor Shamrock 750i) are used to capture the EL images and spectra, respectively. Detailed information is provided in Supplementary Note 2.
Numerical simulations
The far-field angular distributions and the emitted spectra are calculated with the reciprocity-theorem model described in Supplementary Note 5. We use the aperiodic Fourier modal method51 to compute the electric field in the tunnel-junction barrier numerically. The dielectric constants of Al are taken from ref. 52 and those of Au from ref. 53. The refractive indices of the glass substrate and the alumina layer are set to 1.5 and 1.76, respectively. The modes of the patch antenna in Fig. 4g, h are calculated with the method described in ref. 54. For that purpose, a vertically polarized (z-oriented) dipole source is placed at the middle of the alumina layer along the z-axis and 30 nm off-center along the x-axis. More details can be found in Supplementary Note 5.
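The reciprocity calculation itself requires the full modal solver, but its structure can be sketched: up to prefactors, the emitted spectrum is the product of the barrier-averaged field enhancement factor and the current-fluctuation spectrum. The Python sketch below is schematic only; mock_K2 is a hypothetical Lorentzian stand-in for the solver output, and the zero-temperature shot-noise form reflects the simplified correlation model discussed above.

# Schematic reciprocity-based estimate of the emitted spectrum. mock_K2() is a
# hypothetical stand-in for the aperiodic Fourier modal method used in the paper.
import numpy as np

HBAR_OVER_E = 6.582e-16  # eV*s, converts angular frequency to photon energy in eV

def mock_K2(omega, omega0=2.5e15, gamma=3e14, peak=70.0):
    # Hypothetical Lorentzian resonance standing in for the computed, barrier-
    # averaged enhancement factor |K(u, omega)|^2 of the antenna mode.
    return peak * (gamma / 2) ** 2 / ((omega - omega0) ** 2 + (gamma / 2) ** 2)

def shot_noise(omega, v_bias, i_dc):
    # Simplified zero-temperature current-fluctuation spectrum: emission is only
    # possible for photon energies below the applied bias (quantum cutoff).
    e_photon = HBAR_OVER_E * omega
    return np.where(e_photon < v_bias, i_dc * (v_bias - e_photon), 0.0)

omega = np.linspace(1.5e15, 3.5e15, 500)                    # rad/s
spectrum = mock_K2(omega) * shot_noise(omega, v_bias=1.6, i_dc=1.0)

peak_wavelength_nm = 2 * np.pi * 3e8 / omega[np.argmax(spectrum)] * 1e9
print(f"Schematic emission peak near {peak_wavelength_nm:.0f} nm")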
Data availability
The data are available from the corresponding author upon reasonable request.
Code availability
The code is available from the authors upon reasonable request.
References
Raether, H. Surface Plasmons on Smooth and Rough Surfaces and on Gratings, 4–39 (Springer, 1988).
Schuller, J. A. et al. Plasmonics for extreme light concentration and manipulation. Nat. Mater. 9, 193–204 (2010).
Maier, S. A. et al. Plasmonics—a route to nanoscale optical devices. Adv. Mater. 13, 1501–1505 (2001).
Zayats, A. V., Smolyaninov, I. I. & Maradudin, A. A. Nano-optics of surface plasmon polaritons. Phys. Rep. 408, 131–314 (2005).
Gramotnev, D. K. & Bozhevolnyi, S. I. Plasmonics beyond the diffraction limit. Nat. Photon. 4, 83–91 (2010).
Pacifici, D., Lezec, H. J. & Atwater, H. A. All-optical modulation by plasmonic excitation of CdSe quantum dots. Nat. Photon. 1, 402–406 (2007).
Mayer, K. M. & Hafner, J. H. Localized surface plasmon resonance sensors. Chem. Rev. 111, 3828–3857 (2011).
Lal, S., Link, S. & Halas, N. J. Nano-optics from sensing to waveguiding. Nat. Photon. 1, 641–648 (2007).
Ebbesen, T. W., Genet, C. & Bozhevolnyi, S. I. Surface-plasmon circuitry. Phys. Today 61, 44–50 (2008).
Homola, J., Yee, S. S. & Gauglitz, G. Surface plasmon resonance sensors. Sens. Actuators B Chem. 54, 3–15 (1999).
Haes, A. J. et al. Plasmonic materials for surface-enhanced sensing and spectroscopy. MRS Bull. 30, 368–375 (2005).
Fan, P. et al. An electrically-driven GaAs nanowire surface plasmon source. Nano Lett. 12, 4943–4947 (2012).
Koller, D. et al. Organic plasmon-emitting diode. Nat. Photon. 2, 684–687 (2008).
Huang, K. C. et al. Electrically driven subwavelength optical nanocircuits. Nat. Photon. 8, 244–249 (2014).
Neutens, P., Lagae, L., Borghs, G. & Van Dorpe, P. Electrical excitation of confined surface plasmon polaritons in metallic slot waveguides. Nano Lett. 10, 1429–1432 (2010).
Rai, P. et al. Electrical excitation of surface plasmons by an individual carbon nanotube transistor. Phys. Rev. Lett. 111, 026804 (2013).
Lambe, J. & McCarthy, S. L. Light emission from inelastic electron tunneling. Phys. Rev. Lett. 37, 923–925 (1976).
Février, P. & Gabelli, J. Tunneling time probed by quantum shot noise. Nat. Commun. 9, 4940 (2018).
Büttiker, M. & Landauer, R. Traversal time for tunneling. Phys. Rev. Lett. 49, 1739–1742 (1982).
Parzefall, M. et al. Antenna-coupled photon emission from hexagonal boron nitride tunnel junctions. Nat. Nanotechnol. 10, 1058–1063 (2015).
Berndt, R. et al. Photon emission at molecular resolution induced by a scanning tunneling microscope. Science 262, 1425–1427 (1993).
Bharadwaj, P., Bouhelier, A. & Novotny, L. Electrical excitation of surface plasmons. Phys. Rev. Lett. 106, 226802 (2011).
Wang, T., Boer-Duchemin, E., Zhang, Y., Comtet, G. & Dujardin, G. Excitation of propagating surface plasmons with a scanning tunnelling microscope. Nanotechnology 22, 175201 (2011).
Zhang, Y. et al. Edge scattering of surface plasmons excited by scanning tunneling microscopy. Opt. Express 21, 13938–13948 (2013).
Du, W., Wang, T., Chu, H.-S. & Nijhuis, C. A. Highly efficient on-chip direct electronic–plasmonic transducers. Nat. Photon. 11, 623–627 (2017).
Du, W. et al. On-chip molecular electronic plasmon sources based on self-assembled monolayer tunnel junctions. Nat. Photon. 10, 274–280 (2016).
Duke, C. B. Tunneling In Solids (Academic Press, 1969).
Grabert, H. & Devoret, M. H. Single Charge Tunneling: Coulomb Blockade Phenomena in Nanostructures, Vol. 294 (Springer Science & Business Media, 2013).
Hone, D., Mühlschlegel, B. & Scalapino, D. Theory of light emission from small particle tunnel junctions. Appl. Phys. Lett. 33, 203–204 (1978).
Johansson, P., Monreal, R. & Apell, P. Theory for light emission from a scanning tunneling microscope. Phys. Rev. B 42, 9210–9213 (1990).
Parlavecchio, O. et al. Fluctuation-dissipation relations of a tunnel junction driven by a quantum circuit. Phys. Rev. Lett. 114, 126801 (2015).
Aizpurua, J., Apell, S. P. & Berndt, R. Role of tip shape in light emission from the scanning tunneling microscope. Phys. Rev. B 62, 2065–2073 (2000).
Parzefall, M. & Novotny, L. Light at the end of the tunnel. ACS Photonics 5, 4195–4202 (2018).
Muehlschlegel, P., Eisler, H.-J., Martin, O. J., Hecht, B. & Pohl, D. Resonant optical antennas. Science 308, 1607–1609 (2005).
Bigourdan, F., Hugonin, J.-P., Marquier, F., Sauvan, C. & Greffet, J.-J. Nanoantenna for electrical generation of surface plasmon polaritons. Phys. Rev. Lett. 116, 106803 (2016).
Biagioni, P., Huang, J.-S. & Hecht, B. Nanoantennas for visible and infrared radiation. Rep. Prog. Phys. 75, 024402 (2012).
Curto, A. G. et al. Unidirectional emission of a quantum dot coupled to a nanoantenna. Science 329, 930–933 (2010).
Novotny, L. & Van Hulst, N. Antennas for light. Nat. Photon. 5, 83–90 (2011).
Yang, J., Faggiani, R. & Lalanne, P. Light emission in nanogaps: overcoming quenching. Nanoscale Horiz. 1, 11–13 (2016).
Faggiani, R., Yang, J. & Lalanne, P. Quenching, plasmonic, and radiative decays in nanogap emitting devices. ACS Photonics 2, 1739–1744 (2015).
Kern, J. et al. Electrically driven optical antennas. Nat. Photon. 9, 582–586 (2015).
Qian, H. et al. Efficient light generation from enhanced inelastic electron tunnelling. Nat. Photon. 12, 485–488 (2018).
Gurunarayanan, S. P. et al. Electrically driven unidirectional optical nanoantennas. Nano Lett. 17, 7433–7439 (2017).
Parzefall, M. et al. Light from Van der Waals quantum tunneling devices. Nat. Commun. 10, 292 (2019).
Kurokawa, Y. & Miyazaki, H. T. Metal-insulator-metal plasmon nanocavities: analysis of optical properties. Phys. Rev. B 75, 035411 (2007).
Simmons, J. G. Electric tunnel effect between dissimilar electrodes separated by a thin insulating film. J. Appl. Phys. 34, 2581–2590 (1963).
Groner, M., Elam, J., Fabreguette, F. & George, S. M. Electrical characterization of thin Al2O3 films grown by atomic layer deposition on silicon and various metal substrates. Thin Solid Films 413, 186–197 (2002).
McPeak, K. M. et al. Plasmonic films can easily be better: rules and recipes. ACS Photonics 2, 326–333 (2015).
Kirtley, J., Theis, T., Tsang, J. & DiMaria, D. Hot-electron picture of light emission from tunnel junctions. Phys. Rev. B 27, 4601–4611 (1983).
Laks, B. & Mills, D. Photon emission from slightly roughened tunnel junctions. Phys. Rev. B 20, 4962–4980 (1979).
Lalanne, P. & Silberstein, E. Fourier-modal methods applied to waveguide computational problems. Opt. Lett. 25, 1092–1094 (2000).
Zeman, E. J. & Schatz, G. C. An accurate electromagnetic theory study of surface enhancement factors for silver, gold, copper, lithium, sodium, aluminum, gallium, indium, zinc, and cadmium. J. Phys. Chem. 91, 634–643 (1987).
Olmon, R. L. et al. Optical dielectric function of gold. Phys. Rev. B 86, 235147 (2012).
Bai, Q., Perrin, M., Sauvan, C., Hugonin, J.-P. & Lalanne, P. Efficient and intuitive method for the analysis of light scattering by a resonant nanostructure. Opt. Express 21, 27371–27382 (2013).
Acknowledgements
This work was supported by a public grant from the French National Research Agency (ANR) project ANR-15-CE24-0020. The authors thank the C2N nanofabrication platform at Paris-Sud University. C.Z. acknowledges Ph.D. funding from the Safran-IOGS chair on Ultimate Photonics. J.-J.G. acknowledges the support of the Institut Universitaire de France. The authors acknowledge enlightening discussions with Julien Gabelli. C.Z. acknowledges technical guidance from Jean-Rene Coudevylle.
Laboratoire Charles Fabry, Institut d'Optique Graduate School, CNRS, Université Paris-Saclay, 91127, Palaiseau, France
Cheng Zhang, Jean-Paul Hugonin, Anne-Lise Coutrot, Christophe Sauvan & Jean-Jacques Greffet
Laboratoire Aimé Cotton, Ecole Normale Supérieure de Paris-Saclay, CNRS, Université Paris-Saclay, 91405, Orsay, France
François Marquier
Author contributions
J.-J.G. conceived the idea, developed the theoretical model and supervised the project. C.Z. developed the fabrication procedures and fabricated the devices with the help of A.-L.C. C.Z. built the experimental setup, performed the experiments and analyzed the data with help from F.M. C.Z., C.S. and J.-P.H. carried out the simulations. J.-J.G. and C.Z. co-wrote the paper, with feedback from all co-authors.
Correspondence to Jean-Jacques Greffet.
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Zhang, C., Hugonin, JP., Coutrot, AL. et al. Antenna surface plasmon emission by inelastic tunneling. Nat Commun 10, 4949 (2019). https://doi.org/10.1038/s41467-019-12866-3